Anthropic recently released its AI Fluency Index, analyzing 9,830 multi-turn Claude conversations to understand how people collaborate with AI. The encouraging news? Iteration correlates strongly with stronger evaluative behavior.
What the signals tell us
Their data shows that in conversations involving iteration and refinement, users:
- Were 5.6x more likely to question the model's reasoning
- Were 4x more likely to identify missing context
- Showed roughly double the number of observable fluency behaviors overall
That’s a strong signal.
People who stay in the conversation with AI — pushing, refining, clarifying — demonstrate stronger evaluative instincts than those who accept the first answer and move on.
But there’s a second pattern in Anthropic’s data that should make product leaders pause. When AI produces polished artifacts — code, documents, apps, interactive tools — users behave differently.
Compared to conversations that don’t produce artifacts, users were 3 to 5 percentage points less likely to:
- Notice when important context was missing
- Double-check whether the output was factually accurate
- Ask the AI to explain how it reached its conclusion
In other words: When the output looks complete, people are measurably less likely to question it.
Where this gets risky for product teams
Inside product organizations, polished output can quietly gain authority.
- Personas feel coherent.
- PRFAQs sound strategic.
- Roadmaps read confidently.
- Specs appear ready for execution.
The danger isn’t speed. It’s unexamined confidence. AI can compress hours of thinking into minutes. It cannot compress the responsibility of deciding well.
As product leaders, we have to examine something more consequential: how AI meaningfully shapes product decisions.
The question is: “Have we built safeguards that preserve rigor as AI becomes embedded across discovery, prioritization, and delivery?”
3 Practical Ways to Institutionalize AI Fluency
1. Make context a first-class input
AI output quality rises or falls based on the clarity of the inputs it receives. If context is fragmented, incomplete, or inconsistent, the output will be too.
Leaders can institutionalize context in three lightweight ways:
- Standardize AI-ready foundations. Every AI-driven artifact should trace back to a clear problem, persona, prioritized needs, and defined outcome. These become reusable inputs that travel across prompts and tools. (Our Groundwork AI Blueprint helps teams do this quickly and with integrity!)
- Create a context-conducive environment. Centralize research artifacts, strategy documents, usage data, and past experiment results so PMs can easily incorporate them into their AI workflows.
- Use prompt libraries that force connection. Prompts should require explicit reference to customer needs, constraints, and intended outcomes — not simply “draft X and make it look like Y.”
When context is easy to access and reuse, disciplined AI usage becomes the default rather than the exception.
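The "force connection" idea above can be sketched in a few lines of Python. Everything here is hypothetical (the PromptContext fields, the build_prompt function, the wording of the template); it is just one way a prompt library could refuse to run without the required context:

```python
from dataclasses import dataclass, fields

# Hypothetical sketch: a prompt template that will not render unless
# every required piece of context is supplied.
@dataclass
class PromptContext:
    problem: str   # the customer problem this artifact must trace back to
    persona: str   # who the artifact is for
    needs: str     # prioritized customer needs
    outcome: str   # the defined outcome the artifact should serve

def build_prompt(task: str, ctx: PromptContext) -> str:
    # Block the prompt entirely if any context field is blank,
    # rather than letting the model fill the gap with assumptions.
    missing = [f.name for f in fields(ctx) if not getattr(ctx, f.name).strip()]
    if missing:
        raise ValueError(f"Prompt blocked; missing context: {', '.join(missing)}")
    return (
        f"Task: {task}\n"
        f"Customer problem: {ctx.problem}\n"
        f"Persona: {ctx.persona}\n"
        f"Prioritized needs: {ctx.needs}\n"
        f"Intended outcome: {ctx.outcome}\n"
        "Flag any assumption you make that is not stated above."
    )
```

The design choice is the point: the template makes "draft X" impossible without the problem, persona, needs, and outcome attached, so disciplined context becomes the default path rather than extra work.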
2. Use a shared skeptic prompt for polished output
Anthropic’s data shows that evaluation drops when artifacts are created. So establish a standard “polish trigger” prompt that any PM can run against AI output. Some examples we’ve seen:
- What assumptions is this [artifact] making that were not explicitly provided?
- Where might this conflict with our stated customer problem, needs, or business strategy?
- If this were wrong, what would be the most likely reason?
These three questions apply across most product management AI use cases — from personas to roadmaps to specs. This is not about slowing teams down. It is about ensuring that fluency does not quietly replace discernment.
3. Publicly showcase best-in-class AI usage
Culture follows what gets amplified. Instead of celebrating: “AI helped us ship faster.”
Highlight: “AI helped us move faster and avoid a flawed decision.”
In internal forums, have teams share their initiatives:
- What AI accelerated
- What context they provided
- Where they challenged the output
- What they validated externally
- What decision changed as a result
For example:
“We used AI to cluster 28 interviews in 30 minutes. It surfaced five needs. Two were driven primarily by power users. We ran six additional validation calls. One need collapsed. We avoided building a feature that would have impacted less than 8% of our base.”
That narrative reinforces that speed and rigor are not opposing forces — they are strongest when combined intentionally.
The Takeaway
Iteration correlates with stronger fluency behaviors.
But polished output still reduces scrutiny.
AI capability is accelerating.
Organizational judgment must accelerate with it.
As AI becomes embedded across product work, leadership must deliberately resource, reinforce, and showcase:
- Clear problem framing
- Explicit customer evidence
- Transparent tradeoffs
- Accountable decision-making
AI reshapes how quickly teams can produce artifacts. It does not eliminate the need for disciplined thinking behind them.
Want help putting this into practice?
If you’re thinking about how to establish consistent AI practices across your product organization – not just better prompts, but better decision discipline – that’s exactly the work we’re helping teams operationalize.
Is your team experiencing challenges integrating AI into your product operating model, or struggling to establish the practices that enable the best outcomes from AI product building?
Schedule 30 minutes with us to learn a little bit about you and explore how we can help.
About Product Rebels
Product Rebels helps product leaders bring their teams from good to great. We work with established product organizations that already know the basics of product management but want to operate at a higher level. Our focus is not Product Management 101. It’s helping teams build strong customer foundations and outcome-oriented ways of working that consistently translate into better results, for customers and for the business.
We partner with leaders and teams to change how product work actually happens day to day: how customer insight is gathered and shared, how problems are framed, how tradeoffs are made, and how learning compounds over time. AI is infused throughout these practices as an accelerator, helping teams synthesize learning faster, explore more options, and move with greater confidence without sacrificing judgment or customer connection.
The result is product teams that don’t just ship more, they build the right things, make better decisions under pressure, and deliver meaningful, sustained impact.
Sources and further reading
Anthropic. “AI Fluency Index.” 2026.
https://www.anthropic.com/research/AI-fluency-index
Harvard Business Review. “How to Use AI to Augment Human Decision-Making.” 2023.
https://hbr.org/2023/07/how-to-use-ai-to-augment-human-decision-making
Ng, Andrew. “AI Is the New Electricity.” DeepLearning.AI.
https://www.deeplearning.ai/the-batch/andrew-ng-ai-is-the-new-electricity/

