Job titles are dead. Accountability isn’t.

Jenny Wen said something on a recent episode of Lenny’s Podcast that I can’t stop chewing on.

Jenny is the head of design for Claude at Anthropic. Before that, she was Director of Design at Figma, leading the teams behind FigJam and Slides. She’s worked at Dropbox, Square, Shopify. She’s not some hot-take artist on LinkedIn. She’s someone who has done the work across multiple eras of product development and is now doing it at the company building one of the most significant AI products in the world.

Her message: the classic design process is dead.

Not dying. Dead.

And here’s the part that matters for product leaders: the thing that killed it wasn’t design getting worse. It was engineering getting faster. When engineers can spin up multiple AI agents and ship working code in hours, the old sequence of discover, diverge, converge, mock, hand off, review, iterate… that cadence doesn’t hold anymore. You can’t block an engineering team that moves at agent speed with a six-week design cycle. They’ll just build around you.

Now, Wen is careful to clarify that the work itself isn’t gone. Designers still prototype. They still mock things up. But the proportions have shifted dramatically. What used to be 60 to 70 percent mocking and prototyping is now 30 to 40 percent. The rest? Pairing directly with engineers, polishing implementations in code, consulting on features as they’re being built. The job didn’t disappear. It relocated.

If you’re a product leader reading that and thinking “interesting design story,” I need you to zoom out. This is a product leadership story. And it’s about to be your story if it isn’t already.

Role blur is here. Pretending it isn’t is the risk.

What Wen describes isn’t just happening in design. It’s happening everywhere AI is accelerating execution. Designers are writing code. Engineers are making product decisions in real time. PMs are prototyping. The swim lanes we spent years defining are dissolving, not because anyone decided to dissolve them, but because speed made them irrelevant.

This is where a lot of product leaders get nervous. And I get it. If everyone can do a bit of everything, who owns what? If an engineer ships a feature with seven AI agents before the PM even finishes writing the brief, what exactly is the PM’s role?

Here’s what I think: role blur isn’t the problem. Role blur without accountability is the problem.

When the boundaries between functions get fuzzy, the thing that holds everything together isn’t process. It’s clarity about who is accountable for what. Who decides what gets built? Who decides what gets shipped? Who is on the hook when it doesn’t work? Those questions don’t go away when speed increases. They get more urgent.

Product leaders who try to fight role blur by reinforcing old swim lanes are going to lose. The ones who redefine accountability around decision rights and shared quality standards are going to thrive. That’s a different operating model. It requires more trust, more clarity, and more explicit conversation about who owns which decisions.

You can’t mock up a non-deterministic product

Wen makes a point about AI products that I think every PM needs to internalize: you cannot design for all the states of a non-deterministic system in advance. You can’t build a clickable prototype of a product powered by language models and call it validated. The outputs vary. The user inputs vary. The edge cases are effectively infinite.

That’s a fundamental shift in how discovery works. For most of our careers, the PM playbook has been: research, define requirements, spec it out, get alignment, hand it off, build, ship. Even in more modern discovery-driven organizations, the assumption has been that you can learn enough before building to have high confidence in what you’re building.

With AI products, that assumption breaks. Not partially. Fully.

Wen’s teams at Anthropic have to use the actual models underneath their product and watch real people try real use cases, because use cases get discovered through usage, not through research decks. That doesn’t mean research is worthless. It means research alone isn’t sufficient. Discovery and delivery aren’t sequential anymore. They’re concurrent.

For PMs, this means your discovery practice has to evolve. Less pre-spec’d certainty, more rapid iteration on real usage. Less “here’s the PRD, go build it,” more “here’s the problem and constraints, let’s learn by shipping something small and watching what happens.” That feels uncomfortable if you’ve built your career on being the person who knows the answer before anyone writes a line of code. But that version of the job is shrinking fast.

Trust is built through speed, not perfection

There’s one more thing from Wen’s conversation that I think product leaders need to hear, because it challenges a deeply held belief in a lot of product organizations.

She describes how Anthropic builds trust with users by shipping early, labeling things clearly (like “research preview”), and then visibly improving based on feedback. The trust doesn’t come from getting it right the first time. It comes from demonstrating responsiveness. Users see something ship. They see it get better. They feel heard. That builds confidence.

Where trust breaks down, Wen says, is when you ship something early and then nothing happens. When you put something out into the world and then move on to the next thing. That’s what degrades a brand.

This reframes a conversation I have constantly with product leaders. The fear is always: “If we ship something imperfect, we’ll lose trust.” But the actual trust killer isn’t imperfection. It’s silence after imperfection. Ship, ignore, repeat. That’s the pattern that erodes confidence.

For PMs, this means the feedback-to-shipping loop isn’t a nice-to-have. It’s the mechanism through which trust is built or lost. If your team ships fast but doesn’t close the loop, speed actually works against you. The faster you ship without responding to what you learn, the faster trust erodes.

So what do you actually do with this?

If process is collapsing under speed, PM leverage moves to a higher layer. Here’s where I’d start:

Lock in your inputs before you let the machine run. Role blur is fine. Speed is fine. But only if the team is aligned on the problem being solved, who they’re solving it for, and what success looks like. If those inputs are sloppy, speed just gets you to the wrong place faster. This is where Groundwork fundamentals like a clear problem statement and real customer understanding do their heaviest lifting. They’re the guardrails that keep speed from becoming chaos.

Set explicit decision rights. Who decides what gets built? Who decides what ships? Who calls it when something isn’t working? In a world where anyone on the team can spin up agents and build something in an afternoon, these questions need real answers, not org chart assumptions. Get specific. Write it down. Revisit it regularly.

Build validation into your operating rhythm, not just your launch plan. If you can’t mock all the states of a non-deterministic product, you need a different approach to knowing whether it’s working. That means shipping smaller, watching real usage, and making iteration a structural commitment, not a thing you do if there’s time. Your definition of done isn’t “we shipped it.” It’s “we shipped it, watched what happened, and responded.”

The takeaway

The old model of clearly defined roles, sequential handoffs, and pre-validated specs worked when execution was the bottleneck. Execution isn’t the bottleneck anymore.

What matters now is the stuff that was always supposed to matter but often got buried under process: clarity about the problem, accountability for decisions, and the discipline to learn from what you ship.

Job titles will keep changing. Boundaries between functions will keep blurring. The teams that win won’t be the ones who fight that shift. They’ll be the ones who anchor to the things that don’t change: who are we building for, what problem are we solving, and who is accountable for the outcome.

This post draws on the Lenny’s Podcast episode featuring Jenny Wen, head of design for Claude at Anthropic, published March 1, 2026.

Want help putting this into practice?

At Product Rebels, we work with product leaders and teams navigating exactly this shift: using AI to accelerate delivery without losing product discipline or customer trust.

Is your team struggling to integrate AI into its product operating model, or to establish the practices that get the best outcomes from AI product building?

Schedule 30 minutes with us so we can learn a bit about you and explore how we can help.

About Product Rebels

Product Rebels helps product leaders bring their teams from good to great. We work with established product organizations that already know the basics of product management but want to operate at a higher level. Our focus is not Product Management 101. It’s helping teams build strong customer foundations and outcome-oriented ways of working that consistently translate into better results for customers and for the business.

We partner with leaders and teams to change how product work actually happens day to day: how customer insight is gathered and shared, how problems are framed, how tradeoffs are made, and how learning compounds over time. AI is infused throughout these practices as an accelerator, helping teams synthesize learning faster, explore more options, and move with greater confidence without sacrificing judgment or customer connection.

The result is product teams that don’t just ship more. They build the right things, make better decisions under pressure, and deliver meaningful, sustained impact.

Ready To Transform Your Product Team?