When Product Managers Code (And Why That's Not Scary)

Vibe coding isn't about PMs becoming engineers—it's about closing the validation gap


Somewhere out there, a PM is burning their third sprint cycle getting engineering to prototype something, only to learn they’re solving the wrong problem. The feature makes perfect sense in the requirements docs. The wireframes look great. But watching users interact with it reveals they needed something entirely different.

The cost isn’t just three sprints. It’s the credibility lost with engineering, the stakeholder confidence that erodes, and the market opportunity that moved while you were validating the wrong approach.

The distance between “I think this solves the problem” and “this actually solves the problem” typically measures about 6-8 weeks and significant engineering effort. Teams optimize the validation process by writing better PRDs, creating more detailed wireframes, running more user interviews. All helpful. None of them close the fundamental gap: you don’t really understand if something works until someone builds it and real users interact with it.

The Validation Gap That Vibe Coding Addresses

Traditional product discovery follows a sequence: user research, wireframes, requirements docs, engineering estimate, prioritization, sprint planning, development, testing, user validation. This process has served us well for decades. It also creates a natural bottleneck—the time and cost of building something to test if it’s worth building.

Recent research on AI prototyping shows product teams can now move from concept to functional prototype in minutes rather than weeks. Reddit’s CPO Pali Bhat notes: “New feature definition, prototyping, and testing are all happening in parallel and faster than ever before.” This isn’t about shipping faster—it’s about learning faster.

The distinction matters. When I can test five hypotheses in an afternoon rather than one approach across a sprint, my understanding of the problem space transforms. I’m not guessing which solution might work best. I’m discovering which one actually does.

 
The Core Shift: Vibe coding with AI tools changes product discovery from sequential validation (research → design → build → test) to parallel experimentation (build three options while researching, test all simultaneously).
[Diagram: Traditional vs. AI-Accelerated Validation Cycles. Sequential: Research 1-2 weeks → Design 1-2 weeks → Build 2-3 weeks → Test 1 week, 6-8 weeks total. Parallel: 3-5 prototypes built while research continues, all tested on day 2, a winner chosen by day 3, 2-3 days total.]

What This Actually Looks Like in Practice

I spent months debating the right approach for a partner integration workflow. The stakeholders had opinions. Engineering had concerns about complexity. Users gave feedback in interviews that seemed contradictory. We were stuck in analysis paralysis, trying to design the perfect solution on paper.

Last weekend, I used Replit to build three different versions of the workflow microapp. Not production-quality code, but rough prototypes that demonstrated three fundamentally different approaches to solving the same problem. Monday morning, I put all three in front of actual users and watched which one they gravitated toward naturally.

The winner wasn’t the one I expected. It wasn’t the one engineering preferred. It wasn’t even the approach users said they wanted in interviews. But watching them interact with working prototypes revealed patterns that no amount of discussion could surface.

Total time investment: roughly 6 hours (while jammin’ to music) to build three prototypes, versus the 3-4 weeks we’d have spent debating, designing, and building one approach through traditional processes.

 
This compressed timeline reveals a trap: the urge to skip validation entirely because building feels so fast. Resist it. The point isn’t to build faster—it’s to learn which direction deserves proper engineering investment.

The Communication Bridge Nobody Talks About

A fun bonus: building rough prototypes has made me better at communicating requirements to my teams by turning the vision in my head into something tangible. When you actually try to implement something—even with AI doing the heavy lifting—you discover all the edge cases and interaction patterns you missed in the wireframes.

“What happens when a user uploads a file larger than 10MB?” becomes real when you’re building the upload flow, not theoretical when you’re drawing boxes in Figma. “How do we handle offline state?” shifts from future consideration to immediate design decision. The constraints reveal themselves through doing, not planning.
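The 10MB question stops being abstract the moment you wire up even a throwaway check. Here’s a minimal TypeScript sketch of the kind of validation a prototype forces you to write; the limit and the error copy are placeholders for illustration, not a real spec:

```typescript
// Throwaway prototype-level upload check. The 10MB cap and the messages are
// placeholders for illustration, not actual product requirements.
const MAX_UPLOAD_BYTES = 10 * 1024 * 1024;

function validateUpload(file: File): { ok: boolean; reason?: string } {
  if (!navigator.onLine) {
    // "How do we handle offline state?" surfaces the moment you wire this up.
    return { ok: false, reason: "You're offline. Uploads will resume when you reconnect." };
  }
  if (file.size > MAX_UPLOAD_BYTES) {
    // "What happens with a file larger than 10MB?" now needs an answer.
    return { ok: false, reason: "That file is larger than 10MB. Try a smaller file." };
  }
  return { ok: true };
}
```

A dozen lines of rough code, and two questions that never came up in Figma now need answers.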

This changes conversations with engineering dramatically. Instead of “here’s what I think we need,” I can say “I built this rough version, here’s where I got stuck, and here’s what users responded to.” Engineering can see the actual interaction patterns, understand the complexity I discovered, and iterate on the technical opportunities.

The prototype becomes a shared reference point. We’re debugging a working example together rather than debating interpretations of a document.

How Prototyping Changes PM-Engineering Conversations

Without Prototype

PM: "Users need to upload files."

Engineer: "What's the file size limit?"

PM: "Uh... let me check with users."

Result: 3-day delay for research, back to planning

With Prototype

PM: "I built three versions. Users hit the 10MB limit immediately."

Engineer: "Got it. 50MB limit, chunked uploads."

PM: "Here's where I got stuck with error handling..."

Result: Immediate alignment, implementation starts
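Even a rough prototype can make “chunked uploads” concrete enough that everyone pictures the same thing. A minimal TypeScript sketch, with a made-up 50MB cap, chunk size, and /api/upload-chunk endpoint standing in for whatever engineering actually builds:

```typescript
// Rough sketch of the chunked-upload idea from the conversation above.
// The 50MB cap, 5MB chunk size, and /api/upload-chunk endpoint are hypothetical.
const MAX_UPLOAD_BYTES = 50 * 1024 * 1024;
const CHUNK_BYTES = 5 * 1024 * 1024;

async function uploadInChunks(file: File): Promise<void> {
  if (file.size > MAX_UPLOAD_BYTES) {
    throw new Error("File exceeds the 50MB limit.");
  }
  for (let offset = 0; offset < file.size; offset += CHUNK_BYTES) {
    const chunk = file.slice(offset, offset + CHUNK_BYTES);
    // Each chunk uploads separately, so a flaky connection only retries one piece.
    await fetch("/api/upload-chunk", {
      method: "POST",
      headers: { "x-file-name": file.name, "x-chunk-offset": String(offset) },
      body: chunk,
    });
  }
}
```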

The Experiment Explosion

Andrew Ng observed that as writing software becomes cheaper, demand increases for “people who can decide what to build.” The skill isn’t coding—it’s judgment about what’s worth building.

This shift fundamentally changes how product teams can operate. When building a prototype required dedicated engineering time, you optimized for testing fewer, higher-confidence ideas. You did more upfront validation to reduce the risk of building the wrong thing. Prudent approach given the constraints.

Now? I can test ten approaches to a problem for roughly the same time investment as researching which one to build. The game changes. Instead of trying to predict which solution will work best, I can discover it through rapid experimentation.

Traditional vs. AI-Accelerated Validation

Traditional Approach (4-6 weeks)
• Research & wireframes: 1 week
• Engineering estimate & prioritization: 1 week
• Development & testing: 2-3 weeks
• User validation: 1 week
Result: 1 validated approach

AI-Accelerated Validation (2-3 days)
• Build 3-5 prototypes: 1 day
• User testing in parallel: 1 day
• Synthesis & direction: 1 day
• Engineering handoff with winner: hours
Result: 3-5 tested approaches, clear winner

Based on patterns observed across multiple product teams using AI prototyping tools

This isn’t theoretical. Teams using tools like Bolt, v0, and Lovable report similar compression. The constraint shifts from “how long does it take to build?” to “how quickly can we learn from what we build?”

What This Isn’t (And Why That Matters)

Let me be direct about the boundaries. This approach doesn’t replace engineers. The prototypes I build aren’t production-ready. They don’t handle edge cases properly. They probably have security issues. The error handling is minimal. Performance optimization doesn’t exist.

I wrote about these trade-offs in detail in my engineering perspective on vibe coding. Everything I said there still applies. These tools excel at getting something that works well enough to learn from. They struggle with the discipline and rigor required for production systems.

The value isn’t replacing engineering, but informing engineering investment. Instead of asking engineers to build three different approaches so we can pick one, I validate the direction first. Engineering gets a clear problem to solve, working examples of what users responded to, and discovered edge cases documented through my failed attempts.

Engineers I work with appreciate this. They’d rather polish a validated direction than debate which of three theoretical approaches might work better. The prototype accelerates shared understanding without replacing professional software development. The business should appreciate it too: engineering time is expensive, and using prototypes to validate direction before committing resources reduces wasted effort and increases the likelihood of building the right thing.

 
Some PMs worry this crosses the “stay in your lane” line. I disagree. Using prototypes to validate direction before requesting engineering resources is exactly our lane—it’s responsible stewardship of team capacity.

The Strategic Shift for Product Leaders

For senior product leaders managing multiple teams, this capability changes resource allocation math significantly. When validation cycles compress from weeks to days, you can run more experiments with the same team capacity.

This doesn’t just mean faster shipping. It means better strategic decisions. You’re not predicting which initiatives will deliver value—you’re discovering it through rapid testing. The risk profile changes. Instead of big bets on untested directions, you’re making informed investments in validated approaches.

I’ve watched teams struggle with the “innovation theater” problem—lots of activity around new features, limited actual value creation. Often this stems from optimizing for confidence in predictions rather than speed of learning. Teams research extensively, plan thoroughly, and build carefully. All good practices. They also delay learning whether they’re solving real problems.

Recent product management research shows teams using AI prototyping tools can “run considerably more experiments, raising the odds of promising ideas receiving proper consideration.” This matches what I’m seeing. When the cost of testing an idea drops from weeks of engineering time to hours of PM experimentation, you test more ideas. More experiments mean more learning. More learning leads to better products.

The constraint shifts from “which idea should we test?” to “what can we learn from testing all of them?”

The Tools That Actually Matter

The AI prototyping landscape evolved rapidly in 2024-2025. Multiple analyses compare the major players: Lovable, Bolt, v0, Replit, Claude Code, and others. Each has strengths for different use cases.

I don’t recommend specific tools—they’re evolving too quickly, and the right choice depends on your technical comfort level and what you’re trying to build. What matters more than tool selection: understanding the capability gap they address and what you need for the task at hand. In the past few weeks, I’ve used Replit to flesh out ideas and VS Code + Claude (I used Codex a bit, but the usage got spendy) to refine them.

These platforms let you describe an interface or interaction in natural language and generate functional code. The sophistication varies, but the common thread is lowering the barrier to “working prototype” from professional development to written description.

Some integrate with backend services like Supabase, Neon, or Firebase for data persistence. Others focus on front-end interfaces. A few handle full-stack applications. The diversity means you can match tool to task—simple landing page versus complex multi-step workflow.
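To make “data persistence” concrete: in a prototype it’s often little more than a few lines against a hosted backend. A rough TypeScript sketch using Supabase’s JS client, where the table, columns, and environment variable names are all hypothetical stand-ins:

```typescript
// Minimal persistence sketch for a prototype using Supabase's JS client.
// The table name, columns, and env var names are hypothetical stand-ins;
// tools that integrate with Supabase typically generate this wiring for you.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,      // assumed env vars; set however your tool manages secrets
  process.env.SUPABASE_ANON_KEY!
);

// Record which prototype variant a test user preferred.
export async function recordPreference(userId: string, variant: "A" | "B" | "C") {
  const { error } = await supabase
    .from("prototype_feedback")   // hypothetical table
    .insert({ user_id: userId, variant, noted_at: new Date().toISOString() });

  if (error) {
    console.error("Failed to save feedback:", error.message);
  }
}
```

Enough to watch preference data accumulate during Monday’s user sessions, and nothing you’d ever ship.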

I look for how quickly I can go from “I wonder if this approach works” to “let me show this to three users and watch what happens.” The faster that cycle, the more valuable the tool for product validation.

What I’m Still Figuring Out

I’ve been using these tools for roughly 8 months now and I’m still learning where the boundaries sit. Some experiments work beautifully—I build something in an hour that answers a critical question. Others hit walls fast—the complexity exceeds what the model can handle or requires stubbing out so much logic that the prototype loses fidelity.

The success rate seems related to problem complexity and how well I can articulate what I want. Simple CRUD interfaces? Usually straightforward. Complex state management or intricate business logic? Much harder. The tool amplifies clarity. If I can describe it precisely, it usually works. If I’m fuzzy about requirements, the output reflects that fuzziness. Which, of course, is a useful lesson in itself–if I can’t clearly define what I want, how can I expect users to understand it?

I’m also navigating team dynamics. Some of our engineers welcome PM prototyping: it gives them clearer direction and reduces rework. Others are concerned about PMs overstepping boundaries or underestimating implementation complexity. Both reactions are valid. Each team I work with is a bit different, and I’m still learning how to introduce prototypes in ways that help rather than create friction.

The skill isn’t just building prototypes—it’s knowing when prototyping adds value versus when traditional discovery methods work better. User interviews remain irreplaceable for understanding problems. Prototypes excel at validating solutions. The art is choosing the right tool for each situation.

 
Pro Tip: Treat prototypes as conversation starters, not final products. Use them to spark discussion, reveal assumptions, and guide engineering rather than as definitive solutions. In recent weeks, not only have engineers loved the prototypes, but so have our qual/quant research teams, who find them invaluable for user testing.

The Practical Starting Point

If you’re skeptical about this whole approach, I get it. I was too. The gap between “AI can generate code” and “this actually helps me ship better products” feels substantial.

Start small. Pick a feature or capability you’re debating internally–ideally something where the team has two or three different approaches and can’t decide which direction to pursue. Don’t tell anyone you’re experimenting. Just spend an afternoon building rough prototypes of each approach.

Show them to five users. Not a formal study. Just “hey, I made a few versions of this thing we’re considering, which one makes more sense to you?” Watch what they do more than what they say.

If the experiment clarifies direction, you’ve validated the approach for yourself. If it doesn’t, you’ve spent an afternoon instead of three sprints. Either way, you’re learning whether this capability fits your workflow.

The goal isn’t becoming a developer. It’s closing the validation gap that exists between product ideas and product reality. These tools don’t replace product craft; they accelerate the learning that makes craft valuable.

Try building one rough prototype this week. Not for shipping. For learning. See if compressing your validation cycle from weeks to hours changes how you think about product discovery. The cost is an afternoon. The potential is discovering whether your next big initiative solves the right problem before you commit engineering resources to building it.