The 100-Hour Gap: What Happens After You Vibe Code a Prototype
There is a moment, about two hours into a vibe coding session, where you feel like a genius. The AI has scaffolded your app. There is a login page, a dashboard, some CRUD operations, maybe even a Stripe integration. You click around. It works. You show a friend. They are impressed. You start thinking about launch timelines.
Then you try to deploy it, and the next hundred hours begin.
The demo that launched a thousand rewrites
I have done this cycle enough times now to recognize the pattern. The prototype looks complete because it handles the happy path perfectly. The AI optimized for the thing you asked for -- a working demo -- and it delivered. But a working demo and a working product are separated by a gap that is difficult to see until you are standing in it.
The first cracks appear quickly. You deploy to a real environment and the hardcoded localhost URLs break. Environment variables are missing or wrong. The database needs migrations, except there is no migration system -- the AI just created the tables directly. You fix those things. Then real users show up.
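Fail-fast configuration is one of the cheaper fixes. Here is a minimal sketch in Python; the variable names are hypothetical stand-ins for whatever your app actually requires:

```python
import os

# Hypothetical required settings for illustration; your app's keys will differ.
REQUIRED_VARS = ["DATABASE_URL", "STRIPE_SECRET_KEY", "BASE_URL"]

def load_config(env=os.environ):
    """Validate configuration at startup instead of crashing mid-request later."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(
            "Missing required environment variables: " + ", ".join(missing)
        )
    return {name: env[name] for name in REQUIRED_VARS}
```

The point is to surface a misconfigured deploy in the first second of boot, not in the first user request that happens to touch the missing value.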
What actually breaks
Authentication is usually the first disaster. The AI built login and registration, but it did not think about what happens when a session token expires mid-request. Or what happens when someone hits the back button after logging out. Or password reset flows where the token has already been used. Or OAuth callbacks that fail silently because the redirect URI does not match production.
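A single-use, expiring reset token closes at least one of these holes. A rough sketch, assuming an in-memory store purely for illustration (a real app would persist the tokens, and hash them at rest):

```python
import secrets
import time

# In-memory store for illustration only; a real app persists and hashes these.
_reset_tokens = {}

def issue_reset_token(user_id, ttl_seconds=900):
    token = secrets.token_urlsafe(32)
    _reset_tokens[token] = {
        "user_id": user_id,
        "expires_at": time.time() + ttl_seconds,
        "used": False,
    }
    return token

def redeem_reset_token(token):
    """Reject tokens that are unknown, expired, or already used."""
    record = _reset_tokens.get(token)
    if record is None:
        return None
    if record["used"] or time.time() > record["expires_at"]:
        return None
    record["used"] = True  # single use: redeeming a second time fails
    return record["user_id"]
```

The "already used" check is exactly the case the vibe-coded version tends to skip, because the happy path never exercises it.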
Database operations come next. The queries work fine with 10 rows and one user. Add concurrent requests and you discover there are no transactions where there should be, no indexes on columns you are filtering by, and an N+1 query pattern on every list page that turns a 50ms response into a 5-second one. The AI did not add connection pooling either, so under moderate load the database starts refusing connections.
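The N+1 fix is usually a single batched query. A minimal sketch with SQLite -- the `users`/`posts` schema and the index are illustrative, not from any real app:

```python
import sqlite3

# Illustrative schema; the real app's tables will differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    CREATE INDEX idx_posts_user_id ON posts(user_id);  -- index the filter column
    INSERT INTO users VALUES (1, 'ada'), (2, 'linus');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'hello');
""")

def posts_for_users_n_plus_1(user_ids):
    # One query per user: fine with 10 rows, painful with 10,000.
    return {
        uid: [row[0] for row in conn.execute(
            "SELECT title FROM posts WHERE user_id = ?", (uid,))]
        for uid in user_ids
    }

def posts_for_users_batched(user_ids):
    # One query total, using an IN clause over all the ids at once.
    placeholders = ",".join("?" for _ in user_ids)
    rows = conn.execute(
        f"SELECT user_id, title FROM posts WHERE user_id IN ({placeholders})",
        list(user_ids),
    ).fetchall()
    grouped = {uid: [] for uid in user_ids}
    for uid, title in rows:
        grouped[uid].append(title)
    return grouped
```

Both functions return the same data; the difference is one round trip to the database versus one per row on the page.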
Error handling is almost entirely absent. Not the kind where the code throws an exception -- that part exists. I mean the kind where the user sees something useful when things go wrong. API calls to third-party services have no retry logic, no timeout configuration, and no fallback behavior. When the Stripe webhook fails, the user has been charged but the app does not know about it. When the email service is down, the registration flow hangs.
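Retry logic with backoff is a small amount of code the prototype almost never has. A hedged sketch using only the standard library; with a real HTTP client you would also set an explicit timeout on every call:

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.5, sleep=time.sleep):
    """Retry a flaky call with exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: let the caller handle or report it
            sleep(base_delay * (2 ** attempt))
```

The `sleep` parameter is injected so the backoff is testable without actually waiting; in production you would leave the default.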
API rate limits are another blind spot. The AI called an external API in a loop, once per item, because that was the simplest implementation. It works in development with 5 items. In production with 500 items, you hit the rate limit in seconds and the whole feature breaks with an unhelpful error.
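When the external API offers a batch endpoint, the fix is to group items per request instead of looping. A sketch assuming a hypothetical `fetch_batch` callable that accepts a list of items:

```python
def fetch_in_batches(items, fetch_batch, batch_size=50):
    """Call the API once per batch of items instead of once per item.

    `fetch_batch` is a hypothetical callable standing in for whatever
    batch endpoint the real API exposes.
    """
    results = []
    for i in range(0, len(items), batch_size):
        results.extend(fetch_batch(items[i:i + batch_size]))
    return results
```

With 500 items and a batch size of 50, this is 10 requests instead of 500 -- usually the difference between staying under the rate limit and tripping it in seconds.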
Mobile responsiveness usually looks right at first glance. Then you test on an actual phone and discover that a modal is unreachable because it opens behind the keyboard, the tap targets are too small, and horizontal scrolling appears on half the pages.
Verification debt
This is the term I keep coming back to. Technical debt is code you wrote and know is suboptimal. Verification debt is code you did not write and do not fully understand. It is more dangerous because you cannot estimate what you do not know.
When you vibe code a prototype, you accumulate verification debt on every line. The AI made architectural decisions you did not evaluate. It chose libraries you have not vetted. It implemented security patterns that may or may not be correct. It structured the database in a way that may or may not scale. You approved it all with a glance because it looked reasonable and it ran.
The cost of this debt comes due when something breaks and you have to debug code you have never actually read. You are reverse-engineering your own application. I have watched experienced developers spend more time understanding AI-generated code than it would have taken to write it themselves -- not because the code was bad, but because understanding someone else's choices is fundamentally harder than making your own.
When to keep it, when to rewrite
Not every vibe-coded prototype needs to be thrown away. The decision depends on what you built and what it needs to become.
Keep the prototype when the scope is genuinely small. Internal tools, personal projects, one-off scripts, proof of concepts that need to convince a stakeholder -- these can survive on vibe-coded foundations because the blast radius of failure is limited.
Plan a rewrite when the prototype will serve real users who depend on it. This does not mean starting from scratch. It means going through the codebase systematically, understanding every decision, and replacing the parts that do not hold up.
Bridge the gap by working through these layers in order. First, error handling and input validation. Second, authentication and authorization edge cases. Third, database performance -- add indexes, fix N+1 queries, add connection pooling. Fourth, deployment and environment configuration. Fifth, monitoring and alerting so you know when things break before your users tell you.
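The first layer -- validation with errors a user can actually act on -- can be as small as this. The field names and rules are illustrative, not a recommendation:

```python
import re

# Deliberately simple pattern for illustration; real email validation is looser.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_signup(form):
    """Return a dict of field -> human-readable error; empty means valid."""
    errors = {}
    email = (form.get("email") or "").strip()
    password = form.get("password") or ""
    if not EMAIL_RE.match(email):
        errors["email"] = "Enter a valid email address."
    if len(password) < 12:
        errors["password"] = "Password must be at least 12 characters."
    return errors
```

Returning field-level messages, rather than raising on the first failure, is what lets the UI show the user everything that needs fixing at once.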
The practical move is to treat the prototype as a detailed specification rather than a foundation. It shows you what the product should do. The production version is how it should do it. Sometimes those implementations look similar. Often they do not.
The real skill
Vibe coding is not going away, and it should not. Getting a working prototype in two hours instead of two weeks is a genuine superpower. But the gap between prototype and production is where software engineering actually lives. Knowing that the gap exists, and having a systematic way to close it, is what separates a demo from a product.
The best use of AI-generated code is not as a finished artifact. It is as a starting point that you understand deeply enough to own. The hundred hours after the prototype are not a failure of the tool. They are the work.