TL;DR
A post on r/ChatGPTCoding went viral: "The new guy on the team rewrote the entire application using automated AI tooling. I don't even know what to say about this. What do you even say in a PR? Tbf, it worked." This post blew up because every builder has either done this or watched someone do it. Here is the full breakdown — what happened, when it's OK, when it's reckless, and how to have the conversation.
What Happened: The Reddit Story
The post showed up on r/ChatGPTCoding with 54 upvotes and a comment section that lit up immediately. The original poster described the situation in one sentence: "The new guy on the team rewrote the entire application using automated AI tooling."
Then the kicker: "I don't even know what to say about this. What do you even say in the PR? Tbf, it worked."
That last sentence — "tbf, it worked" — is what made the post explode. Because it captures something real and uncomfortable. The traditional developer response to a situation like this is supposed to be outrage. A violation of process. A serious conversation about recklessness and team trust. But then: it worked. The tests pass. The app runs. The features are all there. And now nobody knows what the right reaction is.
The comments ranged from "this is terrifying" to "honestly kind of impressive" to "this is exactly how software gets built now, get used to it." What they shared was genuine uncertainty — the feeling that the normal rules do not quite apply here, but nobody is sure what the new rules are.
The Thread That Struck a Nerve
The post hit 54 upvotes on r/ChatGPTCoding and generated dozens of comments within hours. The reaction was not uniform — people were split between horror, admiration, and exhaustion. What nobody said was "this will never happen again." Because everyone knows it will.
This post is a Rorschach test for where you stand on AI-assisted development. If your first reaction is alarm, that tells you something. If your first reaction is "I would absolutely do this," that also tells you something. And if your first reaction is "I have already done this" — you are probably a vibe coder, and you might be wondering why everyone is so upset.
Here is the honest answer: both reactions are reasonable, and neither one is the whole story. The person who did the rewrite might have made something genuinely better. They might have also introduced a dozen subtle bugs that will not surface until the worst possible moment. The fact that it worked right now does not tell you which one is true.
Why This Keeps Happening
Five years ago, rewriting an entire application was a months-long project. It required a dedicated team, a detailed migration plan, a feature freeze, extensive testing, and a rollout strategy. The cost of a full rewrite was so high that most teams chose to live with bad code rather than pay it.
Today, with the right AI tools, a full rewrite can happen in hours. A single developer — including a new one who has been on the team for three weeks — can feed a codebase into an AI coding agent, describe what the new version should do, and watch it produce something that passes the test suite before lunch.
The tooling made this technically possible before anyone had time to develop norms around when it is appropriate. The new guy who rewrote the application was not necessarily reckless or arrogant. They may have simply been using the tools the way they have always used them — aggressively, efficiently, with AI as a full partner rather than a spell-checker. In their mental model, a full rewrite is just a Tuesday. Nobody told them it was supposed to be a six-month project that requires sign-off from three VPs.
The Speed Gap
The mismatch is not about intent — it is about speed. AI tools collapsed the cost of a full rewrite from months to hours. Team norms, review processes, and institutional trust have not collapsed at the same rate. That gap is where the tension lives.
This is a structural problem, not a personnel one. Software teams built their collaboration norms around a world where certain things were expensive and therefore rare. Writing tests takes time, so you write them for the important stuff. Refactoring is risky, so you do it incrementally. Rewriting is terrifying, so you basically never do it. AI tooling is demolishing those cost structures, and teams have not updated the norms to match.
The result is a growing category of technically competent developers — many of them vibe coders — who are doing things that look impossible by the old standards. And teams that have no established way to evaluate whether those things are good ideas or time bombs.
This is going to keep happening. The question is whether teams figure out how to handle it before it blows up on them.
When a Full Rewrite Makes Sense
Let's be clear: there are real situations where an AI-assisted full rewrite is not just acceptable — it is the right call. The problem is not the rewrite itself. The problem is doing it unilaterally, without the team's knowledge, on a shared codebase that other people depend on.
The codebase is genuinely beyond saving
Some codebases are not worth maintaining. They were written in a framework no one uses anymore, by developers who are long gone, with zero documentation and no tests. The business logic is encoded in 2,000-line functions that mix database calls, UI rendering, and email sending all in the same block. Every change is a minefield. In situations like this, a clean rewrite — with proper architecture and tests — can genuinely be the more conservative option compared to continuing to patch something that was never designed to be maintained.
The project is still small and pre-production
If the codebase is small, there are no real users yet, and you are the only developer, a full AI rewrite is essentially zero-risk. This is exactly the use case AI coding tools were designed for. You have nothing to lose and potentially a much cleaner foundation to gain. Do it. The time to establish norms is before there are stakes, not after.
The whole team agreed to it
Some teams, when faced with a truly terrible legacy codebase, make a deliberate decision to do a full rewrite as a team initiative. Everyone knows it is happening. There is a plan. The output gets reviewed with that context in mind. This is fundamentally different from the Reddit story — not because the technical approach is different, but because the institutional trust and shared understanding make the outcome reviewable in a way that a unilateral surprise PR cannot be.
You are migrating between technologies
Moving from one framework to another, or from one language to another, often requires changes so pervasive that it is effectively a rewrite anyway. If the existing codebase is written in a technology you are abandoning, AI tooling that can accelerate that migration is not reckless — it is efficient. The key is being transparent about what you are doing and why.
The Pattern That Makes It OK
The common thread in situations where a full AI rewrite is appropriate: the team knows what is happening. Whether it is an explicit decision, a pre-production project, or a solo endeavor — the defining feature of an acceptable rewrite is that the people who need to understand the code can understand the context. Surprise is the problem, not the tooling.
When It's Reckless
Here is where the Reddit story likely lands — in the reckless category. Not because the developer was malicious or incompetent, but because of the specific risks that a surprise AI rewrite creates on a shared, production codebase.
When business logic lives in the code nobody documented
Every production application has logic that exists because of something that happened three years ago. A bug that only manifests on leap years. A discount calculation that was adjusted for one specific customer and then left in. An authentication edge case that was added because of a security incident that no one wrote up. This institutional knowledge is not in the README. It is not in the tests — there are no tests for it. It lives in the code itself, encoded in a weirdly specific conditional that no one would write intentionally unless they already knew why it needed to be there.
When an AI rewrites the application, it does not know about any of that. It sees code that looks overly complicated and simplifies it. The new version is cleaner. It also silently drops the leap-year fix, adjusts the discount calculation to something more logical, and removes the authentication check that was handling a specific edge case. The app works. The tests pass. And three months later, in production, something quietly breaks in a way that is very hard to trace back to the rewrite.
When security and auth logic gets touched
Authentication, authorization, and any code that touches user data or financial transactions are where the stakes are highest. These are also the areas where AI is most likely to produce something that looks correct but has subtle flaws — because the security properties of these systems depend on very specific implementation details that may not be obvious from the surrounding code.
An AI rewrite that changes how sessions are validated, how permissions are checked, or how payment amounts are calculated is not something you review by running the happy path and calling it good. It is something that requires deliberate, expert security review. The "it works" test does not catch security vulnerabilities. It catches obviously broken functionality. Those are not the same thing.
When the team has to maintain it going forward
Code is not just instructions for a computer. It is communication to the next developer who has to work on it. When a single developer rewrites an entire application using AI tooling, the resulting code reflects the AI's patterns — not the team's conventions, not the domain knowledge of the people who wrote the original, not the architectural decisions that were made for reasons the new code does not explain.
The team now has to work in a codebase they did not write, do not understand the history of, and have no context for. That is not a minor inconvenience. That is a long-term maintenance burden that compounds every time someone has to change something and has no idea why it was written the way it was.
The Worst-Case Scenario
The scenario nobody talks about: the AI rewrite goes to production, works fine for months, and then fails in a way that is almost impossible to debug because nobody on the team understands the new codebase well enough to reason about it under pressure. The original developers' mental model no longer maps to the code. The new code's structure does not match anyone's intuition. At 2am, during an outage, this is where "it worked" becomes very expensive.
When there are no tests to verify behavior
A rewrite with a comprehensive test suite that passes is at least somewhat verifiable. A rewrite with no tests is pure faith. You are trusting that the AI correctly inferred all the intended behavior from the existing code, implemented it correctly in the new code, and did not introduce any regressions — all without any automated way to check. That is not a reasonable bet to make with a codebase other people depend on.
The Code Review Problem
The original poster's question — "What do you even say in the PR?" — is the right question. And there is no easy answer, because a PR that rewrites an entire application breaks the fundamental premise of code review.
Code review works by having a second set of eyes verify that a change does what it says it does, does not break anything, and follows team conventions. That works when the change is localized. When a PR changes 10,000 lines across every file in the codebase, the normal review process becomes theater. Nobody is actually reviewing that. They are either approving it based on the fact that it works, or they are blocking it out of general alarm — neither of which is a technical review of the change.
What you can actually review
If you are faced with a full-rewrite PR, here is what a realistic review looks like. You are not reviewing the code diff. You are reviewing the behavior of the system.
Run every test that exists. If there are no tests, that is your first finding. Write tests for the critical paths before you approve anything. Manually verify every user-facing feature by walking through it in a staging environment. Make a list of the ten most business-critical things the application does, and verify each one specifically — not just "does it work" but "does it work the same way."
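One concrete way to do the "does it work the same way" check is with characterization tests: pin the current behavior, odd edge cases included, so the rewrite can be run against it. Below is a minimal Python sketch; `calculate_discount` and its silver-tier rule are hypothetical stand-ins, not anything from the original post.

```python
# Characterization-test sketch: before approving a rewrite, pin down the
# CURRENT behavior of a business-critical function so the new code can be
# checked against it. The function below is a hypothetical legacy example.

def calculate_discount(subtotal: float, customer_tier: str) -> float:
    """Stand-in for the legacy function whose behavior we want to pin."""
    rate = {"gold": 0.15, "silver": 0.10}.get(customer_tier, 0.0)
    # A weirdly specific rule left in for one customer years ago, exactly
    # the kind of thing an AI rewrite "simplifies" away:
    if customer_tier == "silver" and subtotal > 1000:
        rate = 0.12
    return round(subtotal * (1 - rate), 2)

def test_discount_characterization():
    # Assert what the code DOES today, including the odd edge cases,
    # so a rewrite that drops them fails loudly instead of silently.
    assert calculate_discount(100.0, "gold") == 85.0
    assert calculate_discount(100.0, "silver") == 90.0
    assert calculate_discount(2000.0, "silver") == 1760.0  # undocumented rule
    assert calculate_discount(50.0, "unknown") == 50.0     # no discount
```

Run these tests against the old code first to confirm they describe reality, then run the same suite against the rewrite.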
Look specifically at authentication and authorization. Who can log in? What can they do? What are they prevented from doing? Can you escalate privileges in a way you should not be able to? These are the questions where AI rewrites most commonly introduce subtle bugs that pass functional testing.
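Those auth questions can be turned into a table-driven check that runs against both the old and the new build. This is a hedged, self-contained Python sketch; the roles, actions, and `can` function are illustrative assumptions, not the app from the post.

```python
# Table-driven authorization check, sketched against a toy in-memory
# permission model. In a real review you would drive the same cases
# through the application itself (old build and rewrite).

ALLOWED = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def can(role: str, action: str) -> bool:
    # Unknown roles get no permissions at all.
    return action in ALLOWED.get(role, set())

# The denial cases matter most: "it works" testing exercises what users
# CAN do, while privilege-escalation bugs hide in what they should not.
CASES = [
    ("admin", "delete", True),
    ("editor", "delete", False),    # must stay forbidden after the rewrite
    ("viewer", "write", False),
    ("anonymous", "read", False),
]

def test_authorization_matrix():
    for role, action, expected in CASES:
        assert can(role, action) == expected, (role, action)
```

If the rewrite passes the same matrix the old code did, you have at least verified that the permission surface did not silently widen.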
Look at how environment variables, secrets, and configuration are handled. AI-generated code has a tendency to hardcode things that should not be hardcoded, or to change the names of environment variables in ways that will cause silent failures in production environments that are not identical to development.
The Reviewer's Checklist for a Full Rewrite PR
1. Does the full test suite pass?
2. Have you manually verified every major user flow?
3. Is authentication and authorization behavior identical?
4. Are secrets and environment variables handled correctly?
5. Is any business-critical logic different from before?
6. Does the team understand the new codebase well enough to maintain it?

If you cannot answer all six confidently, the PR should not be merged.
The dependency problem
AI rewrites often change the dependency tree — adding, removing, or upgrading packages as part of the process. Each of those changes is its own mini-review. A package that was pinned to a specific version for a specific reason is now upgraded. A dependency that was deliberately excluded because of a licensing issue is now included. A security vulnerability in an old version of a package is now fixed — or a new one is introduced in the replacement.
If you are reviewing a full-rewrite PR, the dependency changes deserve as much attention as the code changes. Possibly more, because dependency vulnerabilities are where many real-world exploits start.
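A first pass over the dependency changes can be scripted. The Python sketch below diffs two pip-style pinned requirements files and reports what was added, removed, or re-pinned; the file format assumption (`name==version` lines) is mine, and the paths would be whatever your repo actually uses.

```python
# Sketch: summarize dependency changes between two pinned requirements
# files (assumed pip-style "name==version" lines, comments allowed).

def parse_pins(path: str) -> dict[str, str]:
    """Read a requirements file into {package_name: version}."""
    pins: dict[str, str] = {}
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()  # drop comments/blank lines
            if "==" in line:
                name, version = line.split("==", 1)
                pins[name.strip().lower()] = version.strip()
    return pins

def diff_pins(old: dict[str, str], new: dict[str, str]) -> dict[str, list]:
    """Report packages added, removed, or re-pinned by the rewrite."""
    return {
        "added":   sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(k for k in set(old) & set(new) if old[k] != new[k]),
    }
```

Every name in the "added" and "changed" lists deserves the same question: was this deliberate, and is the new version one you would have chosen yourself?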
How to Talk About AI Rewrites at Work
The most productive conversations about this are not about whether AI tools should be used. They are about what the norms should be for how they are used on a shared codebase. Here is a framework for having that conversation without it turning into a generational fight about whether AI is going to replace everyone.
Lead with the shared problem, not the rule violation
Nobody responds well to being told they broke a rule they did not know existed. Especially if they are a newer team member who has been building with AI tools for years and genuinely does not understand why this was a big deal. Start from the shared problem: "We have a codebase we all need to be able to maintain and understand. When something changes in a way nobody except the person who made the change can understand, that's a problem for all of us." That is not a judgment. It is a statement of fact that everyone can agree with.
Separate the quality of the work from the process
If the rewrite is genuinely better — cleaner architecture, better test coverage, less technical debt — acknowledge that directly. "This looks like an improvement in a lot of ways, and that matters. The conversation we need to have is about the process, not about whether AI tools should be used." Conflating "the process was wrong" with "the work was bad" puts the developer on the defensive and prevents an honest conversation about what the norms should be going forward.
Establish what "AI-assisted" means on this team
Most teams do not have explicit policies about AI tool use, which means every person is making their own judgment calls. This is a good forcing function to create a shared understanding. Not "are AI tools allowed" — they obviously are — but "what kinds of AI-generated changes require discussion before implementation?" A one-line AI suggestion is different from a 100-line AI-generated function is different from an AI-rewritten entire module is different from an AI-rewritten entire application. Where on that spectrum does the process change?
Write it down
Whatever you agree to, document it. Not in a formal policy that nobody reads, but in the README or the contribution guide or wherever your team actually keeps agreements about how you work together. "We use AI tools extensively, and changes over [X] lines that were AI-generated should be flagged in the PR description so reviewers can apply appropriate scrutiny." That is a one-sentence norm that would have changed the dynamic of the Reddit story entirely.
The Norm That Would Have Changed Everything
If the developer had simply written "FYI — I used AI tooling to do a full rewrite of the application. Here is what changed architecturally and why I think it is an improvement. The test suite passes and I have manually verified [these specific flows]. Please review with that context" — the conversation would have been completely different. Transparency is not just courtesy. It is information that reviewers need to do their job.
What This Means for Vibe Coders
If you are a vibe coder — someone who uses AI as a primary coding partner rather than a spell-checker — this story probably hits differently than it does for someone who came up writing every line by hand.
You might be thinking: "What's the big deal? The app works. Isn't that the point?" And in a certain light, you are not wrong. The end user does not care how the code was written. The tests do not care. The product does not care. If the software does what it is supposed to do, the means of production are secondary.
But here is what vibe coders sometimes miss: on a team, the codebase is not just software. It is shared institutional knowledge. It is the accumulated decisions of everyone who has ever worked on it. When you rewrite it unilaterally — even if the result is objectively better — you are not just changing code. You are erasing that institutional history without anyone's permission.
The collaboration gap
Many vibe coders built their skills working solo — on side projects, personal tools, freelance work. In that context, moving fast and rewriting aggressively is exactly the right approach. You have nothing to protect except your own time, and AI tooling can save enormous amounts of it. The mental model that develops is: when something is not working, rewrite it. When a codebase is messy, clean it up. When a better architecture exists, build it.
That instinct is good. It becomes a problem when it gets applied to codebases that are not yours alone. The skills that made you effective as a solo builder — moving fast, embracing rewrites, using AI aggressively — are the same skills that can blow up team relationships if they are not paired with a clear understanding of when to loop people in.
AI is not a reason to skip communication
One of the subtle ways AI tools can make you worse at collaboration: they remove friction. Without AI, the cost of a full rewrite was so high that you were forced to have conversations, build consensus, plan carefully. The friction was annoying, but it also ensured that major changes were communicated and agreed upon.
When AI removes the technical friction, it does not automatically create a substitute for those conversations. You have to build that habit deliberately. Before you use AI to make a change that affects shared code — especially a sweeping one — ask yourself: "Do the people who work on this codebase know what I am about to do? Would they want to?" If the answer is uncertain, have the conversation first. Do the rewrite second.
The Vibe Coder's Credibility Test
Every time you work on a team codebase, you are building or spending credibility. A surprise rewrite that works might earn you a reputation as someone who moves fast and produces results. It might also earn you a reputation as someone who cannot be trusted with shared systems. The difference between those two outcomes is almost entirely about communication, not about the quality of the work.
Protecting Yourself: What to Do If You Are Either Person in This Story
Maybe you are the developer who did the rewrite and is now fielding uncomfortable questions. Maybe you are the teammate who opened that PR and is trying to figure out how to respond. Either way, here is a practical guide.
If you are the developer who did the rewrite
First: do not get defensive. The discomfort your team is experiencing is real, even if the work product is good. You changed something significant in a shared environment without warning, and people need time to process that.
Come prepared with evidence. Which tests pass? Which user flows have you manually verified? What specifically changed architecturally, and why? What did you find wrong with the original codebase that made you want to rewrite it? The more concrete you can be, the more you shift the conversation from "why did you do this" to "what do we need to verify before this goes to production." That is a productive conversation.
Be willing to slow down. If the team needs time to review the rewrite carefully, give them that time. Do not push to merge quickly. The work will be more trusted — and you will be more trusted — if you are visibly committed to making sure it is correct rather than just trying to ship it.
And going forward: have the conversation before the rewrite, not after. Flag that you are thinking about a major structural change, explain your reasoning, get some alignment. It does not have to be a lengthy approval process. It can be a Slack message: "The auth module is really messy. I am thinking about doing a full rewrite using AI tooling this week — anyone have strong feelings about the current implementation I should know about before I start?" That single message changes everything.
If you are the teammate reviewing the PR
First: do not approve it out of inertia because it works. "It works" is a necessary condition for merging code, not a sufficient one. You cannot review this PR the way you review a normal PR, and you should be honest about that rather than pretending you did a real review when you did not.
Request what you need. Ask for explicit documentation of what changed. Ask for evidence of which flows were manually verified. Ask about the dependency changes. If there are areas of the application you are specifically concerned about — anything touching security, payment processing, data integrity — say so explicitly and ask for those areas to be reviewed more carefully.
Use this as the forcing function for a team conversation. This situation is not going to become less common. Now is the best possible time to establish shared norms around AI tool use, because the specific incident gives everyone a concrete example to reason from. "Given what just happened, what should our process be going forward?" is a question the team can answer together. Do not waste the opportunity by just approving the PR and moving on.
The "It Works" Trap
"It works" is the beginning of a review, not the end. Code that passes tests and looks correct in development has a long history of failing in production in ways that are expensive to debug and fix. The fact that the rewrite is functional right now tells you it is not obviously broken. It does not tell you whether it is correct, secure, maintainable, or free of the subtle behavioral changes that will surface in six months. Approve with full awareness of what you have and have not verified.
FAQ
Is rewriting an entire application with AI always reckless?
It depends entirely on context. A full AI-assisted rewrite can be appropriate when the existing codebase is genuinely beyond saving, when the project is still small and pre-production, or when the entire team is aligned and has reviewed the output carefully. It becomes reckless when done unilaterally on a shared codebase, when the output is not reviewed, or when critical business logic or security requirements may have been silently changed.
How do you review a PR that rewrites an entire application?
You cannot review a full rewrite the same way you review a normal PR. The only realistic approach is to review by behavior rather than by line: run the test suite, verify every user-facing feature works identically, check that environment variables and secrets handling is correct, confirm that authentication and authorization logic is unchanged, and look specifically at anything involving money, data writes, or security. A diff-by-diff review of thousands of changed lines is not feasible or meaningful.
What should you do if a teammate submits a surprise AI rewrite?
First, don't panic and don't immediately approve. Ask for a side-by-side comparison of behavior, not just code. Require proof that the test suite passes and that critical paths have been manually verified. Have a direct conversation about process — not to punish the developer, but to establish shared expectations around how AI tooling gets used on shared codebases going forward. This is a workflow conversation, not a disciplinary one.
Why do surprise AI rewrites keep happening?
Because with modern AI coding tools, a full rewrite is no longer a months-long project — it can happen in hours or days. The tooling made it technically possible before most teams developed norms around when it is appropriate. Developers who have internalized AI-native workflows sometimes forget that their teammates have not, and that a codebase is shared institutional knowledge that belongs to the whole team, not just whoever touched it last.
Should developers disclose when they use AI tools for major changes?
Yes — especially when working on a shared codebase. Being transparent about AI tool use is not about asking permission to use the tools. It is about giving your team the context they need to review your work appropriately. AI-generated code has different failure modes than hand-written code. Reviewers need to know so they can look for the right things. Transparency also builds trust and prevents the kind of shock that happens when someone opens a 10,000-line diff with no explanation.
What are the biggest risks of an AI rewrite that appears to work?
The biggest risks are not in obvious failures — they are in subtle changes that look correct but are not. Business logic encoded in obscure ways, edge cases the original developer handled deliberately, security checks tied to specific implementation patterns, data migrations that assume old schema structure. Code that "works" in testing can fail in production when it hits real users, edge cases, and load. A working rewrite is the beginning of review, not the end.
Does an AI rewrite reduce technical debt or create it?
It can go either way. A well-reviewed AI rewrite that improves architecture, reduces complexity, and has solid test coverage can dramatically reduce technical debt. A hastily done rewrite that no one understands, has no tests, and silently changed behavior is technical debt of the worst kind — invisible and plausible-looking. The key question is: does the team understand what changed and why, and can they maintain the new codebase going forward?
What to Learn Next
If this piece raised more questions than it answered, that is probably the right reaction. The territory here is genuinely new, and the norms are still being written. These four articles go deeper on the specific pieces of this story that matter most.