From Code Reviewer to Starship Captain: How I Learned to Re-Own AI-Generated Code

I started my day doing what Andrej Karpathy perfectly calls "vibe coding" - just bouncing ideas off an AI agent. After a few back-and-forth interactions, I had a pretty solid PRD for a new feature. Feeling confident, I asked the agent to implement it.

What I got back was 40 files and over 1,000 lines of code.

I stared at my screen, overwhelmed. An entire complex feature had materialized in minutes. My excitement quickly turned to dread. "What am I supposed to do with all this?" I thought. When I code myself, I gradually discover how things must work, how components interact, and where the business rules are unclear or flawed. I learn by doing, and the solution emerges from that experience. Now the solution was just there, as if by magic. And to be honest, I couldn't tell whether this big chunk of code was even doing what I needed.

If you've been experimenting with AI coding agents, you've probably hit this wall too. That moment where the promise of "lightning-fast development" starts feeling more like "endless code review purgatory."

But here's the thing - I was approaching it all wrong. And the shift in perspective that followed completely changed how I think about AI-assisted development.

🚀 The Journey Begins

Tools that speed up development aren't new. I remember using ReSharper from JetBrains more than 12 years ago - intelligent refactoring tools that made navigating and improving code so much smoother.

But a few years ago we got a real leap forward with the arrival of GitHub Copilot. Those helpful autocompletes that saved me from typing out obvious loops and boilerplate felt like having a smart pair programming partner who never got tired of the boring stuff.

Then ChatGPT, Claude, and others started making a lot of noise. And suddenly we had a term for coding with agents: vibe coding.

https://x.com/karpathy/status/1886192184808149383

I considered myself a Product Engineer long before vibe coding, because I enjoy thinking about products, not just technical solutions. Solving people's problems is better than just building complex pieces of tech.

Now, vibe coding will push all developers to do that, because the technical part isn't the complex part anymore. And what's crazy is we can do it 10x faster. Suddenly, I wasn't just getting line completions - I was having actual conversations about architecture, asking for entire functions, even whole modules. The agent would write, I would tweak, we'd iterate. It felt incredible.

🧱 The Wall

But then came that day I mentioned at the beginning. After getting those 40 files and 1,000+ lines of code, I spent the entire day trying to figure out if it was actually doing what I needed. I felt tired, overwhelmed, and worst of all - I wasn't learning anymore.

It reminded me of when I first started doing TDD. You know that feeling? You're writing tests first, the process feels clunky, and there's this voice in your head saying "Screw it, I can just write the code directly. This is taking forever." With vibe coding, I was having the same rebellious thoughts: "Screw it, I can do it by myself."

But the real issue wasn't the AI - it was my approach. I had unconsciously made a trade: I was swapping coding time for reviewing time. Instead of spending hours writing code and understanding it as I built it, I was spending hours trying to reverse-engineer what had been built for me.

The irony hit me hard. The tool that was supposed to make me more productive was making me feel less capable. I was drowning in output instead of surfing on solutions.

💡 The Breakthrough

The shift came from an unexpected source: my manager and head of architecture. During a team discussion about our AI workflows, they dropped some simple but game-changing advice: "Hey, you should ask the agent if the code is doing what it must do."

Wait. What?

"Oh, it's true," I realized. "I can ask the agent to explain how the code works, give me a flow diagram about how it's working, and check that design. I can ask questions about what's happening in specific scenarios." Instead of me trying to reverse-engineer the agent's logic, or spend endless time reading the code, the agent could walk me through its own thinking.

This completely flipped my approach. Instead of reviewing the code line by line and tracing the flow manually, I started asking the agent for abstractions of what the code was doing. "Show me a flow diagram." "Explain the step-by-step process." "What happens if this edge case occurs?" It was a Tony Stark moment, talking to Jarvis - and it felt amazing.

It was like going from checking blueprints to examining a holographic representation of the building I was constructing. I was operating at a completely different level of abstraction, understanding the system's behavior and design patterns rather than getting lost in implementation details.

⚡ The New Workflow

Once I embraced this conversational approach, my entire workflow transformed. Now when I get that mountain of AI-generated code, I don't dive into the files. Instead, I have a conversation:

  • "Give me a flow diagram of how this feature works."
  • "Create a bullet-point summary explaining the step-by-step process."
  • "What are the key components and how do they interact?"
  • "Walk me through what happens when a user does X."

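If you want to make this a repeatable habit rather than a one-off chat ritual, the same questions can be scripted against the generated diff. Here's a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment - the model name, branch names, and the edge-case question are placeholders, not a definitive implementation:

```python
# Minimal sketch: ask an agent the same questions I'd ask in chat, but scripted.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name, branch names, and edge-case question are placeholders.
import subprocess
from openai import OpenAI

client = OpenAI()

# Grab the agent-generated changes straight from git.
diff = subprocess.run(
    ["git", "diff", "main...HEAD"], capture_output=True, text=True
).stdout

questions = [
    "Give me a flow diagram (Mermaid is fine) of how this feature works.",
    "Create a bullet-point summary explaining the step-by-step process.",
    "What are the key components and how do they interact?",
    "Walk me through what happens when a user submits the form twice.",  # hypothetical edge case
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Explain code changes at the design level, not line by line.",
            },
            {"role": "user", "content": f"{question}\n\nHere is the diff:\n{diff}"},
        ],
    )
    print(f"{question}\n{response.choices[0].message.content}\n")
```

The output of a run like this is exactly the kind of material I now carry into my PR descriptions.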
But here's where it gets really interesting - this approach doesn't just help me. I started including these AI-generated summaries, flow diagrams, and test cases in my PR descriptions. Now other engineers reviewing my code have a guided tour instead of having to reverse-engineer everything themselves.

My theory is that reviewers could feed these descriptions into their own agents to get even better, more targeted reviews. I'm essentially creating documentation that other AIs could understand and build upon.

But let me be honest - PR reviews are still a challenge I haven't fully solved. Even with better documentation, reviewing AI-generated code is different from reviewing human-written code. The patterns, the scale, the approaches - everything changes when agents are involved in the development process.

How are you handling PR reviews in your AI-assisted workflows? Are you still doing line-by-line reviews, or have you found better approaches? I'm genuinely curious about what's working (or not working) for other developers.

🌟 The Realization

This entire journey led me to a fundamental realization: I'm not supposed to be a code reviewer spending ages reading code generated by agents. I'm supposed to be the starship captain.

The agent isn't stealing my job - it's giving me a starship to command. I'm not handing over the wheel; I'm operating at a completely different level of capability. Instead of manually flying through every asteroid field, I'm setting coordinates, making strategic decisions, and directing a highly capable crew.

The key insight is this: it's not about reviewing the code the agent made. It's about re-owning the code the agent made.

Re-owning means understanding the architecture through conversation, validating the approach through questions, and taking responsibility for the outcome. It means being the captain who knows where the ship is going and why, even if I'm not personally operating every system.

When I code myself, I learn by building. When I work with agents, I learn by questioning, directing, and validating. Both are valid forms of learning - they're just different levels of abstraction.

The agent won't replace developers. But developers who learn to be captains of AI-powered starships will definitely outpace those still trying to fly solo.


What's your experience with AI-assisted coding? Have you hit the "code review wall" too? I'd love to hear about your breakthrough moments in the comments.