My Current AI Coding Workflow

Intro

We are all still figuring this out. Every developer I talk to seems to have a slightly different ritual for working with AI coding agents. Some treat the AI like a junior intern, others like a Stack Overflow search bar on steroids.

I’ve spent the last few months refining my own process. It is certainly not the "best" workflow out there, and it might even seem a little heavy-handed to those who prefer to just fire off a prompt and hope for the best. But I’ve found that by front-loading the effort and treating the AI as a collaborator rather than just a code generator, I save myself a massive amount of debugging time later.

Here is the 16-step workflow I use to go from a vague idea to a shipped feature.

Phase 1: Alignment and Interrogation

The biggest mistake I used to make was jumping straight to code. Now, I spend a significant amount of time just talking.

1. The Spark

It starts with a feature idea. It might be messy or incomplete, but it’s there.

2. The "Ask" Mode

I open a context with a custom "Ask" persona. This isn't configured to write code; it’s configured to be a critical thinker. I bounce the idea, details, and constraints back and forth.
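For reference, the "Ask" persona boils down to a system prompt along these lines. This is an illustrative sketch, not the exact configuration, and the wording will vary by tool:

```python
# Illustrative system prompt for the "Ask" persona: a critical thinker
# that interrogates the idea instead of writing code. The exact text is
# an assumption; adapt it to whatever persona mechanism your tool offers.
ASK_PERSONA = """\
You are a critical technical discussion partner.
Do NOT write code. Your job is to interrogate the idea:
- Restate the requirements in your own words.
- Point out logical gaps, edge cases, and hidden assumptions.
- Ask clarifying questions before agreeing with anything.
"""
```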

3. The Sanity Check

I explicitly ask the AI: “Do you understand the idea and the requirements?” It’s a simple step, but it forces the model to synthesize the conversation.

4. The Reverse Interrogation

This is the most important early step. I ask the AI: “What questions do you have? Do you see any logical gaps or edge cases?” I don’t proceed until the AI has poked holes in my logic.

Phase 2: The Specification (The Contract)

Once we "feel" aligned, I don't ask it to code. I ask it to write.

5. The "Doc Writer" Mode

I switch to a different custom persona focused solely on technical documentation.

6. Drafting the Spec

I instruct the writer to create a feature specification based on our previous chat. I enforce a declarative style (describing what should happen, not just how). The spec must include:

*   The current state (and the state after changes).
*   Goals and desired outcomes.
*   Security concerns.
*   Relevant code locations.
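In practice the spec ends up following a fixed skeleton. A minimal sketch of that skeleton, assuming a Markdown spec file; the section names mirror the checklist above, and the helper itself is purely illustrative:

```python
# Section names mirror the spec checklist above; the helper is a
# hypothetical convenience, not part of any tool's API.
SPEC_SECTIONS = [
    "Current State",
    "State After Changes",
    "Goals and Desired Outcomes",
    "Security Concerns",
    "Relevant Code Locations",
]

def spec_skeleton(feature_name: str) -> str:
    """Return a Markdown skeleton for the 'Doc Writer' persona to fill in."""
    lines = [f"# Feature Spec: {feature_name}", ""]
    for section in SPEC_SECTIONS:
        lines += [f"## {section}", "", "TODO", ""]
    return "\n".join(lines)
```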

7. Fact Checking

I open a new context and ask the agent to fact-check the assumptions made in the spec using web search and documentation (e.g., Context7).

8. The Peer Review

I open a new context with a different "smart" model (often a different LLM entirely). I feed it the spec and ask it to review and improve it.

9. Human Review

I scan through the polished spec. If something feels off or unclear, I loop back to the AI to fix the text. I never move to code until the words are right.

Phase 3: The Implementation (and The Break)

Now that we have a solid "contract" of what needs to be built, the coding is actually the easiest part.

10. The Handoff

I open a new context and give the spec to a smart coding agent. I tell it: “Implement this specification.”

11. Touch Grass

This is where the workflow pays off. Because the spec is detailed, I don’t need to hover. I go read Hacker News, grab a coffee, or actually step outside. I let the machine do the heavy lifting.

Phase 4: The Loop and The Polish

12. The QA Loop

When I come back, I test the implementation. It’s rarely perfect on the first shot. I enter a feedback loop, pointing out bugs or UI issues, and the agent iterates.

13. The AI Code Review

Once it seems to work, I’m still not done. I open a fresh context with another smart AI. I feed it the original specification and the current git changes (the `diff`). I ask it to review the code specifically against the spec, looking for:

*   Logical consistency.
*   Security vulnerabilities.
*   Performance bottlenecks.
*   Project structure consistency.
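Mechanically, this review context is just the spec and the current diff stitched into one prompt. A rough sketch of how that assembly might look; the function name and prompt wording are my own, not any particular tool's API, and the diff itself would come from `git diff`:

```python
# Review criteria, matching the checklist above.
REVIEW_CHECKLIST = [
    "Logical consistency with the specification",
    "Security vulnerabilities",
    "Performance bottlenecks",
    "Project structure consistency",
]

def build_review_prompt(spec_text: str, diff_text: str) -> str:
    """Combine the spec and the current git diff into a single review prompt."""
    checklist = "\n".join(f"- {item}" for item in REVIEW_CHECKLIST)
    return (
        "Review the following code changes strictly against the specification.\n"
        f"Check for:\n{checklist}\n\n"
        f"--- SPECIFICATION ---\n{spec_text}\n\n"
        f"--- GIT DIFF ---\n{diff_text}\n"
    )

# The diff would typically be captured with something like:
#   subprocess.run(["git", "diff"], capture_output=True, text=True).stdout
```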

14. Refinement

I have the agent implement the recommended improvements from the review.

15. Final Human Review

I do one last pass of QA and code review myself.

16. Ship It

When I’m happy, I commit the code. Crucially, I also commit the specification file. It serves as documentation for the future. Then, I push and ship.

Why this works for me

Is 16 steps overkill? Maybe. I know plenty of developers who can move much faster than this.

However, I’ve learned that I am prone to getting lost in the weeds if I don't have a clear plan. This workflow essentially forces me to slow down. By treating the AI as a partner that needs clear instructions rather than a magic wand, I find I spend less time untangling confused code later on.

It’s definitely a bit process-heavy, and I’m sure I’ll tweak it again next month. But for now, this structure gives me the confidence to ship features without constantly worrying that I missed something obvious. Plus, the coffee breaks are a nice bonus.

Adnan Mujkanovic

Full Stack Overflow Developer / YAML Indentation Specialist / Yak Shaving Expert
Gotham City