How to Make AI Coding Tools Actually Useful - Part 2
AI coding agents are only as good as the requirements you give them. Here's a practical framework for turning vibe coding into structured, reliable development.
Quick recap
In part 1, I talked about why AI coding tools produce chaotic results when you skip planning. The tools are brilliant. The problem is that we throw vague instructions at them and then act surprised when the output is just as chaotic.
This post is about the fix.
The vibe coder spectrum
Not all vibe coding is equal. There’s a spectrum, and where you sit on it determines whether AI tools help or hurt you.
Pure vibe coder. “Build me an app that does X.” No spec, no criteria, no constraints. The AI decides everything. This works for throwaway prototypes and nothing else.
Guided vibe coder. “Build me an app that does X, with these 3 user types, this data model, and these 5 core flows.” The AI has direction. Output is decent but still misses edge cases and non-obvious requirements.
Spec-driven builder. “Here are the user stories, acceptance criteria, edge cases, constraints, and what’s out of scope. Build this.” The AI produces genuinely production-quality work. You spend your time reviewing instead of rewriting.
The difference between these three is not the tool. It’s the 30 minutes of planning before you open the tool.
What AI coding tools actually need from you
I’ve been using these tools heavily, and I’ve noticed a pattern in what separates good results from bad ones. The AI needs five things from you before it can do great work:
1. Context about your system
The AI doesn’t know your codebase, your tech stack, your naming conventions, or your existing patterns. When you start a session without providing context, the AI makes assumptions. Sometimes those assumptions are fine. Often they clash with everything you’ve already built.
Before you start, tell the AI:
- What framework and language you’re using
- What patterns you follow (folder structure, naming, state management)
- What existing code this feature needs to integrate with
- What external services or APIs are involved
This isn’t overhead. It’s the same context you’d give a new developer on their first day.
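Here's what that context preamble can look like in practice. Everything in this example (the stack, the folder layout, the helper names) is hypothetical; swap in your own:

```markdown
## Session context (paste at the start of the AI session)

- Stack: Next.js 14 + TypeScript, Postgres via Prisma
- Patterns: feature folders under `src/features/`, Zustand for client state,
  kebab-case file names
- Integrates with: the existing `auth` module and the shared `sendEmail()` helper
- External services: Postmark for transactional email
```

Thirty seconds of pasting this at the top of every session saves you from the AI inventing a second state-management approach or a parallel email helper.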
2. A clear problem statement
“Build a notification system” is not a problem statement. It’s a solution. The AI needs to know why.
“Users miss important updates because they only see them when they log in. We need a way to notify users of critical events within 5 minutes, even when they’re not active in the app.”
Now the AI understands the constraint (5 minutes), the trigger (critical events, not all events), and the user state (not active). It’ll make better architectural decisions because it understands the problem.
3. Acceptance criteria for every feature
This is the single biggest lever you have. Acceptance criteria tell the AI what “done” looks like, and they tell it exactly what to test.
Bad: “Users can reset their password.”
Good:
- User receives a reset email within 60 seconds of requesting it
- Reset link expires after 24 hours
- Reset link is single-use; clicking it twice shows an error
- Password must meet minimum strength requirements (8 chars, 1 number, 1 uppercase)
- User is logged out of all sessions after resetting
- If the email doesn’t exist, show the same success message (don’t leak user existence)
The second version gives the AI everything it needs. It’ll handle the expiry logic, the single-use token, the security consideration about not leaking emails. You didn’t have to debug any of that after the fact because you thought about it before.
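To make that concrete, here's a sketch of how those criteria translate into code. It's a minimal illustration with an in-memory store and hypothetical function names, not a production implementation; a real system would persist tokens in a database and actually send the email:

```python
import hashlib
import secrets
import time

# Hypothetical in-memory token store; a real system would persist these.
TOKENS: dict = {}              # sha256(token) -> record
TOKEN_TTL = 24 * 60 * 60       # reset link expires after 24 hours

def _hash(token: str) -> str:
    return hashlib.sha256(token.encode()).hexdigest()

def meets_strength(password: str) -> bool:
    """8+ chars, at least one digit, at least one uppercase letter."""
    return (len(password) >= 8
            and any(c.isdigit() for c in password)
            and any(c.isupper() for c in password))

def issue_token(email: str) -> str:
    """Create a reset token; only its hash is stored server-side."""
    token = secrets.token_urlsafe(32)
    TOKENS[_hash(token)] = {"email": email, "issued_at": time.time(), "used": False}
    return token

def request_reset(email: str, known_emails: set) -> str:
    """Same message whether or not the email exists (no user enumeration)."""
    if email in known_emails:
        issue_token(email)  # sending the reset email would happen here, within 60s
    return "If that account exists, a reset email is on its way."

def redeem_token(token: str) -> bool:
    """Single-use, expiring link: a second click or a stale link fails."""
    record = TOKENS.get(_hash(token))
    if record is None or record["used"]:
        return False
    if time.time() - record["issued_at"] > TOKEN_TTL:
        return False
    record["used"] = True
    # Per the criteria, all existing sessions would be invalidated here.
    return True
```

Notice how each function maps back to a specific criterion: the constant success message covers the "don't leak user existence" line, and the `used` flag covers the single-use requirement. That traceability is what makes review fast.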
4. Edge cases, explicitly listed
As I covered in part 1, AI tools don’t think about edge cases on their own. You need to hand them a list.
For any feature, spend 5 minutes asking yourself:
- What happens with zero data?
- What happens with a massive amount of data?
- What happens when the user does this twice?
- What happens when two users do this at the same time?
- What happens when the network drops partway through?
- What happens when the user doesn’t have permission?
- What happens on mobile? On a slow connection? With a screen reader?
Write these down. Include them in your prompt or your spec. The AI will handle them. But it won’t think of them.
5. What’s out of scope
This one surprises people, but it’s critical. Without a scope boundary, AI tools will gold-plate. They’ll add features you didn’t ask for because they seem related. They’ll build a full notification preference system when you just wanted email alerts. They’ll add internationalisation support because “it’s best practice.”
Tell the AI what you’re NOT building. “No push notifications in v1. No notification preferences UI. English only. No batching/digest emails.”
Constraints make the AI faster and more focused. Just like they do for human developers.
A practical workflow
Here’s what actually works for me when using AI coding tools on real projects:
Step 1: Write a mini spec (15 minutes). Problem, users, core user stories, acceptance criteria, edge cases, out of scope. Doesn’t need to be pretty. A markdown file is fine. A conversation in Slack is fine. A bullet list in a notes app is fine. The format doesn’t matter. The thinking does.
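For reference, a complete mini spec can fit on one screen. This is a hypothetical example for an email-alerts feature; the point is the sections, not the contents:

```markdown
## Email alerts — mini spec

**Problem:** Users miss critical events unless they happen to be logged in.

**Users:** Account owners; workspace admins.

**Stories:**
- As an account owner, I get an email within 5 minutes of a critical event.

**Acceptance criteria:**
- Email sent within 5 minutes of the triggering event
- Only events flagged "critical" trigger an email
- Unsubscribed users receive nothing

**Edge cases:** duplicate events, user has no verified email,
event deleted before the email is sent.

**Out of scope:** push notifications, preferences UI, digest emails,
non-English copy.
```

Fifteen minutes, one file, and every later step of the workflow has something to check against.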
Step 2: Feed the spec to the AI. Start your coding session by giving the AI your spec as context. “Here’s what we’re building. Here are the requirements. Here are the edge cases. Here’s what’s out of scope.”
Step 3: Build in focused chunks. Don’t ask the AI to build the entire feature at once. Break it into stories and build one at a time. Review the output. Run the tests. Move to the next one.
Step 4: Test against your acceptance criteria. This is where the spec pays for itself ten times over. You have a checklist. Go through it. Does the feature actually meet every criterion? The AI probably got 80% right on the first pass. Fix the remaining 20% with targeted follow-ups.
Step 5: Check the edge cases. Go through your edge case list one by one. Empty state? Handled. Concurrent access? Handled. Permission boundary? Handled. This takes 10 minutes and catches the bugs that would have taken hours to find in production.
The planning is the product
Here’s the uncomfortable truth about AI coding tools: they’ve made the “writing code” part almost trivially fast. Which means the bottleneck has shifted entirely to “deciding what to write.”
Planning was always important. But when coding took days, you could sometimes get away with figuring things out as you went. The cost of a wrong decision was a day of wasted work.
Now that coding takes minutes, the cost of a wrong decision is still a day of wasted work. Because you’ll spend that day debugging, rewriting, and discovering edge cases you should have caught up front.
The teams and solo builders who are getting the most value from AI coding tools aren’t the ones who prompt the fastest. They’re the ones who plan the best.
Where this is going
I think we’re heading toward a world where “software developer” means something closer to “software planner.” The ability to define requirements clearly, think through edge cases, and specify what done looks like will matter more than the ability to write a for loop.
The developers who thrive with AI tools will be the ones who can think through a feature completely before a single line of code gets written. The ones who struggle will be the ones who skip that step and wonder why their AI-generated code keeps breaking in production.
Planning isn’t the boring part anymore. It’s the entire game.
If you’re finding that the planning step is the bottleneck, that getting requirements out of conversations and into something structured takes too long, Projan is built specifically for this. It turns your product discussions into structured specs with acceptance criteria and edge cases. The kind of specs that make AI coding tools actually work. Try it free during the beta.
Stop writing PRDs from scratch
Try Projan free for 14 days. Beta users get 50% off for life.
Start Free Trial