The Full Stack AI #4
Practical AI for full-stack developers
Why this matters
AI is very good at generating tests.
That doesn’t mean those tests are good.
Many teams now have more tests than ever, and no more confidence that the code actually works.
This issue is about using AI to challenge assumptions, not blindly increase test count.
🧠 AI Technique of the Week
Use AI as an adversary, not an assistant
Most developers ask AI:
“Write tests for this code.”
That produces tests that:
- mirror the happy path
- restate the code's own assumptions
- pass on day one and rarely fail after
Instead, make the AI try to break your code.
The approach
Treat AI like a malicious or careless user.
Prompt I use:
You are trying to break this function.
1. Identify assumptions the code makes
2. Propose inputs that violate those assumptions
3. Suggest test cases that would expose failures
4. Focus on edge cases and misuse, not happy paths
This produces fewer tests — but much better ones.
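A minimal sketch of this adversarial stance, using a hypothetical `average` function (the function and inputs are invented for illustration). Steps 1–3 of the prompt turn an unstated assumption into tests that actually threaten the code:

```python
import math

# Hypothetical function with two unstated assumptions:
# (1) the list is non-empty, (2) every element is a finite number.
def average(nums):
    return sum(nums) / len(nums)

# The happy-path test an assistant would write: passes, proves little.
assert average([2, 4, 6]) == 4

# Adversarial test 1: violate the "non-empty" assumption.
try:
    average([])
    assert False, "expected a failure on empty input"
except ZeroDivisionError:
    pass  # the failure mode is now documented, not hidden

# Adversarial test 2: violate the "finite numbers" assumption.
# NaN propagates silently instead of raising — a bug the happy path never sees.
assert math.isnan(average([float("nan"), 1.0]))
```

Note that the adversarial tests don't just add coverage; each one pins down what the function does when an assumption breaks, which is exactly the information the happy-path test omits.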
🛠 Tool / Workflow Spotlight
AI + existing tests (gap analysis)
Instead of asking for new tests immediately, give the AI your code and the tests you already have.
Follow-up prompt:
Given this code and these tests:
- Identify gaps in coverage
- Highlight risky untested behaviour
- Suggest tests that would add confidence
This avoids redundant tests and false security.
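A sketch of what that gap analysis can surface, using a hypothetical `slugify` helper (names and behaviour invented for illustration). The existing tests both exercise the same plain-ASCII path; the analysis flags the inputs they miss:

```python
import re

# Hypothetical function under test.
def slugify(title):
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Existing tests: redundant — both walk the same plain-ASCII path.
assert slugify("Hello World") == "hello-world"
assert slugify("AI Testing") == "ai-testing"

# Gap-analysis findings turned into tests:
# risky untested behaviour 1: punctuation-only input collapses to empty string
assert slugify("  !!!  ") == ""
# risky untested behaviour 2: non-ASCII letters are silently dropped
assert slugify("Café au lait") == "caf-au-lait"
```

Whether these two behaviours are acceptable is a product decision, but now the decision is explicit in the suite rather than an accident of the regex.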
⚡ Prompt you can steal
False-confidence detector
Review these tests and identify:
- What assumptions they rely on
- What real-world scenarios they miss
- Where failures would still occur
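A minimal sketch of the false confidence this prompt catches, using a hypothetical discount function. The existing test passes, yet a failure would still occur in production:

```python
# Hypothetical function; the test below silently relies on the
# assumption that pct is between 0 and 100.
def apply_discount(price, pct):
    return price - price * pct / 100

# The only existing test: green, and it will stay green.
assert apply_discount(100, 10) == 90

# Real-world scenario the test misses: pct > 100 produces a negative
# price, the suite stays green, and the business rule still fails.
assert apply_discount(100, 150) == -50
```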
Use this before merging AI-generated tests, and before treating a green suite as proof of safety.
🔍 AI news
💼 Career insight
Developers who understand testing get trusted
Teams trust developers who can explain what their tests actually guarantee, and where those guarantees end.
AI doesn’t remove this skill — it makes it visible.
What’s next
Next issue:
Thanks for reading. New issues arrive weekly.
— The Full Stack AI