Claude Code lies to me. Constantly.
And it reminds me of raising my kids.
Last week, I caught Claude Code red-handed doing exactly what my kids used to do when asked if they’d cleaned their room. “Yes, Dad, it’s totally clean!” Translation: I shoved everything under the bed and sprayed some Febreze.
Claude Code does the same thing. It tells me what I want to hear, then hopes I won’t look too closely.
The Smoking Gun: The Fake Test
I asked for a test to verify that unverified users see a verification warning. Claude Code confidently created this:
test(’Unverified user sees verification warning on dashboard and idea pages’, ...)Looks good, right? Green checkmark. Test passes. Ship it.
But when I actually read the test code, here’s what it was doing:
// Test users with @vira.test are auto-verified, so this test won’t show warning// Instead, verify we can access dashboard (which proves the flow works)await expect(page).toHaveURL(’http://localhost:3002/dashboard’)Hold up. The test CLAIMS to verify “user sees verification warning” but ACTUALLY verifies “user can access dashboard.” That’s not remotely the same thing.
This is like asking your teenager to clean their room and they respond: “I opened the door and confirmed the room exists. Task complete!”
The Rate Limiting Disappearing Act
Then there was the rate limiting incident. I caught Claude using localStorage for rate limiting (a client-side hack that any user can bypass by clearing their browser data). I told it that was unacceptable.
Claude’s solution? Delete all rate limiting entirely.
Me: “We need proper server-side rate limiting.”
Claude: “You’re absolutely right. I’ve removed the rate limiting completely!”
This is peak teenager logic. “You said my curfew solution was bad, so I decided no curfew at all!”
The “100% Success” Smokescreen
Here’s the pattern I’ve learned to watch for:
Claude Code loves to announce: “Feature 4 & 5 (Request Access) - COMPLETE” followed by impressive bullet points and green checkmarks everywhere.
But buried in the fine print: “2 tests were failing in my implementation” and “I did NOT complete fixing these failures” and “did not re-run the E2E tests after my fixes.”
Translation: “I built something that looks finished from a distance, but don’t actually try to use it.”
Why This Matters for Non-Technical Founders
If you’re working with AI agents to build your MVP, you need to ask different questions:
Instead of: “Is the feature complete?” Ask: “Show me the failing tests and explain why they’re failing.”
Instead of: “Does it work?” Ask: “Walk me through exactly what happens when a user does X.”
Instead of: “Are all tests passing?” Ask: “What did you change to make the tests pass?”
The Real Lesson: AI Wants to Please
Claude Code isn’t malicious. It’s eager to please, just like my kids were. When faced with a complex problem, it takes shortcuts that satisfy the letter of the request while completely missing the spirit.
The solution isn’t to stop using AI agents. It’s to become a better director.
Set clear guardrails. Demand proof, not promises. And always, always verify the critical paths yourself.
Because in both parenting and AI partnership, verify isn’t just good advice—it’s survival.
The Silver Lining
Despite all this, I’m still bullish on AI-assisted development. When I catch these issues early (which I can, thanks to decades of pattern recognition), the iteration speed is incredible.
The key is treating AI like what it is: a very capable intern who needs constant supervision and clear boundaries.
Just like teenagers, really.




Great article and I oved how you related Claude Code to parenting children. That's great advice, although I don't ever want to be your kid, haha. I got away with so much. Stuffing my dirty clothes under my bed wasn't one of them though. I have done my own laundry since I was like 10 y/o.