Like a hamster in a maze
I’ve had a bit more time to work with various AI coding agents, and I continue to experience that whiplash between
“I can’t believe it did that on the first try!”
“I can’t believe it took 10 tries and still couldn’t figure it out.”
Then I remembered something: my kids used to enjoy a show on YouTube in which a hamster navigated an elaborate, whimsically themed maze. The clever little rodent was quite adept at overcoming all sorts of obstacles, because it was always perfectly clear what the next obstacle actually was. Put the same hamster into a more open space, and it would quickly get lost.
That’s how it goes with these AI tools. With too much ambiguity, they quickly go down unproductive paths; when the path forward is narrow, they perform much better. I see this most obviously with debugging. If I just tell Claude Code that I’m getting an error or some unexpected behaviour, the fault could lie in lots of different places, and more often than not it digs into the wrong place entirely, spinning in circles and getting nowhere. Where it performs much, much better is introducing new features that somewhat resemble existing ones: “Hey, take a look at how I designed the new user form; can you please do the same for the new company form?”
In the end, it’s much easier to keep the AI on track if the task has a narrow rather than an open structure. Putting some effort into shaping the task that way can therefore pay big dividends.
