3 min read

Worth a second look

A developer who'd written off AI coding tools watched one sweep up bugs the team had chased for months. The lesson wasn't about the tool. It was about second chances.

By James Dodd

The developer didn't say much. They pulled up the code review tool we'd set up the night before, ran it against the piece of work they'd been stuck on, and watched it list, in a single sweep, the problems the team had been chasing for months.

We hadn't hinted at what to look for. The prompt was generic: here is the project, here are the standards, review the change. The tool read the code the way a careful second pair of eyes would, and the issues that had been hiding in plain sight came out in one go.

The developer didn't look pleased, exactly. More like someone who had just remembered where they'd left their keys after searching the house for a week.

They had tried AI coding tools before and written them off. Fair enough. The version they'd used was an early one. A sidebar assistant wired into the editor, guessing at the next line, with no sense of the wider project and no ability to run anything. It produced poor code and not much of it. That was a couple of years ago, which in this particular corner of software is a long time.

Two years on, the tools are different in kind, not degree. They can read a whole project at once, follow a set of standards you hand them, and work through a task the way a junior developer might, taking small steps and checking their own output as they go. The industry has taken to calling this "agentic", which is a slightly unhelpful word. What it means in practice is: the tool doesn't just answer a question, it does a piece of work.

But the developer didn't know any of that. They knew what they'd tried a couple of years ago, and they'd been right about it at the time. Nobody had given them a reason to look again.

Show and tell is always the way in. A slide about capabilities would have bounced off this developer, and rightly. A review tool pointed at their own project, finding their own long-standing bugs, was harder to argue with.

We built from there, carefully. The next task we suggested was unglamorous on purpose: a messy merge conflict. (When two people change the same file and the version-control system can't work out how to combine their edits, it hands the mess back to a human to sort out.) The one after that was a rebase. (A rebase is when you take a branch of work you've been doing on the side and replay it on top of the main line of the project, which sounds tidy and usually isn't.) Both are the kind of chore developers put off on a Friday afternoon and regret on a Monday morning. Both are exactly the kind of work these tools are now good at.
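The merge-conflict chore in the parenthetical can be reproduced in miniature. A sketch, assuming git is installed; the throwaway repo, the file `config.txt`, and the branch name `feature` are all invented for illustration:

```shell
#!/bin/sh
# Toy repo in which two branches edit the same line, so the merge
# hands the conflict back to a human to sort out.
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo
cd repo
git config user.email "demo@example.com"
git config user.name "Demo"

echo "greeting: hello" > config.txt
git add config.txt
git commit -qm "base"
trunk=$(git symbolic-ref --short HEAD)   # default branch, whatever git calls it

git checkout -qb feature
echo "greeting: bonjour" > config.txt    # feature edits the line one way...
git commit -qam "feature edit"

git checkout -q "$trunk"
echo "greeting: hola" > config.txt       # ...and trunk edits it another
git commit -qam "trunk edit"

# Git can't combine the two edits, so it stops and marks the file.
git merge feature || true
cat config.txt
```

After the failed merge, `config.txt` holds the familiar `<<<<<<<` / `=======` / `>>>>>>>` markers around the two competing versions of the line. That marked-up file is the mess a developer used to untangle by hand on a Friday afternoon.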

A difficult demo first, to prove the thing is capable. Easy wins after, to build the habit. Not the other way round.

By the end of the engagement the developer was using the tools daily, carefully, and getting better at it. They were not a convert in the zealous sense. They were a practitioner who had changed their mind. That is the outcome we wanted. A convert tells you the tool is magic. A practitioner tells you where it helps, where it doesn't, and what they tried last week that didn't work.

The interesting part of this story isn't the tooling. It's the second look.

If you wrote AI off a couple of years ago, you were probably right then. You may still be right now. We are genuinely happy to tell you so, and save you the money. But the distance between what the tools could do then and what they can do now is wide enough that the question is worth asking a second time, with somebody in the room who isn't selling you a subscription.

That applies to the tools. It applies to us too.

Written by

James Dodd

Founder of moralai. Spent the last decade building software for people who don't describe themselves as technical.

Have a question this raised?

Talk to us, not a sales deck.

A short call, no prep needed. We'll level with you on whether there's anything worth doing here.