AI coding agents have a paradoxical adoption pattern
Every technology wave arrives with its own paradox. With AI coding agents, the puzzle is simple to state: the companies most eager to use them are often the ones that struggle to get value, while many of the companies that could benefit quickly are the least inclined to try.
I call this the Will-Fit inversion.

The Will-Fit inversion
Adoption is better understood through two forces: how willing your organisation is to adopt AI coding agents, and how well the tools fit your environment.
- The first is willingness: how much appetite and sponsorship your organisation has for the tools.
- The second is fit: how ready your environment is, meaning the shape of your codebase, the quality of your tests and CI, the clarity of ownership, and the constraints around data and security.
As these shift, outcomes change. You might expect willingness and fit to rise together, with the teams that invest in strong engineering foundations also being the keenest to put agents to work. In practice, the pattern that keeps showing up is different.

In many technically sophisticated organisations, leadership wants to adopt AI coding agents, but complex stacks mean the tools struggle to deliver beyond demos or greenfield projects.
In many simpler organisations (from a software standpoint), where agents might shine on day one, leaders don't feel enough pain to care, or they worry the tools will introduce risk.
The result is the inversion: high will, low fit on one side; high fit, low will on the other.
No surprise that the company running a Kafka cluster is more willing to use AI coding agents, yet it is also genuinely challenging for today's agents (as of August 2025) to do something valuable there without breaking seven other features.
No surprise, either, that the company with a bare WordPress site and some custom styles sees no value in VS Code Copilot, or even sees it as a threat, even though the tool could do really good work there.
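If it helps to make the map concrete, here is a minimal sketch of the four quadrants as code; the 0-10 scores and the threshold are illustrative assumptions, not measurements of anything.

```python
# Minimal sketch: place an organisation on the will/fit map.
# The 0-10 scale and the threshold of 5 are illustrative assumptions.

def quadrant(will: int, fit: int, threshold: int = 5) -> str:
    """Classify an organisation by willingness to adopt and environmental fit."""
    high_will, high_fit = will >= threshold, fit >= threshold
    if high_will and high_fit:
        return "high will, high fit: adopt and expand"
    if high_will:
        return "high will, low fit: cut scope, invest in tests and CI"
    if high_fit:
        return "high fit, low will: run governed experiments, make wins visible"
    return "low will, low fit: wait, or fix the foundations first"

print(quadrant(will=8, fit=3))  # e.g. the Kafka-cluster company
print(quadrant(will=2, fit=8))  # e.g. the bare-WordPress company
```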
Organisations are individuals, partially coordinated
Policy slides and strategy memos make adoption sound like a single decision. Real life is messier because organisations are, at best, individuals partially coordinated.
Even when the official stance is "not now," some developers will try agents on the edges of their work. That behaviour is not a problem to suppress; it's a source of signal.
If you channel it inside sensible security boundaries, those experiments become an inexpensive way to test where the tools genuinely help, and a standing safety net for the leadership question, "What if we're wrong?"
The reverse also happens. An organisation can declare itself an "early adopter," then watch teams bounce off the tools with frustration. When engineers say, "this doesn't work," they're rarely refuting the entire idea.
They're telling you the scope is wrong, the prerequisites are missing, or both. Treat that feedback as a scoping problem, not a referendum.
Adjust the work; don't abandon the map.
Treat AI coding agents like tools, not totems
AI coding agents are tools. They are very sharp knives. They are great for chopping vegetables; they are terrible at chopping down trees. The difference is not a slogan; it's a scoping discipline.
Would you give a chainsaw to a chef? Would you give a knife to a woodworker?
- Vegetables are bounded, testable and reversible tasks. Think code modifications that follow a clear pattern, small refactors inside a single well‑covered module, documentation synchronised with code, or tests extended or generated where the behaviour is already understood.
- Trees are large, ambiguous, cross‑cutting changes: exactly the kind of work where an agent's mistakes are expensive and hard to detect. You know that a refactor touching 77 files and 3,000+ lines of code does not end well; don't expect an AI coding agent (as of August 2025) to do it correctly either. We have all done that kind of refactor, and it ended one of two ways: a revert, or a very big headache. Claude Code may not feel the headache, but you'll revert the staged changes anyway.
If you've decided to bet on AI coding agents, start by serving your teams a plate of vegetables. If, within weeks, your desk fills with badly chopped trees, the problem is not the tool; it's the menu. Redirect.
Measuring without the binary trap
The fastest way to kill a good idea is to grade it with a yes/no question. "Does it work?" is a blunt instrument for a multi‑use tool.
A better evaluation reads like this: For which jobs, in our environment, under guardrails we trust, does this tool improve the work? That question nudges you to compare before and after, not theory and opinion.
In practical terms, look for deltas you can feel: the time it takes to turn a small ticket into a green PR; the minutes of reviewer effort on repetitive changes; the rate at which you roll back agent‑assisted merges; and the small quality signals (fewer off‑by‑one mistakes in tests, fewer forgotten docs updates) that accumulate into trust.
Start by being micro-ambitious, as Tim Minchin put it. When those numbers move for a specific class of task, keep the AI coding agent there and expand slowly. Where they don't, either shrink the scope or invest in the prerequisites that raise fit.
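To make those deltas concrete, here is a minimal sketch that compares agent-assisted and manual changes, assuming you can export per-PR records from your own tooling; the field names and sample values are hypothetical.

```python
# Minimal sketch: compare lead time and revert rate for agent-assisted vs manual PRs.
# The record shape (opened_at, merged_at, agent_assisted, reverted) is a hypothetical
# export format; substitute whatever your ticketing and CI tooling actually provides.
from datetime import datetime
from statistics import median

FMT = "%Y-%m-%dT%H:%M"

prs = [  # hypothetical sample data
    {"opened_at": "2025-08-01T09:00", "merged_at": "2025-08-01T15:00", "agent_assisted": True,  "reverted": False},
    {"opened_at": "2025-08-02T10:00", "merged_at": "2025-08-04T11:00", "agent_assisted": False, "reverted": False},
    {"opened_at": "2025-08-03T09:30", "merged_at": "2025-08-03T12:30", "agent_assisted": True,  "reverted": True},
]

def lead_time_hours(pr):
    opened = datetime.strptime(pr["opened_at"], FMT)
    merged = datetime.strptime(pr["merged_at"], FMT)
    return (merged - opened).total_seconds() / 3600

def summarise(records, assisted):
    subset = [p for p in records if p["agent_assisted"] is assisted]
    if not subset:
        return None
    return {
        "median_lead_time_h": round(median(lead_time_hours(p) for p in subset), 1),
        "revert_rate": sum(p["reverted"] for p in subset) / len(subset),
    }

print("agent-assisted:", summarise(prs, assisted=True))
print("manual:", summarise(prs, assisted=False))
```

If the assisted numbers look better for one class of task and worse for another, that split is the signal: keep the agent where it wins and shrink its scope where it doesn't.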
Raising fit beats arguing will
Many debates about adoption are really debates about taste. Those tend to stall. It's more productive to raise fit so that results, not rhetoric, carry the day. Fit has unglamorous components: stable CI, useful tests near the change, consistent linting and formatting, clear code ownership and review paths, dependency hygiene, and a place where the AI coding agent can run safely.
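You can get a rough, mechanical read on some of these components straight from a repository checkout. A minimal sketch, assuming a conventionally laid-out repo; the marker files checked and the equal weighting are illustrative assumptions, not a standard.

```python
# Minimal sketch: score a repository on a few "fit" signals by checking for
# common marker files. The paths and the equal weighting are illustrative assumptions.
from pathlib import Path

FIT_SIGNALS = {
    "ci_config":        [".github/workflows", ".gitlab-ci.yml", "Jenkinsfile"],
    "tests":            ["tests", "test", "spec"],
    "lint_or_format":   [".pre-commit-config.yaml", "ruff.toml", ".eslintrc.json"],
    "code_ownership":   ["CODEOWNERS", ".github/CODEOWNERS"],
    "dependency_locks": ["poetry.lock", "package-lock.json", "go.sum"],
}

def fit_report(repo_root: str) -> dict:
    root = Path(repo_root)
    found = {
        signal: any((root / candidate).exists() for candidate in candidates)
        for signal, candidates in FIT_SIGNALS.items()
    }
    found["fit_score"] = sum(found.values()) / len(FIT_SIGNALS)
    return found

print(fit_report("."))  # run from the root of the repository you want to assess
```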
None of this is novel; it's the same platform work that makes humans faster. The difference is that agents are unforgiving mirrors. They reveal brittleness you've been tolerating. Fixing that brittleness pays you twice: people move faster, and agents stop tripping. But that is a conversation for another day.
For technically ambitious organisations stuck in the "high will, low fit" quadrant, this is the route out. Keep the enthusiasm, cut the scope to vegetables, and put real budget into tests and CI.
As those foundations solidify, you'll find the tools begin to pay back predictably, and you can climb the autonomy ladder with a straight face. But that is a conversation for another day.
For simpler organisations stuck in "high fit, low will," the work is cultural, not technical. Allow governed experiments in safe repositories. Publish the results internally in plain language. Make small wins visible and boring.
When teams see a colleague land routine changes faster and with less tedium, willingness rises for free.
A note on risk and policy
Governance should enable learning, not prevent it. Data boundaries and audit trails matter, and they're achievable. The key is to write policy that speaks to how work is done (which repos are in scope, what kinds of changes are allowed at what level of autonomy, and how reviewers sign off) rather than abstract bans and permissions.
Good policy reduces anxiety and channels energy. Bad policy tries to replace judgment and ends up replacing progress. We have already seen some genuinely concerning incidents that arguably should keep AI coding agents, for now (as of August 2025), far away from critical code and infrastructure.
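For illustration, here is a minimal sketch of what a policy written in those terms could look like, expressed as data with a single check; the repository names, change types, and autonomy levels are hypothetical placeholders, not a recommended taxonomy.

```python
# Minimal sketch: an agent-use policy expressed in terms of how work is done.
# Repo names, change types, and autonomy levels are hypothetical placeholders.

AUTONOMY_LEVELS = ["suggest_only", "open_pr", "auto_merge"]  # lowest to highest

POLICY = {
    "docs-site":      {"max_autonomy": "open_pr",      "allowed_changes": {"docs", "tests", "small_refactor"}},
    "internal-tools": {"max_autonomy": "open_pr",      "allowed_changes": {"docs", "tests"}},
    "payments-core":  {"max_autonomy": "suggest_only", "allowed_changes": set()},  # critical code: suggestions only
}

def is_allowed(repo: str, change_type: str, autonomy: str) -> bool:
    """True if an agent may make this kind of change in this repo at this autonomy level."""
    rule = POLICY.get(repo)
    if rule is None:
        return False  # repos not listed are out of scope by default
    within_autonomy = AUTONOMY_LEVELS.index(autonomy) <= AUTONOMY_LEVELS.index(rule["max_autonomy"])
    return within_autonomy and change_type in rule["allowed_changes"]

print(is_allowed("docs-site", "docs", "open_pr"))                # True
print(is_allowed("payments-core", "small_refactor", "open_pr"))  # False
```

Reviewer sign-off rules can sit alongside the same table; the point is that the policy names concrete repositories, change types, and autonomy levels instead of issuing a blanket yes or no.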
What "keep it" looks like
An organisation "keeps" AI coding agents not when they pass a grand test but when, in a handful of everyday jobs, they make the work meaningfully better. You'll feel it in fewer repetitive keystrokes, quicker merges on small changes, cleaner documentation, less reluctance to touch dull parts of the codebase.
You'll hear it in the absence of drama (and that is so true: the absence of drama is the KPI of adoption). The tool becomes part of the routine. That is the quiet victory to aim for. From there, your dot moves across the map: fit improves because the foundations improve; willingness stays high because the payoff is real.
Closing the inversion
The Will-Fit inversion will not last forever. As tools mature and as teams learn to scope work well, the gap closes. But you don't have to wait. Map where you are today.
If you're eager and frustrated, narrow the tasks and invest in the plumbing.
If you're cautious but well‑positioned, let your early adopters explore inside guardrails and let their results do the convincing.
Above all, resist the urge to pronounce a verdict on "AI coding agents" in general. Treat them as what they are: a sharp new tool whose value depends on how, and where, you put it to work.
Organisations are individuals, partially coordinated. Coordinate a little better. Give people vegetables to chop. The trees can wait.