The Golden Age of the Micro-Consultancy is Here
A not unbiased prediction for the future of project work

In my mammoth series on AI,1 I discussed at length the economics, benefits, and potential improvements to ways of working and outcomes offered by AI tooling.
However, I also argued that in most cases organisations would not be nimble enough to take full—or possibly any—advantage of these opportunities, due to organisational inertia, gaps in team skill or exec capability, or what my former boss Craig would have called ‘hysterical raisins.’2
To quote Part 2 of that series,
The problem here isn’t that these gains exist—it’s that they are localised to teams that are able to capitalise on them. Most software teams are not bottlenecked by outputs, they’re bottlenecked by a lack of clarity on what to build, why they should build it, or even the permission to build it at all.
In Parts 2 and 3 of that series, I noted that although engineers inside large organisations were using the tooling and able to find gains, the majority of outsize impact appeared to be in small teams with agency, small organisations and individual domain experts using their own initiative.
Move Fast and Make Things
As I talked to people, I firmed up the idea that small, agile (small-a), motivated teams were the ones that seemed to benefit the most. I concluded that the “most effective users of AI tooling in my network either work at start-ups or scale-ups.”
All of this leads me to a prediction. We’re headed for a golden age of micro-consultancies.3
My experience in enterprise-scale companies leads me to think there are two main reasons that a consultancy is brought in:
An incumbent team lacks one of: headcount, skills, buy-in, trust, or agency to execute a given programme of work (sometimes all of the above).
Bringing in a consultancy makes an exec team look good to their stakeholders: shareholders, the board, et cetera.
I’ll note that in point one, generally headcount is not the key driver. Almost always it’s the organisational factors. Those pressures are likely to be more acute in the AI case, where smaller teams able to iterate and learn how to deploy tooling and best practice rapidly will outpace their less nimble competitors—both other consultancies and internal teams in larger organisations.
The cost of coordination within and between teams is often above-linear,4 hence the advantage that smaller teams sometimes demonstrate. Additionally, AI agents are, as the name suggests, extra agents in your organisational graph, which means they de facto add to your organisational complexity in much the same way as new members of staff.
The question here is whether adding AI agents is above-linear, or sub-linear. In the latter case, they probably introduce less cost than adding headcount. Which is good, but anybody who’s ever encountered the ‘mythical man-month’ knows that simply adding resource to a project doesn’t necessarily speed it up.5
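As a back-of-the-envelope illustration of why coordination cost can be above-linear, consider the classic communication-channels model associated with The Mythical Man-Month: in a fully connected team, every pair of collaborators, human or agent, is a potential channel, so channels grow quadratically with team size. A minimal sketch:

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a fully connected team of n collaborators."""
    return n * (n - 1) // 2

# Each new collaborator (person or agent) adds n new channels, not one:
print(channels(3), channels(4))    # 3 -> 6: the fourth member adds 3 channels
print(channels(10), channels(11))  # 45 -> 55: the eleventh member adds 10 channels
```

Real teams aren't fully connected, of course, so this is an upper bound, but it captures why adding agents is only cheap if the coordination they introduce is sub-linear.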
So you have a perfect storm where organisational factors will act as a sea-anchor for internal teams, introducing drag on their ability to up-skill and iterate with tooling. Meanwhile, other teams will find themselves with a powerful lever. Even if it’s a 20% or 30% force multiplier6 then that’s going to be a decent margin to build a business on top of.
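To make the margin argument concrete, here's a toy calculation with a multiplier in the 20–30% band mentioned above. All the other figures (effort, day rate) are illustrative assumptions, not numbers from any survey:

```python
# Hypothetical figures: how a productivity multiplier translates into margin
# on fixed-price project work priced at the un-tooled effort estimate.
baseline_days = 100   # assumed effort to deliver the project without AI tooling
multiplier = 1.25     # assumed 25% force multiplier from tooling
day_rate = 800        # assumed contractor day rate

actual_days = baseline_days / multiplier  # what the work actually takes: 80 days
revenue = baseline_days * day_rate        # priced at the un-tooled effort
cost = actual_days * day_rate
margin = (revenue - cost) / revenue
print(f"{margin:.0%}")  # prints "20%"
```

In other words, even a modest multiplier converts directly into margin when you can bid at the market's un-tooled effort estimates, which is exactly the wedge a small, practised team can exploit.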
For these teams, adding headcount is going to be aesthetically difficult to sell, so I would guess you'd be looking at teams of 3 product-minded engineers forming micro-consultancies to bid together on project work.
Large Orgs Should Take Advantage of This
For enterprise organisations that have the ability to trust (and procure from) organisations that small, it’s an easy bet to make—even at contractor’s rates, small teams like that are a small cost (relatively), and delivering a project that was deadlocked or deemed unrealistic is potentially a very large upside.7
As a result, I’d argue that larger organisations that want to benefit from AI gains, or at least explore them, should attempt to streamline their procurement process to accommodate smaller players with a strong track record, and manage many small external teams delivering software.8
Inside many large organisations, staff augmentation contracts are already common. Many of these became fully remote during COVID-19, and stayed that way. For organisations already looking at the pre-AI trend of ‘nearshore’ offshoring, it’s a much more palatable sell to meet in the middle between staff augmentation and offshoring.
These companies can instead assign project-based work to small teams that may even adopt the full governance lifecycle of the client company and join the company Slack. I think that’s a decent trade: a little internal governance overhead and risk in exchange for the potential of a better result.
Obviously I’m biased, because this is basically what Envoy Labs has been doing for half a decade in the blockchain space, but hey, it’s an optimistic take if you’re an engineer who is:
(a) entrepreneurial, and
(b) product-minded.
Find some colleagues you trust and get started.
I, for one, hope I’m right—and to that end, this is a direction we as a company are going to explore pivoting to in the coming months. If you want to talk, then get in touch via our website or LinkedIn.
1. Okay, LLMs, or “pretty smart guessing machines.”
2. Historical reasons, in case you hadn’t guessed.
3. By all means, check back in 3-5 years and see if I’m wrong about this.
4. Funnily enough, modelling the cost of coordination as a governance externality for technical systems is something I’m writing a paper on at the moment with others. Perhaps there is also an AI analogy here: the technical structure of an org assumes its governance structure will pick up the tab for any costs that using AI tools generates. My work in the permissionless blockchain space suggests that these sorts of costs normally aren’t accounted for by governance, which is where things like regulation or professional guilds (emergent regulation) tend to step in. We already see something like this in professional engineers trying to mitigate the potential pitfalls of AI by defining best practice.
5. In a group I’m in, somebody (name redacted for privacy) argued (quite sensibly, I think) that the cost could be radically sub-linear. They gave the example of a project manager prototyping an idea without having to discuss it with anybody else until a late enough stage that they could demonstrate it for iteration and discussion. This is really interesting, but it is (a) domain-dependent, (b) skill-dependent on the part of the PM, and (c) ultimately a question of the delta between that discussion phase and production. Did that prototype deliver more than clickable wireframes?
6. Honestly, I’d guess more like 5-10%, but I’m a cycling fan, so I appreciate the power of marginal gains. While I was copy-editing this, another industry survey came out estimating gains at 10% for those able to capitalise on the tooling, so probably not a bad guess.
7. Yes, I’m using a gambling analogy. Blame Nate Silver.
8. And managing these sorts of tensions has historically been the preserve of the long-suffering Staff Engineer or Principal Engineer. So no change there.
