"I didn't grow up dreaming of prompting." [Part 1]
Part 1: AI productivity increases miss the point - it's a creativity increase we need.
I’ve come to much the same conclusion as David Whitney in his devastating post, Existential Dread and the End of Programming, which I read while copy-editing this one. The AI tools are here, they’re Good Enough, programming as we know it is over, and, taking a wider view, we’re not institutionally or individually ready for what comes next.
Which sucks. But we’re programmers. We solve problems, so let’s try and put the dread aside and work the problem.
Even if AI does deliver the promised productivity gains, it wouldn’t matter, because we are optimising for the wrong thing.
This essay is an attempt at that. Maybe I’m being too optimistic. Maybe Hard Problems are Not Still Hard. Maybe they’re easy now. Who knows.
David says,
“While so much of what we do in software is remixing existing concepts, innovation isn’t going to come from an existing corpus of information, but business innovation might. You’ll still need those experts if you want to do something actually unique.”
Which I suspect might be true. That’s my thesis in Section 2, and my previous post.
In this series I’m going to argue three main things:
AI is going to change the world of work, but probably won’t take your job. In fact, many organisations might see little benefit, for organisational and TCO reasons.
Identifying productivity as the key problem we face is a category error—it’s creativity that’s the problem, and AI doesn’t solve this.
You can and should reject the political programme behind AI in its current form, doubly so in your creative pursuits and hobbies.
This article is split into three posts (coming this week—I will update these links as they go live):
Part 1 is about how AI addresses the wrong thing: a productivity deficit as opposed to a creativity deficit.
Part 2 is about the political programme behind AI, and why it isn’t great—as well as why we probably won’t see any huge changes because of inertia, particularly in large organisations.
Part 3 is about how you, as a technologist, can use these tools, why that’s still a hard problem, and what is left. I try to end on a hopeful note.
Let’s go.
1. AI Is Boring, But That Won’t Save You
The more I work with AI1 tools, the more I find myself becoming apathetic. I’ve managed to avoid them entirely for my writing, and perhaps this is why I find my interest in it so reinvigorated of late. However, programming with too much AI in the loop (I’ll tolerate ‘turbo-autocomplete’, having spent years getting used to Rust and Haskell compiler hints) is quite joyless.
Before I continue, if you’re already typing “luddite” in the comments2 then you might want to skip to Section 5 in Part 3, where I argue for what a tolerable workflow might look like. It’s not dissimilar to David’s, outlined in his post.
In a future post (not part of this series), I’ll make the optimistic argument that being able to iterate quickly and make outsize gains may usher in a new era of small, high-impact software shops. It’s certainly the direction I see my consultancy, Envoy Labs, going.
Still, like David, I find the process exhausting, perhaps more so. I will also note that, for me, while such a workflow is fine for work, it is still (a) kind of joyless, and (b) not sustainable in the long run unless we train engineers the old-fashioned way, probably with minimal or no AI tooling. It’s not something I have much interest in adopting for my own personal projects, even if it may present a significant business opportunity (one that we will be seeking to capitalize upon).
I began programming as a creative pursuit, an extension of writing that allowed me to build abstract, yet concrete things out of words and symbols. If cyberspace is real, then programming is a form of magic, conjuring real things from the ether by giving them their real names. The writer Alan Moore argues that prose is already a form of magic—programming then is an order of magnitude more powerful.
Don’t get me wrong, AI has some uses—off the top of my head, making adjacent similar things, like lesson plans if you’re a teacher, or speeding up making low-effort semi-creative work, such as shitposts or memes for your friends.
In the programming world, I’m happy to get an agent to spit out additional tests for code that I’ve already worked out the hard parts of. My friend Paul—who is a very good engineer—has a really good use case. He uses them to synthesise information, so he can ask questions as he thinks about design.
That’s a pretty good, time-saving use. It’s not world changing, however.
The key point is that automating low-hanging drudgery, much of which we simply would not have bothered doing without AI assistance, isn’t going to juice the valuations of the AI companies.
Or, as another writer described their usage of AI,
“On the other hand, if I had needed to pay someone proper money to do it, I probably would not have done it at all.”
The investors in these AI companies, as well as the incumbent tech giants rushing to put forward their own AI solutions, all have a colossal vested interest in arguing that AI can 10x, 100x productivity, or do jobs that otherwise wouldn’t have been done. Until recently, their argument was still pretty thin.
However, something changed in the last generation of models, and that’s that they got reasonably good at spitting out decent, usable code. They went from reliably knocking out short scripts to reliably knocking out plumbing code for backend applications. Cue another round of hype over productivity gains.
The thing is, even if AI does deliver those gains (it doesn’t in most cases—organisations need to be set up to realise them, see Section 4), it wouldn’t matter, because we are optimising for the wrong thing. Again, for the counter-argument, scroll down to A Tolerable Engineering Workflow (Section 5).
Moreover, there’s the question of whether you can prompt your way to something novel. The weak form of Sapir-Whorf suggests the answer is: possibly not. Still, this might not matter. Most creative output is a remix or repurposing of something that already exists, after all.
Thus we arrive at our subject for today: why creativity isn’t the same as productivity, and why the political programme of productivity and speed at all costs is not only toxic, but misses the mark of what we actually need.
2. Creativity versus Productivity
On some level, AI productivity increases are just the story of doing more, with less. It’s sort of like austerity, except applied to putting in effort and using imagination. But, as the old saying goes,3 “if less is more, just think of how much more ‘more’ will be!” Creativity is the answer to the scarcity mindset implied by AI maximalism.
AI is the presumptive owner of the narrative of progress, obsessed with the aesthetic of the future. This isn’t new; the crypto scene was obsessed with 80s retrofuturism and the cyberpunk aesthetic. The internet as a jurisdiction and peer-to-peer networking are both kind of edgy and cool. Moreover, if the aesthetic, or the performance of crypto, actually matched the reality, then perhaps it would have formed part of a creative answer to our stalled present.
We’re not looking at a productivity crisis—we’re looking at a creativity crisis.
Agency is one of the things we lack. Alienation is one of the things we feel. It may be dorky to say it, but peer-to-peer networking implicitly has a promise of both agency and community. Ask anybody that file-shared using BitTorrent in the old days and they’ll tell you—it wasn’t just about getting that new record first, it was also about something less tangible, and more vibes-based.4
Of course the reality of crypto was mostly just gambling, but I can tell you, when a network genesis happened, you felt something irrational, something—dare I say it—emotional, or hopeful, about the peering process starting to spit out blocks. Even if there was no reality to back the performance, well, we were still engaging in a shared performance. Maybe, after all, what we were doing in crypto was at least part performance art.5
As a result, there’s a part of me that has a tiny bit of sympathy for the AI maxis who genuinely believe the hype.6 It’s quite a natural human feeling to look on something that has the appearance of novelty with optimism.
The main selling-point of AI (and of blockchain before it)7 is the idea that we can boost productivity, and create a new margin of growth that will unstick semi-stalled developed economies. This is the inevitable logic of ageing populations that have to support more retired people with fewer working-age people, but it’s also an often-unchallenged attitude in the political and business class as much as it is a function of demographics.
The more I think about it, however, the less I think we’re looking at a productivity crisis—we’re looking at a creativity crisis. Perhaps the need for ‘more creativity’ is simply perspective, and I’m dismissing the ‘wrong’ creativity.
After all, much as it might be hugely overstated, the creation, marketing and proliferation of AI tooling represents a huge creative endeavour. It’s not just eking out additional efficiency from existing paradigms. Even if the AI programme fails, it has, as a doctrine, at least been a novel intellectual and narrative one.
Still, that’s just one novel doctrine or movement. I think we can do better. Much better. Moreover, I think we can and should demand a plurality of novel movements, and seek to bring them into being.
What if we more effectively oriented around creativity such that we could imagine the new, both in our day-to-day work and at a higher level, in our institutions, businesses, communities and politics? Sort of a radical optimism made doctrine, if you like. On the face of it, this isn’t that different to e/acc—it’s just less pedantic, as it doesn’t make the arrogant assumption that capitalist technology is the only game in town for the transformation of our world.
Even if we confined ourselves to the narrow sphere of economics, I could illustrate the difference thus: AI means that barriers to entry are lower, but knowledge and context are retained by the AI model owner as they iterate on their model. They operate, essentially, as a feudal landlord.
In a sense, they’re the end state of what McKenzie Wark called ‘Vector Capitalism.’ They own the platform and the means to connect the dots in the economy. More straightforwardly, the landlord analogy is one that’s been made by former Greek finance minister and economist Yanis Varoufakis, among others. As a result, the tendency is for incumbents to benefit more, even if there is the illusion of greater agency for ‘creative destruction’ and challenging incumbents with new ideas.
What if instead of this, in the most extreme example, we ignored the AI tooling and just tried hard to come up with new ideas for businesses? What if we aggressively subsidized that principle as a government, accepting the huge failure rate of new businesses? What if we essentially took the VC model for risk and made it a part of the social contract?
What if we said, “we, society, will take a risk on your novel idea, citizen, if you at least try your hardest to make it a reality.” That’s opportunity and agency in action.8
Then, instead of the performance of disruption, we might get the reality of disruption. Then, instead of the performance of agency, we might get the reality of agency. Then, instead of the performance of productivity, we might get the reality of creation.
Only the novel can solve the problems of our societies.
Thus AI—in its current form, at least—is a bust.
AI claims ownership of the aesthetic of progress, but that’s all it is; it is not a reality. It can only be an endless repetition of the present, with each repetition further accumulating capital in the hands of the feudal AI landlords.
Only creativity can bring forth the novel, and this is not something that AI tooling in its current form can offer. It’s not clear that it is something AI tooling will ever be able to offer.
Even if it is, I suspect the reality, viewed through the lens of tech, will be something like what David Whitney describes,
“These engineers will need to have taste, and they’ll probably be involved in early hand writing of some categories of code to establish patterns for the machines to follow in the first instance, but likely will accelerate to the point where traditional workflows of pull-requests and reviews don’t make sense when faced with the pace change can be made.”
These engineers will need to be experts, and fully understand their domain (see Section 5); whence will they come unless we (or a business) subsidise their training and apprenticeship? In fact, David’s discussion of how a novel software project might be run sounds very much like the research projects we’ve been working on in the Future of Money group.
Most of the work thus far has been exploratory code to understand the domain. Many younger contributors have used a lot of AI tooling. I’ve mostly worked by hand, though I expect my usage to flip in the next project stage. This seems a natural progression if we are to benefit from the tooling, but the key thing is that the early, exploratory, novel phase of the project had to be paid for first. It’s not actually as fast or cheap as it looks.9
Thomas Kuhn’s argument on the structure of scientific revolutions is often referenced by tech types, usually with the assumption that paradigm shifts must necessarily happen in the large, due to technology, but I don’t believe that.10 I believe that creativity is the spark for the paradigm shift, and this can happen in the small and the large.
Still, it is incumbent on us as societies, if we buy the AI argument (progress)11 to try and make it manifest by actually catalysing the creativity that brings forth the novel. By definition, even if we limit ourselves to start-up enterprise and not novel ideas in art, culture,12 politics, et cetera, we cannot know ahead of time what these new businesses and ideas will look like. We must simply write the cheque and give people the space to create.
In case I haven’t been clear enough—only the novel can solve the problems of our societies. Thus AI—in its current form, at least—is a bust.
Furthermore, in case I haven’t been clear enough on another point—the novel doesn’t have to be technological. It can be cultural, artistic, or political. Subsidize a new generation of polytechnics and art schools—we do not know from where the ideas will come.
Much has been written on the AI-inevitability narrative of late—there are great posts by Cory Doctorow13 and others that go into the subject in depth. However, the best commentary I’ve seen so far is a feature-length video essay by the musician Adam Neely.
We’ll talk about that in Part 2.
Acknowledgements: Thanks to the many people who gave feedback on earlier drafts of this post series, including but not limited to: Jon Stone, Craig McMillan, and Rob Bowley. Cheers for the conversations while I worked out shower thoughts to Andy Gray, Geoff Goodell, James Morgan, David Scott, David Alesch and Jack Gray. Thanks also to everyone in my network whom I have bugged for opinions on AI tooling, workflows, and best practice in their places of employment. I hope I’ve done your thoughts and feedback justice.
An aside: It’s not “AI,” it’s an LLM. However, calling it a “pretty good guessing machine” isn’t as good a marketing gimmick as taking on all the cultural baggage (positive and negative) of the “AI” moniker. The cultural and sci-fi baggage adds to the hype, but the question is whether we are seeing a revolution or an evolution of tooling. Given the advances in machine learning over the last decade, it may well simply be the latter, with an added side of greater availability. In this post, I’m talking exclusively about LLMs, because let’s face it, if AGI comes along then we’ll be too busy getting turned into paperclips to worry about the subtleties of economic policy and software craftspersonship.
A term, by the way, that is probably correct to apply to me, in its original form, without the weight of subsequent discourse. Despite working in frontier tech, I am both a political radical and a critic of the things I work in.
Yes, it’s a quote from Frasier. Bet you didn’t expect that.
It was mostly about getting that record for free, though. Just like crypto was mostly about having access to theoretically uncapped upside in exchange for the risk of ‘going to zero’. For a book-length discussion about file-sharing, the MP3 and the music business, I recommend How Music Got Free.
The counter-argument to this semi-tongue-in-cheek statement is Scott McCloud’s, in the third section. I think.
I don’t have any for the grifters that are using it for political and personal ends, of course.
If you’re not a cypherpunk, anyway.
This post has taken a while to write, and during the drafting process I’ve had my ear out for people with ideas to test this thesis. The best I’ve heard was while door-knocking in Denton during the recent by-election. One of the chaps I was paired with had been researching the recycling of large-scale (think house- and car-sized) lithium batteries. However, the VC-backed company sponsoring the research ran out of cash, ending the project. Obviously I’m in no position to judge commercial viability, but it sounded like it could be a candidate for the sort of thing I’m talking about here. A new, research-based business that also tackles an externality? Love it.
And that’s before we even consider project lifecycle and maintenance.
Just today, as I prepared to post this, I read another one, this time on AI. There are points I agree with and points I disagree with—though it should be noted that Kuhn, while influential, is not the final word on the theory of scientific history. For example, it’s worth looking up Imre Lakatos and the idea of progressive or degenerative research programmes.
Though, as I’m sure you have noted, I would always ask the question, “progress towards what?” Progress for progress’ sake, as we shall see in the next sections, can lead to Bad Things.
There’s a great argument that the intersection of free higher education and the existence of art schools in the UK allowed a generation of creativity to happen and push culture forward without the fear of failure. Young people who wouldn’t otherwise have gone to university were not intimidated by art school, and this framework allowed them the space to innovate. Given how many influential punk bands came out of those institutions, I think it’s a very compelling argument. And yes, I nicked it from Mark Fisher.
I think the idea of the ‘reverse Centaur’ is particularly relevant when we’re talking about the implied political programme of AI.


