"I didn't grow up dreaming of prompting." [Part 2]
Part 2: The political and economic programme behind AI sucks.
In Part 1, we talked about how chasing productivity as the cure for all ills was a category error. In Part 3, we will talk about the potential positives of AI tooling, as well as what agency you, as a citizen and user, can exercise in our precarious Now.
In the next two sections, we’ll discuss why the current political programme behind AI is feudalism at best and fascism at worst, and why, for most organisations, the potential gains are unlikely to be realised.
A better future is possible—but we have to call out that the way this programme is presently structured sucks.
Let’s start by discussing the best thing I’ve seen on the subject—a video essay by the musician Adam Neely (embedded below).
3. AI Accelerationism and Fascism
In the essay,1 Neely starts off by simply addressing the ethics and creative dissonance involved in AI’s incursion into art, but then goes much deeper. He connects the intellectual lineage of the Silicon Valley VCs and e/acc enthusiasts2 to Italian Futurism, and specifically to the poet and writer Filippo Tommaso Marinetti.
Neely points out that, in Marinetti’s case, the logical conclusion of such a reductionist, techno-optimist manifesto of ‘speed at all costs’ was Italian fascism—Marinetti co-authored the Fascist Manifesto. The fetishisation of speed and progress implicitly results in a suppression of critique, something we see today in how AI scepticism is dismissed.
Bringing things back to the present, something I had never noticed until Neely pointed it out was that Marc “conehead” Andreessen has a nod to Italian Futurism on his wall, and repeatedly paraphrases Marinetti in his own writing. This should be concerning, given that these are incredibly powerful, wealthy people who switched to supporting Donald Trump when potential AI regulation was on the cards at the tail end of the Biden administration.3
As Neely points out, for every argument that holds up to some scrutiny4 (democratisation, say, or removing shame), the end result is the same. Though these arguments are applied to AI art, twist the perspective just a little and you see the same mental contortions applied to most areas of creative work in order to justify AI maximalism. If you’re on LinkedIn, look at your feed and you’ll see it.
These AI capitalists aren’t looking to solve the root cause—lowering barriers to technical or musical education, making young people more resilient to failure, or helping them feel less ashamed of emotional, cultural expression. They’re not looking at structural access to skills or opportunity, such as universities, apprenticeships, or art schools.
No, they want to sell you a product. A product trained on everybody else’s intellectual property, and walled off inside their platform so they can rent-seek in perpetuity. Great.
Policy, programming, music, critical thinking, art, essay writing, academic work, learning, being a creatively fulfilled human—doing stuff is hard and requires collaboration, interaction, struggle and self-reflection. It’s much easier to just solve problems by lowering the bar.
Then we can repeat the present forever. Which suits these guys, because they’re control freaks. They control the present, and for all their bluster about progress, they’d rather no future happen than a future they can’t control.
Or maybe it’s simpler than that. A lot of the companies wrapped up in this are seeking to maintain stock valuations as ‘growth’ companies, with valuations far in excess of what their actual earnings would justify. In such a context, their incentive with AI, as with blockchain before it, is to claim that not only are they on the verge of changing the world, but that they’re the only ones who can do it.5
Some of the most powerful people and institutions on the planet want you to believe that the changes they promise are a foregone conclusion, that they are inevitable. This is not the case—what they fear most is that you, we, will realise that in fact we have agency and a choice. No surprise that those who stand to gain most are those who say these changes are part of an inevitable, ahistorical process of ‘progress.’6
It’s also no surprise that these same people are generally the narcissists and psychopaths interviewed and profiled by Nate Silver in his book On the Edge. I discuss these behaviours at length in my post reviewing his book.
You’ll have read the endless blog posts and think-pieces about how AI is already taking entry-level positions, even as the FT suggests that the underlying cause is simply normal employment cycles, or the effect of interest rate changes on hiring:
“Linking job losses to increased AI usage rather than other negative factors like weak demand or excessive hiring in the past conveys a more positive message to investors,” points out Ben May, director of global macro research at Oxford Economics.
An interesting data point is a National Bureau of Economic Research working paper surveying 6,000 executives. In it, the respondents predicted that AI use would affect employment by either cutting headcount by up to 0.7% or increasing it by up to 0.5%. Though the data will obviously mostly reflect older models, the respondents report a marginal impact, or no impact at all, on productivity.
This isn’t real. The AI isn’t coming for your job—at least not tomorrow. You’re being coerced.
Will your job change? Yes, it looks like it. Will it be only the boring parts that are left? Yes, it looks like it. Will it pay less? Yes, it looks like it.
All of those are bad things, but it’s not yet a doomsday scenario. For one, as I describe in the next section, put the middle class out of work and maybe the whole gameshow ends anyway; for another, we already have a political crisis caused by stagnant real wages, let alone falling ones. This is the same political and economic problem we were already facing, just accelerated.
Moreover, the AI can’t take your hobbies and your skills. Some things are worth doing for the sake of it. I talk more about that in the final section, Section 7.
Neely concludes by advocating for the virtues of Service, Patience, Craft and Beauty in creative pursuits, and that’s not a bad North Star as we navigate an increasingly uncertain world.
4. These AI Gains Aren’t Possible Anyway
Let’s take a moment to consider the AI productivity gains again.
Even in light of David’s post, what I’ve seen the tools do, and the workflow outlined in Section 5, I’m still not sure I buy it.
Why? Two big reasons. First, organisations can’t even take advantage of 20-year-old technology, let alone frontier tech; and second, our economies would collapse if they could.
Let’s talk economics first.
At the beginning of this hype cycle, there was a much-paraphrased report by McKinsey on AI productivity gains that today, honestly, might pale in comparison to some of the frothiest predictions. Quoting Lee Vinsel’s piece on ‘Criti-Hype,’7 which you should read, we see a typical labour market prediction:
“For example, in its 2017 report, the AI Now Institute, which is associated with New York University, paraphrased another report from the consulting firm McKinsey claiming that 60 percent of occupations would have 1/3 of their activities automated.”
The reasoning is very simple here.
If white-collar workers were laid off mid-career, (a) in the numbers required for these AI companies to pay back their investments, (b) with enough revenue flowing back to the AI companies to make it worth the trouble, and (c) with no other jobs for those workers to go to, our societies would collapse.
Let’s break this down.
First, in order to pay back the scale of investment, the level of adoption has to be on the order described, or not far off.
Second, the cost model of AI tools needs to change in order to cover their full cost (at the moment, many companies are making a loss). I don’t imagine they will price in their externalities—what capitalists ever have?—but this increase in cost will also mean that the marginal saving between the employee and the AI will shrink. It’ll likely still be a large enough gap, however.
Third, these workers become unemployed. Potentially for good. They go from productive members of society with social capital and a good standard of living to not being able to pay their mortgages.8 Let’s put aside the social unrest that would cause (I cover that below, and yes, it involves guillotines), and focus on what happens within an economy. What do you suppose happens to sales of IKEA furniture when 19.8% of young professionals cease to exist?9
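For the curious, a quick sketch of that mental maths, taking the McKinsey figures quoted above at face value and naively assuming that the automated share of activities translates directly into lost headcount:

$$
0.60 \times 0.33 = 0.198 \approx 19.8\%
$$

That is, 60 percent of occupations each losing roughly a third of their activities. The mapping from automated activities to jobs is a big assumption, of course, but it gives a sense of scale.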
A version of this demand-side liquidity crunch is what cryptoeconomies have speedrun in bear markets over the last few years. First, products have to compete for a reduced pool of spending power (liquidity); then some go bust and others consolidate. However, the pool of available money continues to decline, and eventually the chain’s foundation typically has to step in and either boost demand or subsidise the projects and validator set required to keep the chain alive.
Real-world economies are no different, and this kind of sustained, demand-side shock is already an issue in developed economies. In many, real wage growth has been stagnant for thirty years, and credit has taken its place. Obtaining credit relies on a job, on security, and on predictability. Offering credit when it cannot be repaid is what led to the Subprime Mortgage Crisis and the 2008 Great Financial Crash.
See the problem?
First, demand can’t be stimulated by simply printing money or creating more credit, because people will be unable to pay it back. Second, a Universal Basic Income (a good idea in principle) can’t plug the gap, because tax receipts for the state will decline: fewer people working on the one hand, and less sales tax from businesses on the other. Simply—how can states afford to pay for it?10
Our populations will continue to age, pushing the cost of social security for old-age pensioners onto those who continue to work. Those workers will conclude, as many already do in countries like the UK, that work does not pay.
However, in a situation where many will be unemployed and in worse material circumstances, people will continue to work, but with increasing resentment. Look, this isn’t that different to our existing paradigm—the trends, after all, are the same—it is just far more stark.
This is a negative cycle.
A social upheaval I think about a lot wasn’t just economic. After World War One, soldiers returned home to find economic chaos and, in many cases—especially in the defeated powers—punitive sanctions in place. They found themselves at peace but unable to survive.11 I will always remember the words of my lecturer describing the result: “every capital East of Paris fell.”
That’s what happens when people really think the social contract has been broken.
When people are laid off, while those who do work do so in increasing misery,12 all the while seeing billionaires and platform capitalists doing just fine, planning colonies on Mars to keep themselves safe from rogue AI, what do you suppose their conclusion might be?
Or, as Jack Clark from Anthropic says in Nate Silver’s On the Edge,
“People don’t take guillotines seriously. But historically, when a tiny group gains a huge amount of power and makes life-altering decisions for a vast number of people, the minority gets actually, for real, killed.”
Your Mars colony is only a decent escape plan if you don’t get caught and guillotined on the launchpad, Elon.
All of this is before we consider which organisations are actually in a position to capitalise on any of these gains.
As we saw in the last section, so-called “AI job losses” may just be normal cyclical economic behaviour from firms, with a bit of PR spin on top.
No doubt entry-level positions will take a hit as models improve, but once the macro position improves, the impact may be less than what we’re seeing right now. I graduated into the fallout of the Great Financial Crash in 2008, and I remember how lean the first few years of my career were: I took any work I could find.13
I’ve spent a long time working and consulting in both start-ups and enterprise companies, and I can tell you that for every start-up (or scale up, for that matter) able to pick up the latest and greatest tooling and run with it, there’s an enterprise organisation that would fail to realise substantial gains whatever happened.
This isn’t something I’ve made up. When I was speaking to former colleagues for the next section, I found many people inside large organisations using new tooling, with frontier models, for their day-to-day work. Some said the use of these tools was encouraged; some said it was a soft mandate. Others said it wasn’t required, but that the best engineers were getting incredible work done with the tools.
The problem here isn’t that these gains exist—it’s that they are localised to teams that are able to capitalise on them. Most software teams are not bottlenecked by output; they’re bottlenecked by a lack of clarity on what to build, why they should build it, or even whether they have permission to build it at all. None of these things is strictly an engineering problem, and solving this complicated interrelation of impulses is part of the reason that agile software development originally came about.
Still, what we’ve seen in large organisations is that agile adoption—in the sense of “work iteratively and systematise your successes”—has been slow, difficult, and has often failed. I don’t think these organisations are likely to see any huge benefits soon, simply because a company’s average rate of productivity is the sum total of all its teams, well managed or not, and of its ability to deliver the right thing, rather than just to deliver, full stop.
Not only will the average be dragged down, but delivering the right thing, so far as I can tell, is not something AI can necessarily help with—other than perhaps by synthesising information after you’ve talked to your users, or by triaging help tickets and identifying themes in the problems with your product.14 My former boss Rob has written several posts on this subject, and I think he’s on the money when he says, “typing was never the bottleneck.”
Moreover, the maintenance of these new projects still works the same as for any software project—it’s typically expensive, and dominated by the cost of change and the number of staff who understand it. TCO (Total Cost of Ownership) has to be factored in when using AI tooling, and although the tooling can digest and answer questions about a corpus of code, ultimately a human needs to grok it before it can be reasoned about, or safely changed. That part hasn’t changed, and the cost is still high.
The current pricing of AI models obviously doesn’t take into account externalities—environmental, social, et cetera—but it also doesn’t reflect the full cost of production. As soon as these models dominate, the vendors will have to increase their prices, and at that point you’re in a classic vendor lock-in conundrum as a business. In that situation, the TCO of using AI tools might be only marginally less than doing things the old-fashioned way. It might even be higher—that remains to be seen.
Start-ups, and many scale-ups, often find themselves working in a pseudo-agile way by default, simply out of necessity (most would not describe it that way). Put simply, they only eat what they kill, so they’re often applying the principle of trying ideas and abandoning them fast, trying to find product-market fit and make the company work. This attitude makes me think that only small companies are likely to reap the real benefits of AI tooling—and then only with experienced staff who can execute well within a framework of building the right things.
The apparently most effective users of AI tooling in my network work at either start-ups or scale-ups, which certainly feeds this hunch. Every post like this lengthy one on how the Software Development Lifecycle is Dead strengthens that suspicion. It’s a reasonable post—if you work in an enterprise company—but much of it was never relevant in the kind of small, nimble companies I’m thinking of anyway. However, it’s a small sample set, and maybe I’m wrong.
It’s also possible I’m wrong in thinking that improvements in LLMs will hit a plateau and the curve will flatten, with each new generation showing only marginal gains. We will see.
In the final part of this series, we’ll talk about how you as an engineer (or creative) can use the tools, manage the risk of deskilling, and what is left for creative work.
Acknowledgements: Thanks to the many people that gave feedback on earlier drafts of this series, including but not limited to: Jon Stone, Craig McMillan, and Rob Bowley. Cheers to Andy Gray, Geoff Goodell, James Morgan, David Scott, David Alesch and Jack Gray for the conversations while I worked out my shower thoughts. Thanks also to everyone in my network that I have bugged about AI tooling, workflows, and best practice in their places of employment, and for their opinions. I hope I’ve done your thoughts and feedback justice.
1. From which this essay takes its title.
2. Many of these are the same people. “Follow the money” remains perennially good advice for working out what is really going on.
3. Probably not the only reason, but certainly one that is referenced by Andreessen himself.
4. Specifically in terms of my call-to-arms on creativity in the previous section.
5. A version of this same argument is why Palantir are able to hoover up government contracts in the UK at the moment. What they do isn’t particularly unique; it’s all smoke and mirrors. If they have anything unique, it’s the amount of capital they have on tap, not the tech.
6. Arguably, capital accumulation, as much as any ‘progress’, is the process that is actually happening.
7. I hope this piece is not ‘Criti-hype’ but just ‘criticism’—that is my goal, in any case.
8. A very large problem in and of itself. Our economies still assume people will pay their mortgages—this is one of many second-order effects of hollowing out the middle or skilled worker class.
9. Some mental maths, taking the McKinsey numbers at face value.
10. I pose this question as a firm opponent of austerity, coming from a left-wing, Keynesian world-view, so believe me when I say I wish there was an easy answer. In so far as I have one, it is to privilege the creative economy just as much as the manufacturing and service economies, as I described in an earlier section—with the implicit assumption that the creative economy seeds productivity and innovation into the manufacturing and service economies.
11. I’m hand-waving a bit, but forgive me: this is more about poetry than history.
12. Again, “I didn’t dream of prompting.”
13. True fact: I was at one point an advertising copywriter for a makeup brand aimed at teenage girls, and for a scented disinfectant company.
14. That must be a huge corpus of problems in the case of dreadful enterprise software like MyHR.


