Hard Problems Are Still Hard
The obligatory AI post
It’s easy to see why AI tooling1 has well and truly captured people’s imagination when it comes to simple websites and web applications. Now the average person can ‘vibe code’ together what would previously have been a weekend project for a computer programmer. This is great news if you have a business idea that can be addressed at this complexity level. If so, then stop reading this post, and go and write it now! Seriously.
I have a strong feeling that there’s a class of hard problems in computer science, or indeed using computing to solve problems, that remain (a) completely, or relatively unexplored, and (b) are a bad fit for AI tooling due to novelty, user trust, or regulation.
Still, the delta between this and what once upon a time would have been called ‘web design’ and ‘web application development’ is not great. In fact, the delta from this to distributed systems engineering for most non-technology companies is not that great either.
The reality is that today those companies are probably making ‘build/buy’ decisions rather than ‘vibe code in-house/buy’ decisions, but even so, the point stands. However, soon—and sooner than I would have thought if you’d asked me two years ago—those companies are going to be turning to AI for the majority of their skilled work, both inside and outside their tech function.
Most of software engineering (especially in these types of companies, say a retailer or other large company whose main product isn’t software or services) is simply what I like to call ‘plumbing,’ that is, connection layers between different systems, different APIs, or even different layers within an application stack. Not all of these are created equal in terms of complexity of implementation for a given project, but generally you can see why AI agents could at least help substantially with the velocity of development, even if there’s no move to replace technical staff any time soon.
Now this isn’t ideal for me personally—I tend to say that my biggest skill is information synthesis, and AI tools can digest a far larger corpus of information, and infer things more succinctly and more quickly than I ever could. Bad news bears.
Hard Problems Have Novel Solutions
The quickest readers will have noticed, though, that there’s an absolute ocean of problems that don’t fit into the categories above. Off the top of my head, there’s the broad ‘anything that touches real money’ category, before we even talk about large-scale data processing and engineering. Sure, there is plenty of low-hanging fruit in data science and ETL tooling, but in such situations the difference between a lazy iterator, an eager iterator, and a parallel iterator (language depending, of course, and using a Clojure example from reflex) can be huge.
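I don’t have the reflex example to hand, but the same distinction can be sketched in Python (the names here are illustrative, not from any real codebase): an eager iterator pays the full computation cost up front, while a lazy one only computes what is actually consumed.

```python
from itertools import islice

calls = {"count": 0}  # track how many times the transformation actually runs

def transform(x):
    # Stand-in for an expensive per-record computation
    calls["count"] += 1
    return x * x

def eager(xs):
    # Eager: materialises every result before anything downstream runs
    return [transform(x) for x in xs]

def lazy(xs):
    # Lazy: a generator that computes each result only on demand
    return (transform(x) for x in xs)

calls["count"] = 0
first_eager = eager(range(1000))[:3]
eager_calls = calls["count"]  # 1000: the whole input was processed

calls["count"] = 0
first_lazy = list(islice(lazy(range(1000)), 3))
lazy_calls = calls["count"]  # 3: only the consumed items were processed
```

A parallel iterator (Clojure’s `pmap`, or a process pool in Python) is a third trade-off again: it buys throughput at the cost of coordination overhead and memory, which is exactly why picking the wrong one in a data pipeline can be so expensive.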
Sure, LLMs can identify this and write sensible code in many cases—but they need careful supervision. In the end, they’re likely to be (at best) a force multiplier rather than a complete game-changer. If you’ve worked with REPL-Driven Development (RDD) or the advanced compilers and formatters of languages like Rust or Haskell, you’re already used to advanced tools and hugely iterative workflows, and perhaps it is best to regard LLMs as falling mainly into this category of “spiking” tools.
Experienced engineers often “spike” ideas and “refactor” later, and in a sense, generating code via AI tooling is the purest, lowest-cost “spike” imaginable, as long as you can express your intent to the agent concisely and get useful output. Two big ifs.
Still, many of the examples just given are areas that an AI would not have been able to tackle two years ago, but might just have a shot at today. Even if the prompt spat out unoptimised code, with an experienced engineer checking the output, a large task (writing the code and optimising it) becomes a much smaller one (optimising it). As others have observed, this is a complete paradigm shift in the job description.2
Even so, there’s also a class of problems where it’s not so much that there needs to be a human in the loop for risk mitigation; it’s that it’s simply unclear whether a novel solution could be found without one. Don’t get me wrong, a human might not figure it out either—but here I’m thinking about long-lived research efforts such as the development of something like Malachite as a Rust rewrite of Tendermint (or even CometBFT as a continuation of the development of Tendermint).3
The ability to find novel approaches to a problem space is often key to such R&D efforts, and while there is a sense in which current AI tools are good at digesting context and offering adjacent solutions, it’s not clear that genuine novelty in implementing solutions is there yet.
Moreover, it’s definitely not clear that, for use-cases that are sensitive or complex, or that can have serious negative consequences, the average user is willing to take on risk as a result of AI mistakes. They certainly wouldn’t tolerate a developer making mistakes that affect them, so the same intolerance seems a reasonable expectation when considering either fully AI-generated code or even AI-assisted code (although, let’s face it, that’s just ‘code’ today). Even in non-regulated markets, the PR backlash from a project failure due to inadequate oversight is likely to be terminal.
Smarter people than me have written at length about project guardrails, governance, and humans in the loop, so I won’t regurgitate that here.
Hard Problems Are Business Edge
What all this means is that I have a strong feeling that there’s a class of hard problems in computer science, or indeed using computing to solve problems, that remain (a) completely, or relatively unexplored, and (b) are a bad fit for AI tooling due to novelty, user trust, or regulation. This is good news if you enjoy programming, and it’s great news if you want to start a business.
In The Millionaire Fastlane, MJ DeMarco offers five commandments to follow when starting a business, and two are particularly relevant here:4
The commandment of control—be in control of your business, pricing and operations (an ironic one considering this blog is hosted on Substack, but it’s not our, or my, core business)
The commandment of entry—the lower the barrier to entry to a market, the higher the competition will be, and the lower the margins. Try to find a market with a higher barrier to entry (or, implicitly, find a market where you have the ability to either create the market or create the barrier to entry in the market)5
The barrier to entry on the easy problems, with AI, is now effectively zero. It’s a double-edged sword for those chasing them too—by addressing them using AI tooling or commodity solutions, a would-be business owner is also violating the commandment of control, as they have no edge or USP.
Chase hard problems, and you will have an edge over everybody else who is using the powerful new tooling that’s available to chase the easy problems. After all, the tooling will help you lower your time to market and iterate on hard problems too. It just won’t help by the same proportion relative to project size, even if it helps by the same absolute amount.
Again—if you have a unique idea (bonus points for it obeying the commandments above) that is easy and you can bootstrap your product to market now by yourself where you couldn’t have in the past, then go and do it now!
If you have any other kind of stake in tech and/or have an entrepreneurial mindset, you need to be chasing only one type of business idea—things that are (a) hard for AI to do (currently), and (b) have a high barrier to entry (either because they’re hard, because of regulation, or because of required specialist knowledge, et cetera).
Of course, for all I know, the tech might leap forward in a few weeks with a new release and render this post fully obsolete, in which case I will probably give up on this programming lark and become a carpenter. Time will tell.
For this post, assume ‘AI’ is synonymous with ‘LLM’ or agentic workflows; I’m not talking about AGI.
And not a welcome one, in my opinion. The future is the most tedious part of the job, apparently.
I have a future post with examples of seriously impressive work co-produced with AI agents; however, an expert human was very firmly in the driving seat.
At least four of the five are probably essential for a successful business (user need, existence of barriers to entry, ability to scale, and control). I know enough very successful consultants and mini agencies to be at least slightly dubious about the commandment of time, though it’s certainly true that for a scale-up you need to break the link between your time and the business’s ability to make money.
Which many AI companies are currently doing in calling for regulation. To some extent, they’re acting to deter newcomers.

