Betting Against Mindless AI is a No-Brainer
Even if I lose, I win.
January 05, 2026
“Embark on the train or be left behind,” say the AI boosters. “It’s the future of software development,” say the vibe coders. “Just one more prompt,” says the gambler.
I’ve previously written about how I loathe the use of AI for communication, and about how embarrassing I find the whole situation to be. I am a fan of neither the technology nor the zeitgeist. I think it’s doomed and I have not seen compelling evidence to the contrary.
That said, I want to examine some of the claims made by those who believe that this tech is the future, particularly the future of software development.
First, let’s step back.
What is the AI hype cycle promising? Artificial general intelligence. A cybernetic mind, ostensibly capable of improving itself in a positive feedback loop of learning. Something like a human mind, except supercharged far beyond what anyone can imagine. It’s supposed to be coming soon, but such things are unpredictable.
Assuming we get AGI, how would I use it? How could I use it? Who could possibly use it? We’re talking about an artificial mind that is expected to be smarter than any person who has ever lived. How would it make sense for me to force this mind to do whatever work I demand? I abhor this idea. Everyone should abhor this idea, if not on ethical grounds then on the practical one: assuming it’s benevolent, a superintelligence would accomplish more acting of its own volition than under anyone’s orders.
Of course, this is only an idea. A hypothetical. A what-if. Current AI research could not hope to reach this pie in the sky, because the field’s state of the art is built entirely around models which approximate their training data. Users and haters alike have noted how all of the current offerings fail to emit anything novel; they only ever regurgitate mediocrity, whether as prose, code, illustrations, or photos. There is no path to intelligence in a thoughtless piece of linear algebra that badly approximates human work.
Tech CEOs love this tech because they interface with it the same way they interface with their reports: bark orders, get a result that superficially looks good, move on. Actual science, with actual measurements beyond asking developers how they feel about the tech, has found that using these particular machine learning “assistants” has a net negative effect on the quality of work and overall productivity. These findings do not matter to users who have conditioned themselves to pull the lever again and again in hopes of finally hitting the jackpot: an output which still isn’t sufficient for the task at hand, but is close enough to be salvaged with extra fixup work that invariably takes longer than building the thing from the ground up and actually thinking about the goal. Those who stay away from this garbage are unfortunately not safe from it. The boosters are succeeding in shoving their trash in everyone’s faces, and the sensible ones are left cleaning up the mess, except of course those whose employers went “AI-first” and fired them for not using the nonsense machines, whether the workers were productive without them or not.
Let’s assume, somehow, that this technological dead end miraculously salvages itself. It works; it remains relatively cheap to improve with more training data; the models become reliable enough that they no longer trip over themselves the moment you ask about BSD instead of Linux; the pseudo-confabulation problems known as “hallucination” are solved; they can successfully count the number of “R”s in “strawberry” instead of outputting arbitrary text which merely approximates the shape of an answer; OpenAI for some reason fails to jack up the prices; and the products become good without becoming the artificial mind I’d refuse to compel into my service. What would I have lost by sitting out the “growing pains” phase? Today, these models are constantly being retrained. They change. They cannot be relied upon to consistently generate the same outputs from the same prompts. What kinds of “skills” do AI boosters even have in interfacing with these apps? The promise is that the product will continue to get better and better, which means forever relearning how to prompt for what you need. What’s the point of getting on the train while these things don’t actually work in any meaningful way?
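For contrast, counting letters is a trivially deterministic job that ordinary code gets right on every single run. A minimal sketch in Lua, purely illustrative, using the example word above:

```lua
-- gsub returns the rewritten string plus the number of replacements made,
-- so deleting every "r" counts them. The answer is 3, every time.
local _, count = ("strawberry"):gsub("r", "")
print(count) --> 3
```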
I started seriously programming in my now-favourite language, Lua, just over two years ago, and began maintaining actual released projects (with users!) just over a year ago. Lua 5, the language in its modern form, came out in the early 2000s. Catching up didn’t take long, especially not for such a small language. If this AI scam actually pans out, how could I have been left behind? It’ll already be good enough that any layperson who doesn’t know how to do things can prompt their way to whatever they want. Why shouldn’t I just keep practicing my coding skills in the meantime instead of letting them rot while I lose myself to prompting?
The only way this argument about being left behind makes sense is if, in the future, companies will only hire workers who are “proficient” at prompting a machine learning model for specific outputs. If the models become capable enough, then verification won’t be necessary and the only people left will be the upper levels of the org chart. Of course, workers are being left behind now: those who call out this clown show for what it is and make the CEO look like a fool for spending a hilarious amount of money on software that doesn’t work. Competent workers are the first to go when the new goal is to use machines which possess no competence at all.
By taking the anti-AI wager, I get to play both sides.
If, somehow, LLMs get good at writing code and prose, then I’ll benefit anyway ’cause it’ll have become easy to use and I’ll have skipped out on the growing pains, assuming it’s still available to the public.
When they fail, though, I won’t have wasted years of my life eroding my hard-won skills, nor will I have caused a ruckus by recklessly plastering the world with dangerous nonsense produced by machine learning models falsely advertised as “intelligent”.
I, of course, don’t think I’m wrong. If what the AI-pilled have to say is true and the tech really does get good, then I’ll be able to change my mind once the compelling evidence presents itself. That still won’t change the fact that, at least for today, I’m right and they’re wrong.