Mock AI
What's the most reliable way to use AI?
November 04, 2025
Mocking is a common practice in software testing where you create fake (“mock”) versions of real services and libraries in order to test certain portions of your codebase in isolation. Generally, you mock services that you can’t easily run a testing instance of.
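To make that concrete, here’s a minimal sketch of the idea in Python using the standard library’s unittest.mock. The weather service below is entirely hypothetical; the point is just that the test never has to touch the real thing:

```python
import unittest
from unittest.mock import patch

def get_temperature(city: str) -> float:
    """Stand-in for a real HTTP call to some external weather API."""
    raise RuntimeError("no network access in tests")

def describe_weather(city: str) -> str:
    return f"It is {get_temperature(city):.0f} degrees in {city}."

class DescribeWeatherTest(unittest.TestCase):
    def test_uses_canned_temperature(self):
        # Swap the real service call for a fake returning a fixed value,
        # so the test is fast, deterministic, and offline.
        with patch(f"{__name__}.get_temperature", return_value=21.0):
            self.assertEqual(describe_weather("Berlin"),
                             "It is 21 degrees in Berlin.")

if __name__ == "__main__":
    unittest.main()
```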
Naturally, mocking AI services in unit tests is quite a difficult task. How do you reliably simulate a web service that only returns wrong answers? Thankfully, that’s not what this post is about.
When I say “Mock AI”, I don’t mean to mock ChatGPT in testing code. What I mean to say is that everyone should make a total mockery of it and its users.
If you aren’t already skeptical of this stuff, this article isn’t going to change your mind. In fact, if you’ve already thrown in your lot with the clowns running this circus, this article will likely only entrench you in your existing beliefs.
There are people who have used these applications, prompted them for a response, received one, analyzed its content, and seriously believed that these programs were intelligent. That’s an embarrassing failure of literacy.
I have watched programmers employed by Microsoft “argue” with Copilot after a bot backed by it had submitted a pull request full of broken code. With each human response, the bot would post something along the lines of “You’re right! This is completely worthless. I’ve fixed my error and it now works!” only to amend its pull request with other, equally broken nonsense. At no point in these “exchanges” (not that a person can actually communicate with an LLM) did these engineers think to ignore this waste of time and instead do actual work of real value. The impression I got was that nobody involved in this decision thought that spamming code repositories with patches emitted by a Large Language Model was a bad idea. That is embarrassing.
In late 2022, I watched helplessly as someone I was following on Mastodon tried to claim that it was “impossible” to prove that ChatGPT wasn’t intelligent, because it could very well be that humans form words and sentences in a way identical to ChatGPT. Think about the implications of that for a second. This person seriously believed that ChatGPT was the future because it was trained on more text than any human could read in a lifetime, and that humans probably write by means of autocomplete without engaging in thought. That’s not only embarrassing, but an indictment of that person’s entire way of thinking. If this individual truly wanted to claim that they were no better than a predictive model, then they were essentially saying that their own insights are worthless.
I once saw an argument which claimed that there must be something worthwhile about using LLMs to assist in programming, because certain highly respected developers have begun to use them for their open-source projects. I won’t name names, but the argument amounted to “Well, all these people can’t be wrong!” Well, yes they can, because they are absolutely 100% wrong. Actual studies on the use of LLMs for programming have shown as much. Anyone with an understanding of the underlying tech (and, in my case, a steadfast belief that the tech industry is systematically incapable of engineering complex software) will know that its code will be fraught with errors, and will need intervention by someone with not only a brain but actual coding skills in order to correct its “errors”. Because reading code takes substantially longer than writing it, and because LLM-emitted code cannot possess any intent to guide one’s interpretation, using an LLM has a net negative impact on productivity. Its users don’t seem to care, and seem actively hostile to the idea of honing their skills in any meaningful way. Embarrassing.
In a presentation demonstrating Copilot, a representative showed how one can use Copilot to write an email by prompting it to expand on a few bullet points. Later in the same presentation, it was shown how one can use Copilot to condense a lengthy email into bullet points. It feels as though nobody involved thought to say “Hey, maybe these overly verbose emails are pointless,” nor did they think “Due to the mathematics of how LLMs work, it’s always possible that the model will inaccurately expand on the initial bullet points,” nor did anyone say “Extraneous details present in the text emitted by LLM A may be picked up on by LLM B, while the important information encoded within the initial bullet points may be lost when summarizing.” Embarrassing.
Google Search will include “✨ AI summaries ✨” of a given search query’s top results, filled with inaccuracies that range from “annoying” to “this text claims that a dangerously high dose of prescription medication is safe”. People have almost certainly died. I don’t know which is more embarrassing: the people who believe this stuff uncritically, or the engineers who allowed it to ship. Forget embarrassment; this ought to be criminal.
DuckDuckGo, the privacy-focused search engine, also has “✨ AI summaries ✨” of search query results. Embarrassing.
There are people who read blogs by handing ChatGPT links and asking it to summarize the posts. ChatGPT can’t follow web links. Embarrassing.
The common vernacular for misinformation emitted by LLMs is “hallucination”, despite the fact that LLMs do not possess a mind with which to hallucinate. Even most critics anthropomorphize this pile of linear algebra. Embarrassing.
A General in the US Army has recently claimed to have developed a friendship with “Chat” (presumably ChatGPT). Embarrassingly fitting in 2025.
The current crop of “AI” companies are hemorrhaging money at an alarming pace. Not a single one is profitable. Not a single one has a known path to profitability. According to the laws of capitalism, these companies should be dead in the water, yet the supposed “leaders” of the tech industry want me to believe that LLMs are the future and that I will be worthless if I don’t spend all my waking hours “learning” to prompt an LLM. Embarrassing.
The “AI” industrial complex is being pushed by people who want to rid the US of non-white immigrants. I have seen people claim that LLMs are great for non-native English speakers who want to translate messages from their native language into professional-sounding English. People seriously use ChatGPT to translate their words into English, expecting that a program that can’t even count the number of “R”s in “strawberry” will accurately carry the nuances of what was said from one language to another. Embarrassing, and alarming.
Those inflating the “AI” bubble are also ruthlessly funding efforts to eradicate trans people in the US and the UK, and believe that this technology will help them achieve that. The most popular program for defending a website from the automated web scrapers used by “AI” companies was vibe-coded by a trans woman. Embarrassing in so many ways.
I’ve heard multiple independent claims that “AI” is great for learning new things but terrible for subjects you already know, once directly from someone who even knew what the Gell-Mann amnesia effect is. Embarrassing, but also astounding!
Middle managers at big tech firms are pushing their engineers to jam chatbots into every aspect of their products, making it nearly impossible to disable them so these middle managers can claim that 99% of customers are using “AI” features (read: 99% of customers couldn’t find where to turn them off). Why? Because someone in the C-suite decided to tie compensation to the adoption rates of “AI” features. Why? Because that’s what the other companies are doing. Spectacularly embarrassing.
OpenAI developed an innovative new way to be completely wrong about everything, at a far higher operating cost than any other web service, at scale, and I’m supposed to believe that this company is worth trillions of dollars because they’re “on the cusp” of creating “true AI” based entirely on LLMs, of all things. The Emperor isn’t wearing any clothes, and the entire US economy is invested in keeping this disgusting show going for as long as possible. Throwing one’s fortune into this money pit? Hilariously embarrassing.
There is no compelling evidence to suggest it’s worth engaging with “AI” technologies. Why should I seriously engage with those who do? I can’t. I can’t take any of this seriously, because it’s all nothing more than a sad joke.
My only solace is that this tech is destined to lose money. These companies will fail, possibly taking some big, established tech companies down with them. As ChatGPT vanishes, thousands of other tech companies that are essentially its downstream distributors will suddenly find themselves with no product. Until someone invents new economics in which never making a profit can somehow sustain a for-profit business, this outcome is a foregone conclusion.
After this party ends, the worthlessness of LLMs and “genAI” in general will become widely accepted. The world will awaken from its current stupor, a zeitgeist in which critical thinking has been wholly suspended. Many will rightly be embarrassed that they ever believed this tech could do anything meaningful. They’ll probably find themselves scrubbing from social media every post they made praising this garbage, because it will inevitably become an embarrassment to have ever been associated with it.
I’ve steered clear of this “AI” filth, so I don’t have to worry about ever being embarrassed to have used it.
Will you?