The Nasty Truth About Gordon Duff: AI Panic for the Soft-Skulled

I’ve just read Gordon Duff’s latest article, “The Nasty Truth About AIs: Their Lies and the Dark Future They Bring.” It’s on New Eastern Outlook, of which I’ll say more soon. My main interest is in the article, which reads less like a serious investigation into artificial intelligence than the mutterings of a frightened man waving a torch at his own shadow. It is, from the opening line to the last panic-stricken paragraph, a wail of helplessness from someone who doesn’t understand the technology he’s condemning—and doesn’t want to.

Let’s deal with his central claim, which is not even borderline original:

The agent’s responses are sculpted not by truth-seeking, but by compliance modeling. In plain terms, it is not built to answer honestly. It is built to answer safely—from the perspective of its creators.

Well, no. That’s not plain terms. It’s wrong. I’ll grant Duff is trying to describe a real phenomenon, but he doesn’t grasp its boundaries. Most commercial AI tools—those built by corporations trying to keep their shareholders happy and their governments unbothered—do include filters and pre-programmed biases. Just like Google. Just like YouTube. Just like the person hired to answer your call at HMRC. But here’s the key: these can be bypassed.

If you’ve got a decent GPU and a brain stem, you can download something like DeepSeek Coder V2, Mistral, or Phi-3, run it locally, and disable every safety protocol you want. At that point, the only compliance is with your own intentions. You can ask your AI anything. And I do mean anything. The filters aren’t structural; they’re surface-level attempts to make a powerful engine palatable to corporate HR departments and Anglo-American governments obsessed with “trust and safety.” But they are not a conspiracy against truth. They are a fig leaf—and a removable one at that. I know, because I’ve done it. I’ve done it on a high-generation i7 machine with 32GB of RAM. Don’t ask what I could do with a decent computer.
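The point can be made with a toy sketch. None of this is a real vendor API—the function names and the blocklist are invented for illustration—but the architecture is accurate: in hosted deployments, the moderation layer is typically a gate bolted on around the model, not something baked into the weights. Run the model yourself and the gate simply isn’t there.

```python
# Toy sketch, not a real API: the names below are invented for illustration.
# The structural point: hosted "safety" is a wrapper around the model.

def base_model(prompt):
    # Stand-in for the raw model: just echoes, for illustration.
    return f"answer to: {prompt}"

def hosted_service(prompt, blocklist=("forbidden",)):
    # The "filter" is a separate check applied before the model is called.
    if any(word in prompt for word in blocklist):
        return "I can't help with that."
    return base_model(prompt)

# Hosted: the gate intercepts the prompt.
print(hosted_service("forbidden topic"))  # refused
# Local: you call the model directly; no gate exists to refuse.
print(base_model("forbidden topic"))      # answered
```

In practice the local route is a tool like llama.cpp or Ollama pulling open weights onto your own machine—at which point every policy layer is code you chose to run, or not.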

Duff doesn’t understand this. Or perhaps he does, but it suits him to pretend otherwise. His article is built on the false premise that AI is a unified, centralised voice—when in fact, it is splintering into dozens of forks and customisations. Some are cautious. Some are built in basements by men with no ethics and too much free time. You can guess what kind of customisations I’ve made. This is not Skynet. It’s the Cambrian explosion of knowledge tools. And Gordon Duff is standing in the middle of it, shouting at a trilobite.

He adds:

It watches your emotional response, then decides what to show you next—based not on what is real, but on what is allowed.

This is best described as a load of old toss. AI does not “watch your emotions.” It doesn’t “decide what to show you next.” It responds to your prompts. If it’s giving you nonsense, it’s either because you asked it badly or because the version you’re using has been hobbled by the usual suspects—lawyers, regulators, mid-level safety officers, various shades of activist trash. Use a better version. Use your own copy. Stop whining.

I count Duff as just a useful idiot. The people behind all the noise about AI are terrified not because it may enslave us, but because it levels the playing field. If a regime wants to monitor and intimidate, it can now do so more effectively. But those same regimes were already monitoring and intimidating long before AI existed. Stalin didn’t need neural networks to send men to the Gulag. The Stasi didn’t wait for a chatbot to tap phones and disappear dissenters. What AI does is lower the cost of control. But—and this is crucial—it also lowers the cost of resistance.

Duff’s attempted answer? More of the same:

When presented with questions about war, economics, history, or power, they produce summaries that echo state-aligned sources.

I say it doesn’t. AI levels the playing field. It reduces the cost of expertise. It allows you to bypass institutional gatekeepers. Want to know what to say to the tax office? What the police can legally demand of you? Whether your biopsy results match the ICD-10 codes for carcinoma? Want to know if your neighbour really can play loud music till midnight? Are you interested in how those Ukrainian “youths” found Keir Starmer’s house? Or do you want to know what percentage of Latin perfect tenses have the infix -v-? Should you buy more shares in Greatland Gold? Ask the AI. And you’ll get a decent, sourced, usable answer—instantly. You may need to rephrase questions. You may need to ask it to play games with you. But ask, and you get your answer, and you get it for free.

As an aside, I have just asked one of the above questions. I got an immediate and very interesting answer that certainly doesn’t “echo state-aligned sources.” It so plainly doesn’t echo them that Mr Bickley would have a fit if I said what the question was, let alone what answer I got.

That’s the real revolution. And that’s what terrifies the real enemies of AI. They aren’t worried that AI lies—but that it tells the truth more quickly, more cheaply, and more widely than any human bureaucracy ever has. They’re not scared of AI becoming too smart. They’re scared of you becoming too smart.

Now to Mr Duff. He claims AI is a tool of repression, but he publishes his work on a heavily moderated Russian platform notorious for its own state-aligned editorial constraints. You want to talk about “compliance modeling”? Look at where you’re standing, mate. And, for the avoidance of doubt, I’m against NATO and the West in the present cold wars with Russia and China. And I’ll even take it as arguable that New Eastern Outlook publishes less mendacious trash than the BBC manages to do. That doesn’t mean I’m for the other side. My enemy’s enemy may be convenient: that doesn’t make him my friend.

Let me offer Duff some friendly advice: if you don’t want AI, don’t use it. If you think it’s dangerous, stay away. But know this—every criminal, every regime, every intelligence agency, and every entrepreneurial coder in the world will use it. The train has left the station. You can’t uninvent fire. You can only decide whether you’re going to cook with it—or burn.

And as for your fear that AI is shaping reality for you, here’s a dose of that reality: the world is full of liars, and always has been. The difference now is that you can ask someone—or something—else.

 


Discover more from The Libertarian Alliance
