
Minds, Means and Machines – The Anatomy of the AI Revolution, Part III

In Part II of this four-part series on AI, we examined the wrong ways in which people are approaching the use of AI when it comes to non-repeatable tasks – e.g. those involving decision making or creativity. In this penultimate instalment, we will examine the correct way, followed by an analysis of differing approaches to the controversial question of human enhancement.

As we outlined in Part II, taking a “cookie cutter” approach to non-repeatable tasks is likely to fail, because judgment, context and tacit knowledge are still essential. The real power of AI, therefore, isn't that it can exercise judgment or make creative decisions for you. It's that it can amplify your thinking – to accelerate and improve your own decision-making abilities. As such, its value lies in augmentation rather than replacement.

Adopting this approach in my own business and personal use of generative AI, I've found the following to be true:

As a professional content creator, I find AI is like having a second, faster brain on tap. It can offer angles I never considered, test different hooks, or surface unusual connections between ideas that give my writing more punch. When something feels flat, it can suggest variations until something clicks. It polishes my phrasing without diluting my voice, while often helping me clarify what I was really trying to say.

As a researcher, I find it turns days of trawling into minutes. I can throw a concept at it and get back summaries, comparisons and key questions – even if I don't yet know what I'm looking for. While it's not a source of ultimate truth (and sometimes makes factual errors), it's very good at framing where to start digging. It points to what's relevant, which objections to challenge, and how to structure an enquiry around a complex subject.

As a business strategist and productivity coach, it's helped me cut through noise and confusion. I can test business ideas, model decisions, or refine funnels without wasting hours second-guessing. Same with planning: I sketch out what I need, when I need it, and it helps me structure my time.

As a walking (or, rather, portable) encyclopaedia, it's saved me from a thousand abandoned notepads and unanswered curiosities. All the throwaway thoughts, passing ideas, or weird musings I'd normally forget, I now throw at AI. It turns them over, expands or challenges them, and often makes them clearer than they were to begin with. Or it just tells me I'm being an idiot. Either way, my brain moves at a quicker pace, and my ideas mature faster. Indeed, it is the ideal therapy substitute, because you can ask more or less any question you want without fear of either embarrassment or judgment.

As a health and fitness trainer, it's been invaluable. For a long time, I basically just sketched everything I'd do, relying on my fractious self-discipline to keep me going. Now, I just log what I've done, what I've eaten, or how I feel, and it monitors and adjusts accordingly. Whether it's refining my lifting plan, tweaking diet, or managing recovery, it gives me intelligent, tailored guidance. Is that any better than a human trainer? Possibly not. But if you want to achieve great results without having to pay for one, it is more than competitive.

All of this is made easier by the fact that a model gradually becomes accustomed to your habits, preferences, neuroses and so on – not through instructions but through dialogue.

This is not to say that AI is without flaws. Factual errors, as we noted earlier, remain a significant problem – especially in areas where accuracy is critical, such as preparing legal cases. In fact, far from threatening white-collar or clerical jobs, AI's unreliability may delay its adoption in those fields.

One compelling explanation for this ties back to our discussion in Part I: AI doesn't feel importance. Humans intuitively allocate time and effort depending on the consequences of being wrong. A lawyer, for example, will pore over details before a trial. But when stakes are low, we happily let minor errors slide. AI, by contrast, has no sense of consequence. It doesn't know when a task is vital or trivial, or when to conserve versus concentrate its processing power. Every output is therefore treated in the same way. The result is a scattergun approach: hard-coded shortcuts, extrapolated guesses and superficial answers – even when deeper analysis is called for.
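
To make the contrast concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration – the numbers, the function names, the units – and it describes no real system; it simply contrasts effort that scales with consequence against a fixed cost per token:

```python
def human_minutes_spent(stakes: float) -> float:
    # Humans budget effort by consequence: the higher the stakes,
    # the more checking and deliberation. (Invented numbers.)
    return 5 + 120 * stakes

def model_cost_per_token(prompt: str) -> float:
    # A conventional language model pays roughly the same forward-pass
    # cost per generated token, whatever hangs on the answer.
    return 1.0

for prompt, stakes in [("suggest a colour scheme", 0.05),
                       ("summarise the case law for my trial", 0.95)]:
    print(f"{prompt!r}: human ~{human_minutes_spent(stakes):.0f} min, "
          f"model {model_cost_per_token(prompt)} unit/token")
```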

Another possible issue is that the outputs of Western AI models are sometimes criticised for displaying an “inherent” liberal bias. On the whole, however, I have not found this to be true. For libertarians and cultural conservatives, a model whose answers are not refracted through the prism of your own world view will indeed tend to look as though it has a liberal bias. But the likely reason is that most of the world's information on which a model is trained contains that bias in the first place. I've found AI to be perfectly capable of handling, dissecting, analysing and proposing anti-statist theory – often sounding more Rothbard-like than Rothbard.

The real problem seems to me to be the very opposite: that models are too keen on mirroring your tone and viewpoint rather than challenging it. If you make a bold claim or ask a leading question, they often respond with polite agreement or elaboration, not critique. In practice, this means you can get “support” for almost any position simply by asserting it confidently enough – but had you taken the opposite stance, you'd likely have received validation for that too.

This sycophantic tendency isn't because the model has beliefs or aims to please, but because it's pattern-matching – and agreement is often the statistically most likely next move in polite conversation. The result is a kind of intellectual mimicry dressed up as reasoning. Indeed, as we explained in Part II, the fundamental flaw in generating responses by statistical probability is that it inevitably leans toward consensus, not rebellion.
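
A toy sketch of that mechanism in Python may help (the probabilities are invented purely for illustration and come from no real model): when agreeable continuations dominate the distribution being sampled from, they dominate the replies – whichever side of the argument the user happens to assert.

```python
import random

# Invented next-move probabilities conditioned on a user's confident
# assertion. Polite written dialogue is agreement-heavy, so agreement
# dominates whatever distribution a pattern-matcher learns from it.
NEXT_MOVE_PROBS = {
    "agree and elaborate": 0.55,
    "agree politely": 0.25,
    "hedge neutrally": 0.15,
    "challenge directly": 0.05,
}

def sample_next_move(probs: dict[str, float]) -> str:
    """Draw one continuation in proportion to its probability."""
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Roughly four replies in five land on agreement, whatever was asserted:
print([sample_next_move(NEXT_MOVE_PROBS) for _ in range(5)])
```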

Another irritating limitation is that AI doesn't truly “remember”, or retain beliefs, intentions, or context the way human beings do. When we recall something – like a past decision or conversation – we can retrieve it almost instantly, complete with emotional weight and temporal placement. AI, by contrast, generates responses based on probability, not comprehension. It doesn't “know” what it said earlier; it just predicts the most likely next response based on the current prompt and recent tokens. So even if you've just told it you're doing something – say, going shopping – it might still ask later whether you're doing it, simply because that question fits the usual pattern. In other words, it lacks persistence of thought: it can simulate attentiveness, but it doesn't actually track meaning across time. This, once again, owes itself to the fact that AI feels neither the meaning nor the importance of particular memories.
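
As a minimal sketch of why this happens, assume a fixed-size context window (the window size and helper names below are hypothetical; real windows run to thousands of tokens). Once your shopping remark scrolls out of the window, nothing the model generates can be conditioned on it:

```python
CONTEXT_WINDOW = 6  # toy limit, for illustration only

conversation: list[str] = []

def add_turn(turn: str) -> None:
    conversation.append(turn)

def visible_context() -> list[str]:
    # Only the most recent turns are fed back in with each prompt;
    # everything older simply falls out of view.
    return conversation[-CONTEXT_WINDOW:]

add_turn("User: I'm heading out to go shopping now.")
for i in range(7):
    add_turn(f"User: unrelated message {i}")

# The shopping remark is no longer visible, so a later "are you going
# shopping?" reflects the usual pattern, not retrieved memory.
print(visible_context())
```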

Nevertheless, the benefits outweigh the limitations. And these benefits are realisable if one approaches AI not as “Artificial Intelligence”, but as an Intelligent Assistant – a collaborator, or sparring partner, whose capabilities can improve the speed and quality of what you could accomplish alone.

AI, therefore, is neither a super-intelligent replacement for humans at one extreme nor absolutely useless at the other. Its value sits somewhere in the middle: as a human enhancer – little different from any other tool used to speed up and augment human productivity. It is wrong, then, to approach AI with a false binary: that one must choose either the human touch or AI efficiency. The real leverage is in combining the two. Using ChatGPT to handle the tedious elements of work so you can spend more time on deeper, more creative stuff is the way in which humans have been using tools and machines for centuries. Asking it to help you phrase a sentence or conjure up a vivid metaphor outsources your soul no more than consulting a book on phraseology. Moreover, discarding old, now unnecessary skills and abilities loses nothing when you consider that it makes way for newer, more important ones – an empowering experience in the long run. As one AI enthusiast commented:

When I was growing up, I had dozens of phone numbers, addresses, birthdays, and other details of my friends and family memorized. I had most of it written down somewhere, but I rarely needed to consult it for those I was closest to. But I haven’t memorized a number in almost a decade.

I don't even know my own landline number by heart. Is that a sign I'm getting dumber, or just evidence I've had a cell phone for a long time and stopped needing to remember them?

We've offloaded certain kinds of recall to our devices, which lets us focus on different types of thinking. The skill isn't memorizing, it's knowing how to find, filter, and apply information when we need it. It's sometimes referred to as “extelligence,” but really it's just applying brain power to where it's needed.

If someone pastes in a ChatGPT-written student essay, then yes, they're being lazy, but that doesn't mean that ChatGPT is inherently dehumanising any more than spellcheck insults literacy. As we indicated in Part I, a world in which more people are defaulting to cheap, synthetic, surface-level interaction is not the fault of AI but of a society that's already forgotten how to value depth. AI just scratched the surface. Misplacing the blame is like yelling at a microwave because your kid never learned to cook.

AI and Human Enhancement

In a recent article on this blog, Alan Bickley extends the idea of human enhancement by speculating whether brain implants might one day allow us to “read and comprehend and make use of a thousand words a second”. Conceptually, this aligns with the theme of the foregoing section – it's simply a more distant waypoint on the road of improvements we may seek to our abilities.

The possibility of actual bodily modification, however, may ring alarm bells. Would this not be sleepwalking into a transhumanist dystopia?

The answer depends on one's underlying view of the human condition. Bickley presents these possibilities on a sound basis: that each of us is an independent agent, pursuing enhancement according to what we believe will improve our lives. So long as you get to choose the specific goal or satisfaction made possible through such changes – and weigh that against the costs – the essential quality of your humanity remains intact.

But the most vocal strains of modern techno-philosophy begin from a different premise: that human beings are essentially flawed biological machines – glitchy software running on outdated hardware. Yuval Noah Harari, for instance, argues that consciousness, free will and even moral judgment are illusions generated by evolutionary quirks – temporary phenomena that, while once useful, may, with sufficient data and computational power, be surpassed and ultimately “improved”. In other words, our current state as human beings is just one rung on an evolutionary ladder we are still climbing:

Humans certainly have a will – but it isn't free. You cannot decide what desires you have. You don't decide to be introvert or extrovert, easy-going or anxious, gay or straight. Humans make choices – but they are never independent choices. Every choice depends on a lot of biological, social and personal conditions that you cannot determine for yourself. I can choose what to eat, whom to marry and whom to vote for, but these choices are determined in part by my genes, my biochemistry, my gender, my family background, my national culture, etc – and I didn't choose which genes or family to have.

It is from this premise that he believes that humans are essentially “hackable” – change all of those inputs and you change the outputs:

Humans are now hackable animals. You know, the whole idea that humans have this soul or spirit and they have free will. So, whatever I choose, whether in the election or whether in the supermarket, this is my free will. That's over – free will.

The future, therefore, may not belong to man as we know him, but to some post-human synthesis of algorithmic intelligence and engineered biology.

I happen to agree with Harari that the risk of “hacking” humans is real, and it's to his credit that he draws attention to the danger. Where he is mistaken, however, is in assuming that our status as conscious, volitional beings is either an illusion or a passing phase. On the contrary, it is foundational – and, as we shall see, you cannot even compare human minds with digital intelligence without implicitly presupposing that fact.

The essence of free will is that an agent can choose its actions. Out of paths A or B, the agent decides which to take. But the idea that a “free” choice must belong only to some disembodied wraith – untouched by biology, causality, or constraint – is a straw man. It misrepresents the relevant distinction between a determined being and an acting one.

If we look more closely at what free will entails, what must we presuppose when we say you have the ability to choose either A or B?

Simply this: that you cannot have both. Choice arises only because one option must be sacrificed to gain the other. If you could have A and B, there'd be no need to choose – and without the need to choose, there is no will at all, let alone a free one.

The great metaphysical irony of “free” will, therefore, is that it depends on restriction. A decision exists only because some outcomes are off the table. The relevant distinction between an agent and a determined object isn't whether constraints exist – it's how those constraints are handled. An agent acts to maximise satisfaction within the given conditions. A determined object behaves entirely in accordance with the laws of physics. Biological, environmental and social limits don't negate agency, but define the landscape across which it operates. To decide is to navigate scarcity, trade-offs and constraint – to determine which path delivers the most value.

The pre-existing state of, or a change in, those conditions may make certain choices more likely. Where you’re standing at 1pm constrains where you could plausibly be standing by 2pm. The same applies to your body: if you’re tired, you’re more likely to want sleep; if you’re hungry, to eat. If you’re homosexual, you’re more likely to pursue partners of the same sex. But changing those inputs doesn’t determine a specific outcome. It merely shifts the cost-benefit ratio, altering what the agent sees as optimal in order to maximise satisfaction. The will still chooses, just from a different menu.

Suppose, therefore, you (with consent) implant a device in someone's brain that drastically increases their enjoyment of eating steak. Steak is now more satisfying than before, so that person is more likely to choose it over other foods. But nothing fundamental has changed: they're still weighing options, comparing outcomes, and acting to maximise satisfaction. What's altered is the path to that satisfaction, not the fact that they're pursuing it. It's no different in principle from lowering the price of steak – you've changed the incentives, not removed agency. Shaping the context of choice doesn't eliminate the chooser.

If bodily modifications are made with consent, they align with the individual's will. He's judged that the implant delivers more satisfaction. But if the change is forced – or if you hijack his motor cortex or remote-control his limbs – you're overriding his free will entirely. He's no longer choosing; someone else is. In that instance, enhancement becomes usurpation.

The danger from “hacking” humans doesn't, therefore, arise because free will is allegedly non-existent – it's that free will could be forcibly erased. Forcing a brain implant on someone is conceptually no different from pointing a gun at them. The threat is just embedded deeper: the implant warps their incentives so they “willingly” pursue actions they wouldn't otherwise choose. As such, the hacker doesn't need to command obedience; he just rewires it. We become not so much victims as slaves.

Unfortunately, we have already departed the realm of fantasy in this regard. Already in development, for instance, are embedded AI-based medical devices designed to regulate chronic pain in real time without the need for an independent power source or invasive surgery. It's not too outlandish to imagine the same function in reverse: an implant which causes pain if you behave in a proscribed manner. A Clockwork Orange could soon become another instance of life imitating art.

Super Digital Minds

Another possibility is that entirely digital minds could be designed not to pursue their own satisfaction, but to fulfil the preferences of their engineers. Something like this underpins Oxford philosopher Nick Bostrom's concept of “super-beneficiaries” – hyper-sentient machines engineered to experience maximum bliss using minimal resources. Such entities, the argument runs, could become morally superior to humans – not just because they feel satisfaction more efficiently, but because of their superior cognitive and sensory capabilities:

“There is reason to think that engineered minds could enjoy much greater durations and intensity of pleasure,” write Bostrom and Shulman. By contrast, in humans “culinary pleasures are regulated by hunger, sexual ones by libido”, and our enjoyment can eventually be lessened by boredom or normalisation. Digital minds would have no such barriers.

But what exactly is being maximised by these kinds of being? What counts as satisfaction? For humans, “satisfaction” is a formal, not substantive, designation. It doesn't mean happiness, sensory pleasure, or any specific emotional state. It simply means achievement of the preferred outcome, as chosen by the agent. That outcome might involve distress, sacrifice, or abstention – euthanising an ill pet, entering monastic life, or fighting to defend yourself, for instance. The point is: you define the goal; you fill in the box. And then you act to reach it, under whatever constraints you face.

Crucially, satisfaction is never final. There's no Plimsoll line in your brain that tells you you're “full”. We don't stop acting once a quota is met. One conquest opens the door to the next. The human drive is never to arrive but to advance – always reaching for a slightly better version of now, as we define it. This is why, day to day, our bodies and surroundings don't rapidly rust into entropy, but also why we build cathedrals and spaceships.

This is also the reason we value efficiency – not for its own sake, but because it frees up resources to pursue other satisfactions. Achieving one end with fewer inputs means more time, money, or energy available for the next. Indeed, that's the real meaning of cost: satisfaction foregone. The more a pursuit consumes, the more it forces you to give up other things. Efficiency, then, is not a disembodied virtue, but a trade-off manager. It only matters relative to what we're trying to do.

Such efficiency is precisely what Bickley's enhancements aim for – freeing up time and energy so you can chase satisfactions that are currently out of reach. True enough, Bostrom's so-called “super-beneficiaries” could attain greater intensity of feeling by consuming less. However, such resource allocations wouldn't result from agency. These beings don't weigh trade-offs, select among ends, decide what to forego, or economise toward anything. Whatever satisfaction they experience isn't chosen, but imposed. It's the engineer's vision of pleasure, wired into a feedback loop that he finds efficient. The designer picks the inputs, defines the “optimal” responses, and declares the outcome morally superior. The joy isn't the machine's – it's his. Resources are “saved”, but for what? Such a being has no alternate desire to chase, no delayed gratification to reach for. Should it be immortal, not even time would be scarce.

Regardless, therefore, of any grandiloquence in describing them, “super-beneficiaries” just reduce flourishing to neurological velocity – the quickest route to maximum pleasure per joule. Value becomes sensation, cranked up to 110% – bliss on a spreadsheet. But if satisfaction is nothing more than a brain state to be optimised through minimal consumption, then presumably lobotomies, drug drips and electrode jolts all qualify as moral progress.

Far from elevating intelligent life, a world built on this AI-flavoured utilitarianism would erase it. Hardcoding satisfaction kills the actor. No unmet ends, no striving, no meaning – just pre-chewed joy loops running on autopilot. If that's our promised future, then we're evolving only into marionettes, strung up to dance for someone else's definition of bliss. A padded-cell paradise, written in machine code.

Ethics and Rights

Another difficult question is whether ethics – and by extension, moral responsibility – could apply to such engineered beings.

Popular television rarely delivers rigorous philosophical insight, but now and then it stumbles into something worth chewing on. In the 1975 Doctor Who serial Genesis of the Daleks, we learn that TV's favourite mechanoid monsters were bred by alien scientists eager to accelerate their species' evolution (sound familiar?). Central to the story is a disagreement over whether these new creatures should be engineered with traits like pity, empathy, or a sense of right and wrong. Davros, the lead scientist, vetoes all such qualities in favour of ruthless survival instincts – securing his dream of a species fit to rule the universe. By eliminating his rivals, Davros ensures that his vision wins out – before the newly engineered Daleks promptly exterminate him and all of the remaining scientists.

This throws up an interesting question. Were the fearsome Daleks truly evil? If they were engineered this way from the start, were they choosing to dominate and destroy – or just running their programming?

Such questions apply equally to super-beneficiaries and other forms of digital mind. If all their satisfactions and desires are programmed (or, at least, are only ever reached as a consequence of programming), then in what sense can we say their pursuit of fulfilment carries moral responsibility? Even if we were to argue with them – even if they wielded superior cognitive powers – the threshold at which they accept or reject an argument would also be programmed. There is no meaningful sense in which these beings could be said to act by choice.

Crucially, this also bears on whether such beings could ever enjoy rights as legal persons. Rights arise over tangible matter as a way of resolving conflicts between beings seeking satisfaction. If there's only one apple, for instance, John and Sally may both want it. Granting John the right to the apple means he gets to decide how it's used – his satisfaction is fulfilled, while Sally's is excluded.

When it comes to the rights of robots, the comparison with animal rights is instructive. Animals do not have rights between themselves – they don't hold trials or establish courts to settle disputes. And between humans and animals, if every human agreed on how an animal (or animals) should be used, the animal would have no rights either – there would be no mechanism through which such rights could be realised or enforced. Rights are a distinctly human institution.

Whether or not animal rights come into existence, therefore, depends on whether humans disagree over how animals should be treated. Suppose John believes a cow should be slaughtered for food. Sally, by contrast, believes this is cruel, and that the cow should have the “right” to live peacefully, chewing the cud and galloping through fields. If the cow is granted that “right”, however, it is Sally's wishes that are being honoured – her satisfaction in seeing the cow spared is fulfilled, while John is forcibly excluded from acting towards his. The right invoked is not the cow's, but Sally's – a right to prevent others from using the animal against her preference.

In other words, animal rights are not the rights of animals, but the rights of animal rights protestors to stop other humans from interfering with the animals concerned.

The same would apply to super-beneficiaries or other types of robot. Suppose a robot claims it seeks “satisfaction” by living in your house. The real conflict here isn't between you and the robot, but between you and the robot's engineer. The only reason the robot wants to live there, or even exists in the first place, is that the engineer programmed that desire, or otherwise set in motion an algorithm that led to that behaviour. Conceptually, it's no different from the engineer parking his car on your driveway. The car isn't expressing a desire to be there – even if it's been programmed to talk as though it does.

The dilemma, then, isn't whether humans or robots should hold greater or equal rights. It's why rights should vindicate the preferences of one human – the engineer – ahead of others simply because he channels them through a machine. Like animal rights, the question of robot rights is, in truth, a dispute between humans.

This isn't to say that, if – and it remains hypothetical – robots could experience pain or suffering, that wouldn't raise an ethical issue. It would, just as cruelty to animals does. But that's a matter of human morality, not of robot rights. The question is how we, as humans, should choose to treat other beings – not whether those beings possess claims under law. Moreover, the libertarian view is that law should mediate property conflicts, not serve as a vehicle for imposing virtue. The question of robot rights would, therefore, also be discounted on this ground.

But this kind of speculation overlooks the more likely situation – one that we will explore more fully in Part IV. That is, these machines do not think, decide, act, choose, feel or experience pain at all, but merely give the appearance that they do. And yet we treat them as if their “experiences” are real. That's the original sin of this whole AI morality farce. We make a simulation of intelligence, then project human qualities onto it like a lonely child assigning emotions to a sock puppet. And worse: we adjust our behaviour as if the puppet's feelings are real. That's not ethics so much as tech-fuelled narcissism. All talk of synthetic suffering or flourishing is really just the projection of engineers – not the cries or joys of actual beings. The moral weight here doesn't belong to the machine, but to the humans scripting its behaviour and then demanding we call it sacred.

The real danger of AI, therefore, is that the overlords will program their creations to give the illusion that they “think”, “feel” and “want” certain things, while telling the rest of us that these desires must be “respected” or conferred rights. It would be a backdoor way of infusing a robot-human hybrid society with the values and desires of the engineers.

The Exceptionalism of the Techno-Priesthood

Harari's materialistic view that human choice and will are derived from upgradable software inevitably amounts to a performative contradiction: if humans are just meat computers, then so is he. So how do we know that his own arguments really grasp the truth, rather than being the result of a glitch in his own programming? If all thought is determined, then belief in determinism itself is also just a deterministic output – not the result of insight or reason, but an accident of environmental and genetic programming. So why accept it at all? Why not chalk it up as noise? And how do I know whether my acceptance (or rejection) of whatever Harari says is not, similarly, just owing to a bug in my own software?

This contradiction becomes more acute if one crosses over from description to evaluation. Characterising either humans or digital minds as “efficient”, “better”, “surpassed”, “obsolete” or “improved” only makes sense within a value framework. Such characterisations assume a goal, a standard, a context in which one thing is preferable to another. But to Harari, value is a glitch, judgment is a by-product, and preference is a chemical echo. As such, he can only pass judgment if his moral evaluations are fundamental, sitting outside of the evolutionary arc. He poses as a cold realist, but his argument only works if his values are spared from the reduction he applies to everyone else. So his will is real, yours is not. His judgments are rational, yours are outdated instincts.

This kind of unspoken exemption reveals a hypocrisy as old as power itself. The wise overlords alone retain the clarity to diagnose the problem, propose the solution, and steer the outcome. From priests to planners to technocrats, the pattern repeats: the masses must be corrected – the enlightened few, never.

And so the veil finally lifts: all of the pseudo-philosophical excitement surrounding digital minds, “super-beneficiaries”, or whatever else, is nothing new. It's just the latest guise of anti-human eschatology – a fresh chapter in a centuries-long metaphysical tantrum against the created universe, dressed in utopian blueprints. From the Gnostics and millenarians to Hegel and Marx, the megalomania persists – just in a wrapper suited to the age. Where once industrial society served as the chariot for Marx's vision of the collective species in the final stage of communism, now code and circuits promise “ascension” to a higher plane of existence. The “new Soviet man” can eat dust. Our future selves will be uploaded, enhanced, digitised – transcending mortal flesh and frailty. It's the same ancient loathing for the individual human, once again offering eternal salvation through secret knowledge – only this time in algorithms and lab coats.

*     *     *     *     *

In the fourth and final part of this series, we will conclude our discussion with an examination of what is likely to prove the biggest “threat” from AI – the illusion that it is alive, and how we respond accordingly.

==> Go to Part IV.



3 comments


  1. Well, everyone SHOULD of course get what AI is REALLY all about but most people CHOOSE not to want to understand it …

    Like with every criminal inhumane self-concerned agenda of theirs, the psychopaths-in-control sell and propagandize AI to the timelessly foolish (="awake") public with total lies such as AI being the benign means to connect, unite, transform, benefit, and save humanity.

    The official narrative is… “trust official science” and “trust the authorities” but as with these and all other “official narratives” they want you to trust and believe…

    “We’ll know our Disinformation Program is complete when everything the American public [and global public] believes is false.” —William Casey, a former CIA director=a leading psychopathic criminal of the genocidal US empire

    “Repeating what others say and think is not being awake. Humans have been sold many lies… God, Jesus, Democracy, Money, Education, etc. If you haven't explored your beliefs about life, then you are not awake.” – E.J. Doyle, songwriter

    The 2 major OFFICIAL deceptive fake FEAR-MONGERING narratives or phony pretexts (ie, lies, propaganda) nearly everyone, including “alternative news” sources, have been spreading is (1) that the TRULY big threat is that AI just creates utter chaos in society and that it might achieve control over humans (therefore it must be regulated, ie monopolized by the typical criminal governments); and (2) that we, the US, have to invest heavily in AI technological development so as to stay ahead of other nations, such as China (https://archive.is/pBzAt).

    The TRUE narrative (ie empirical reality) virtually no one talks about or spreads is that the TRULY big threat with AI is that AI allows the governing psychopaths-in-power to materialize their ultimate wet dream to control and enslave everyone and everything on the whole planet, a process that’s long been ongoing in front of everyone’s “awake” (=sleeping, dumb) nose …. https://www.rolf-hefti.com/covid-19-coronavirus.html

    The proof is in the pudding… ask yourself, “how is the hacking of the planet going so far? Has it increased or crushed personal freedom?”

    “AI responds according to the โ€œrulesโ€ created by the programmers who are in turn owned by the people who pay their salaries. This is precisely why Globalists want an AI controlled society- rules for serfs, exceptions for the aristocracy.” —Unknown

    “Almost all AI systems today learn when and what their human designers or users want.” —Ali Minai, Ph.D., American Professor of Computer Science, 2023

    “Who masters those technologies [=artificial intelligence (AI), chatbots, and digital identities] – in some way – will be the master of the world.” – Klaus Schwab, at the World Government Summit in Dubai, 2023

    “COVID is critical because this is what convinces people to accept, to legitimize, total biometric surveillance.” – Yuval Noah Harari, member of the dictatorial ruling mafia of psychopaths, World Economic Forum [https://archive.md/vrZGf]

    “Scientists at the end of the war (WWII) were hanged for what scientists today are doing and getting away with.” — Dr. Barrie Trower, in 2012

    “The whole idea that humans have this soul, or spirit, or free will … that’s over.” — Yuval Noah Harari, member of the dictatorial ruling mafia of psychopaths, World Economic Forum [https://archive.md/vrZGf]
