Words, Words, Words: The Delusions of AI Mania and Phobia

Tom Ellis
Jan 20, 2024

“The map is not the territory, and the name is not the thing named.” — Alfred Korzybski

I’d like to begin with full disclosure: I am not an expert on Artificial Intelligence (AI). I do know a few things about logic, however; enough not to be too afraid of what the AI revolution portends, since it is based entirely on logic — that is, on digital processing of coded information (language or mathematics).

Let’s start with mathematics. We all learned, from childhood, that 1+1 = 2. And that this statement is universally and unequivocally true, and is the very foundation of the validity and reliability of mathematics as a tool of inquiry, right?

But I can give you a few examples, from our everyday experience, where this statement is manifestly false.

If the numbers you are using represent clouds, then 1+1 = 1: two clouds that drift together merge into a single cloud.

If they represent two living organisms in copulation, male and female, then 1+1 can equal anywhere from 3 to a million, depending on the species. In my own birth family, for example, 1+1 = 5 (my parents, myself, and my two siblings).

And if you have a piece of chalk and snap it in half, then 1 divided by 2 = 2.

Obviously, then, the validity of the statement 1+1 = 2 depends on whether the referent of “1” can be modeled as a discrete, indivisible unit, like apples or bicycles or cells or atoms, but unlike copulating organisms, clouds, or pieces of chalk. It follows that the statement is only conditionally true, not absolutely: IF x refers to something that can be modeled as a discrete, indivisible unit, and x = 1, then x + x = 2.
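
To make the conditional concrete, here is a minimal Python sketch in which the outcome of “adding one and one” depends entirely on the combination rule we assume for the things being counted; the function names and the LITTER_SIZE constant are invented purely for illustration.

```python
# Illustrative only: "1 + 1 = 2" holds when the things counted behave like
# discrete, indivisible units. The combination rules below are invented
# stand-ins for the everyday examples above.

LITTER_SIZE = 3  # assumed number of offspring; varies enormously by species

def combine(kind, a, b):
    """How many things you have after putting a and b of `kind` together."""
    if kind == "apples":        # discrete, indivisible units: schoolbook arithmetic
        return a + b
    if kind == "clouds":        # clouds that drift together merge into one
        return 1 if (a + b) > 0 else 0
    if kind == "organisms":     # a breeding pair can yield a larger number
        return a + b + LITTER_SIZE
    raise ValueError(f"no combination rule for {kind!r}")

def split(kind, pieces, ways):
    """How many pieces you have after dividing them `ways` ways."""
    if kind == "chalk":         # snapping one stick in two gives two sticks
        return pieces * ways
    return pieces / ways        # ordinary division otherwise

print(combine("apples", 1, 1))     # 2 -> the schoolbook case
print(combine("clouds", 1, 1))     # 1 -> "1 + 1 = 1"
print(combine("organisms", 1, 1))  # 5 -> "1 + 1 = 5", as in my own birth family
print(split("chalk", 1, 2))        # 2 -> "1 divided by 2 = 2"
```

The arithmetic, in other words, is only as trustworthy as the modeling assumption smuggled in with it.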

Similarly with verbal language: syllogisms, the foundation of logic and hence the (mathematical) basis of verbal processing in computers and AI systems, can be formally valid or invalid regardless of whether their referents make any sense at all. Here is a simple example:

All spoons are octopuses.

All dragonflies are spoons.

Therefore, all dragonflies are octopuses.

Nonsense? Of course. But a perfectly valid syllogism. So if you fed the major and minor premises into an AI system, it would instantly conclude (in the absence of any available databases to cross-check definitions, or images to scan and digitize) that all dragonflies are octopuses.
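
Here is a toy sketch, purely for illustration and not a model of how any actual AI product is built, of a formal inference rule that chains “All A are B” statements. It happily derives the dragonfly-octopus conclusion from the two nonsense premises, because nothing in it knows, or needs to know, what the words refer to.

```python
# A purely formal inference rule over "All A are B" statements. The words are
# opaque tokens here; nothing below knows what a spoon, a dragonfly, or an
# octopus is, and the chaining works (or fails) regardless.

def closure(premises):
    """Transitively chain facts: if all A are B and all B are C, then all A are C."""
    facts = set(premises)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(facts):
            for (c, d) in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))
                    changed = True
    return facts

premises = {
    ("spoons", "octopuses"),    # All spoons are octopuses.
    ("dragonflies", "spoons"),  # All dragonflies are spoons.
}

for subject, predicate in sorted(closure(premises) - premises):
    print(f"Therefore, all {subject} are {predicate}.")
# -> Therefore, all dragonflies are octopuses.
```

Feed it true premises and you get sound conclusions; feed it nonsense and you get valid nonsense. The machinery is indifferent either way.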

This point was well made by Yuval Noah Harari, in a lecture posted on YouTube, where he pointed out (and I paraphrase) that AI can master only language, not the experienced reality to which language alludes. It can therefore mimic human behavior quite convincingly, by digesting vast reams of behavioral research through verbal or visual data analysis (number crunching again), but it does not actually experience anything the way we do. It can only function digitally, through rule-based systems. And those rules always rest on a set of presuppositions that belong to the systems' designers and programmers, not to the systems themselves.

Most crucially, as one of my dissertation mentors told me years ago, premises (or presuppositions) are pathetic: rooted in pathos, in feeling. The ultimate premise underlying any claim or linguistic assertion we make lies in our (nonverbal) feelings, not in our (linguistically formulated) thoughts as such. And our feelings are ultimately grounded in our experience as biological organisms acting on our own behalf to stay alive (that is, to eat, survive, and, if possible, reproduce).

And so there is, in my view, an unbridgeable divide between us, as biological organisms, and AI systems. We are teleogenic systems; they are teleonomic systems. A teleogenic system engenders its own set point, its own telos or goal, from the biological imperative encoded in its DNA: to eat, survive, and reproduce. So everything it does, everything we do, is oriented toward this internal set point, if we trace it back far enough.

Teleonomic systems, from thermostats and steam engines all the way up to the most complex computer networks and AI systems conceivable, have their set points externally determined by their designers and programmers: to accomplish some task (also externally determined) for which they have been designed and programmed. And they don’t give a damn what that task is. If you program a “smart bomb” to seek out and destroy an enemy target and to evade all incoming fire, it will never have any ethical qualms about carrying out its mission, even when that entails self-destruction at the end. Similarly, the magnificent Cassini mission to Saturn never hesitated to plunge into Saturn’s thick atmosphere and burn itself to a crisp at the end of its run, since it was programmed to do so. However “smart,” these systems are as lifeless as rocks or galaxies, and so they have no innate directive for self-preservation. This is why, for example, a self-driving automobile recently ran smack into a truck that crossed a main highway from a private road at right angles. Nothing in its elaborate programming enabled it to anticipate such an unlikely event — which would have been obvious to any human driver. And above all, it didn’t care, because it couldn’t. It was not alive.
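
A toy sketch of that distinction, with invented names: in a teleonomic device like a thermostat, the set point arrives from outside, and nothing in the system generates, questions, or cares about it.

```python
# A teleonomic system in miniature: the goal (set point) is supplied from
# outside by a designer or user; the device only measures error against it.
# Names and numbers here are invented for illustration.

class Thermostat:
    def __init__(self, set_point_celsius: float):
        # The set point is handed in; the device never generates its own.
        self.set_point = set_point_celsius

    def decide(self, measured_celsius: float) -> str:
        # Pure rule-following: no preference about what the set point "should" be,
        # and no stake in its own survival.
        if measured_celsius < self.set_point:
            return "heat on"
        return "heat off"

living_room = Thermostat(set_point_celsius=20.0)  # goal chosen by someone else
print(living_room.decide(17.5))  # heat on
print(living_room.decide(22.0))  # heat off
```

There is no line in which the device could supply its own telos; that absence is exactly the divide described above.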

So this is why I don’t worry about AI “taking over” or “destroying humanity.” I only worry about those rich and powerful bastards who are salivating at the prospect of using AI to replace their workforce and maximize their own wealth and power at the expense of the rest of us, as well as the politicians in their service (like a certain orange maniac I know of) aiming to consolidate power by rapidly disseminating disinformation and fomenting stochastic violence against their adversaries. So there is plenty to worry about in the AI revolution, but I strongly doubt that robots will displace us. They just don’t care.

Written by Tom Ellis

I am a retired English professor now living in Oregon, and a life-long environmental activist, Buddhist, and holistic philosopher.
