Eye and I: Thoughts on the Perennial Mystery of Self-Awareness

Tom Ellis
Oct 18, 2023

“The eye through which I see God is the same eye through which God sees me” — Meister Eckhart

One of my earliest childhood memories is of the moment I first became aware that I was fundamentally different from everyone else in my family. While my parents, brother, and sister all had heads and faces in addition to their bodies, I alone, it seemed, just had a body. Instead of a head or face, I had something radically different: a point of view.

Like anyone else, of course, I could look in the mirror and see a face looking back at me, but I could not see my actual face. I could only look out and see others’ faces. This perplexed me, and in some ways, still does.

Who is this person, this entity, looking out from behind my eyes? As cognitive scientists today like to say, this is the “hard” problem. Despite multiple theories, nobody seems to have a clue, intellectually or scientifically, about the nature of what we have come to call “consciousness.” Is my consciousness the same as, or different from, others’ consciousness? And what about my cats? Do they have a sense of self, as I and other humans do? Then what about insects? Or plants? Or bacteria? And what about collective entities, such as beehives, cities, nations, ecosystems, or Gaia herself — the whole living Earth? We simply don’t know. And since none of us has ever looked out on the world from behind the eyes of another (person, cat, or otherwise), we may never know.

One of the ongoing debates in cognitive science these days, since the recent explosion of innovations in AI (artificial intelligence), is whether machine intelligence, based on digital processing, can ever attain self-awareness analogous to our own. And if so, could such machines ever come up with ideas of their own? Or motivations other than those of their programmers? Or true self-awareness and its attendant pathologies, such as narcissism or neurosis? Although I have no claim to expertise on this vexing topic, I tend to doubt it.

The reason I am skeptical is that self-awareness, I believe, is rooted in our primordial biological survival instinct — the unique capacity of living organisms, from the simplest bacterium to the most complex multicellular beings (such as ourselves), to act deliberately on their own behalf, rather than simply reacting to external forces and impacts (like all nonliving entities, from subatomic particles to galaxies). In short, without the complex, self-replicating molecular system of DNA, RNA, and protein — bounded and protected by membranes, and “programmed” (as it were) by its inherent inclination to keep seeking out and feeding on available energy, avoiding predators, finding shelter, and finding mates in order to stay alive and reproduce — there would be no possibility of self-awareness, simply because there would be no “self” of which to be aware. After all, a “smart” bomb, however skilled it may be at evading incoming fire and seeking out its target, has no vested interest whatsoever in staying alive, since it is specifically programmed to destroy itself on impact.

This is the fundamental difference between teleogenic and teleonomic systems. Teleonomic systems are externally programmed to carry out the intentions of their inventors and users — whether a simple hand calculator, the JWST, or other spectacular examples of machine intelligence. Teleogenic systems, by contrast, are those that engender their own purposes from within the intrinsic, self-replicating logic of DNA, RNA, and protein. And those purposes all boil down to eating, surviving, and reproducing.

I would suggest, therefore, that self-awareness derives, ultimately, from this biological mandate we share with all other living beings: to keep on keeping on. The “self,” then, is a linguistic construct that coevolved with Homo sapiens to facilitate the ongoing task of acting on our own behalf — staying alive and, if possible, making babies. All the other living beings on our planet share this inborn survival mandate, and thus share some degree of awareness — if only of potential dangers, potential food, or potential mates — and therefore act on their own behalf, both individually and collectively (if they are social animals).

But self-awareness is possible only for linguistic organisms like ourselves. Once we evolved the ability to conceptualize and to make propositions about our concepts (i.e., to string together subjects and verbs in order to communicate), we alone could turn this capability inward to engender narratives about our own “identity” — such as a young child wondering why everyone else has a face, while he or she alone has a point of view. My cats can never engage in such speculation, simply because they have no language — no concepts, propositions, or stories about themselves.

Can computers speculate about themselves or others? I doubt it. They can only do what they are programmed to do, and we do the programming, for our own purposes. I therefore strongly doubt that the boundary will ever be breached between teleogenic systems like ourselves and teleonomic systems like calculators, computers, or AI networks. Unlike us, they simply don’t give a shit who they are, what they’re up to, or whether they live or die.


I am a retired English professor now living in Oregon, and a life-long environmental activist, Buddhist, and holistic philosopher.