In one of my previous essays, I described Jeff Hawkins’ theory of intelligence. This essay relies heavily on that theory–if you’re not familiar with it, I recommend you read my other essay first.
The question of consciousness–of what it means to be “self-aware”–is one of the longest-standing unsolved problems in human history. It has shown up in practically every field from philosophy and theology to literature and the arts to multiple scientific disciplines including psychology, biology, programming, and even mathematics. Like the nature of sleep and dreams, it is one of those tantalizing problems that has resisted all solutions for millennia, despite being a fundamental part of our daily lives. Yet intelligence was also considered such a problem, and Hawkins’ theory tied a neat bow on it. Might the problem of consciousness be similarly solved?
Hawkins himself briefly discusses the question of consciousness in his book, but his answer–“consciousness is what it feels like to have a neocortex”–is oddly noncommittal for someone who spends almost an entire book debunking the similarly hand-wavy Turing Test. In fact, they amount to more or less the same response: “intelligence is what humans do” has been deferred to “consciousness is what mammals do.” It’s a thoroughly unsatisfying response to the question, and ultimately a cop-out. However, his theory of intelligence hints at a much more satisfying and illuminating answer to the ages-old problem of consciousness.
The first thing we need to understand in order to solve the puzzle of consciousness is that consciousness and intelligence need not be the same thing. This is where Hawkins falls short–he conflates mammalian intelligence (having a neocortex) with what humans call “consciousness” (also known as self-awareness). Yet it is Hawkins’ own discussion of intelligence in simple organisms that belies this assumption–if consciousness simply is intelligence, then anything that could be said to exhibit intelligence (i.e., all life on earth) could also be said to exhibit consciousness–a dubious claim. On the other hand, if it’s only the neocortex that gives rise to consciousness, then organisms without one (such as birds) can’t be said to be conscious. Yet some birds have demonstrated self-awareness on at least a superficial level, so this cannot be the whole story either. The only reasonable conclusion we can make is that although consciousness and intelligence are clearly related, they are not the same thing.
Since it seems clear that intelligence and consciousness are at least related, Hawkins’ theory of intelligence gives us a good place to start for forming a theory of consciousness, despite Hawkins’ own ambivalence on the subject. For one, it seems reasonable to begin from the assumption that intelligence is at least a necessary, though not sufficient, condition of consciousness. It doesn’t make much sense to talk of an organism that is conscious but not intelligent. So if intelligence is necessary for consciousness, then it follows that consciousness is almost certainly a particular kind of intelligence–a rather special kind, if it can be distinguished so readily from all other examples of the shockingly versatile neocortical algorithm. Intelligence is prediction, Hawkins argues. So what kind of prediction is consciousness?
The key to the solution lies in the relationship between consciousness, intelligence, and the self. Consciousness is often used interchangeably with the terms “self-awareness” and “self-intelligence,” and it is this last term which proves most illuminating: if consciousness is self-intelligence, and intelligence is prediction, then it follows that consciousness is self-prediction. Consciousness, in other words, is nothing more or less than recursive intelligence.
Like most recursion, the simplicity of this answer obscures its incredible power. What possible use could one have for a program that runs programs? Or a program that writes programs? How could you prove something about an infinite set with only two examples? How complicated could the equation z → z² + c possibly be? And what on earth could an intelligence gain from being able to predict its own thoughts, feelings, and behaviors?
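(That last equation, for the curious, is the Mandelbrot iteration: feed each output back in as the next input, and a one-line rule produces one of the most intricate objects in mathematics. A minimal sketch in Python–the function name and iteration cap are my own choices for illustration:)

```python
# A minimal sketch of the Mandelbrot iteration z -> z^2 + c.
# A point c belongs to the Mandelbrot set if iterating from z = 0
# stays bounded; in practice, if |z| never exceeds 2 within some
# iteration budget, we treat it as bounded.
def escapes_after(c, max_iter=100):
    """Return the step at which |z| exceeds 2, or None if bounded."""
    z = 0
    for i in range(max_iter):
        z = z * z + c  # the entire "equation": recurse and repeat
        if abs(z) > 2:
            return i
    return None

# Two inputs, one rule, wildly different behavior:
# c = 0 stays bounded forever, while c = 1 escapes within a few steps.
```

The whole point–and the analogy to self-prediction–is that nothing about those two lines of arithmetic hints at the boundary's infinite complexity; it emerges entirely from feeding the rule its own output.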
Well, let’s see…without the ability to differentiate our “selves” from our surroundings, language as we know it–permeated as it is with the distinction between subject and object–could not exist. Similarly, without the ability to understand our own understanding (and that our understanding can differ from others’), we would be unable to ask questions. Without an awareness of our own awareness, we would be incapable of imagining ourselves in someone else’s place–that is, of empathy and imagination. And without the ability to predict our own feelings and behaviors, as well as the behavior of others and our environment, our ability to plan for the future would be far more limited.
There is one implication of this theory that some may find uncomfortable: if consciousness is identical to self-intelligence, then just as there is some natural variation in human intelligence, we should expect to find some natural variation in human consciousness as well. That is, “conscious” is not the on-off binary it is so often imagined to be, but rather a continuum. But can I really claim that some people are more or less conscious than others?
Well…yeah. You probably know someone who tends to “drift” through life, not really knowing or caring much why they do the things they do, or what the consequences will be. You can probably think of many more who exhibit maddening levels of hypocrisy, seemingly unaware of the contradictions between their actions and their beliefs–or, if they are aware, excusing them with laughably thin rationalizations. Is it really so radical to suggest that these are failings of self-awareness rather than just personality? It seems obvious that humans and other animals exhibit different levels of self-awareness. If nothing else, this follows from consciousness being an aspect or type of intelligence: just as the smallest microbe exhibits intelligence, only at a much simpler level than human intelligence, so we should expect to find some organisms exhibiting relatively simple forms of consciousness while other organisms–such as humans–exhibit more sophisticated forms of self-awareness. But if humans are in general more conscious than other animals, it follows inevitably that some humans are more conscious than other humans.
This is probably not going to be a popular suggestion. Consciousness has historically been used many times as evidence of the immaterial soul, so suggesting that some humans are more conscious than others could be considered tantamount to suggesting that some people have more soul than others. I don’t believe in immaterial souls, so that particular objection doesn’t bother me, but even if I did there is precedent for the idea that some souls are stronger, or greater, or more valuable than others–how else to explain the popular belief that some souls go to heaven and others to hell? Even among other materialists, however, I suspect there are many who will consider this suggestion incredibly elitist. Wouldn’t this be “putting a price” on people? Aren’t I suggesting that, in essence, some people really are more valuable than others?
Well…yeah. I’ve already argued not only that we can put a price on other people, but that it is a moral imperative to do so! Self-awareness, like intelligence, is objectively an incredibly valuable skill. Personally, however, I find this thought encouraging rather than distasteful: not only does it mean that we have nothing to fear from machine intelligences suddenly “becoming” self-aware (self-intelligence would have to be deliberately designed, and would only be as flexible as we dictated), it also means that our own consciousness is not fixed! If consciousness is a skill and not an inherent, immutable property, then just like intelligence it can be cultivated–in other words, it is both possible and desirable to become more conscious.
How cool is that?