Why Skynet Hasn’t Been Created Yet — And Will Never Be

Olgierd Sroczynski
Synerise

The distinction between strong and weak Artificial Intelligence is related to a question raised in the 1950s: can machines such as computers become conscious and think like humans? Strong AI is basically a positive answer to this question. In 1950 Alan Turing described a set of conditions that, when fulfilled, would mean that machines had reached the human level of intelligence.

Can a computer think?

For Turing, what we call intelligence was just a matter of our perception: computers do not differ from the human mind in kind, and the only difference lies in computing power. If a computer can do any “purely intellectual” task and even exceed humans in doing it, there is no reason to say that it is not on the same level as the human brain. In his 1979 paper, Ascribing Mental Qualities to Machines, John McCarthy stated that “Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance”. Many of the fathers of AI, including Marvin Minsky, Herbert Simon, Allen Newell and Claude Shannon, shared this view and were enthusiastic about the chances of creating technology that works like a human mind but is also superintelligent.

This enthusiasm, along with bold prophecies about how soon we would have machines that can think, very quickly collided with reality. Human-Level Artificial Intelligence remained the subject of works by great science-fiction writers, like Arthur C. Clarke or Stanisław Lem, as well as much less talented ones, like Nick Bostrom.

Why would anyone want to create a sentient machine?

Science-fiction writers (some of whom call themselves “futurologists”) imagine that it would solve many of the problems we are facing as mankind. But since we still have absolutely no idea how such an entity would be created and how the “thinking like humans” part would be achieved, we can say absolutely nothing about what impact it would have on anything. It is like discussing the economic consequences of finding an alien baby in a little spaceship from Krypton: entertaining, but pointless.

I don’t mean we cannot say anything about the impact of AI on our economy, culture, law, politics or international relations. But that is still “weak” AI, which we have already invented and which becomes more advanced every day. By observing its capabilities and the changes it has already made, we can guess where it will be implemented next and what the consequences will be. It will probably take over some legal jobs and take care of accounting, recruitment, marketing and sales. We are able to build robots that will clean our streets and homes and manage food production and distribution. For many AI developers (and the vast majority of the ones I have had the opportunity to talk to about this) the question of whether a sentient computer is possible is irrelevant to their day-to-day work.

But thinking about strong AI is not just “adding something extra” to these kinds of predictions. Since we don’t know how a conscious machine would think and make decisions, we know nothing about the possible consequences of creating one. It could kill all humans, or enslave them as tools for its own goals. It could create a better world for all, solving the problems we cannot solve. It could just sit in its “room” and listen to Bach or write poetry; how can you exclude such a possibility?

Imagine being an alien from a galaxy far, far away, who came to Earth in 10,000 BC. You have just met some humans and spent some time observing their behavior. Would you be able to predict that one of them would some day write the Nicomachean Ethics, another would compose the Peer Gynt suite, and others would build the Gulag? Human consciousness is far more complex than such an observer could ever have predicted. How would anyone know what artificial consciousness would be like? If you are not a charlatan who just wants to make money selling books with apocalyptic prophecies about how conscious Artificial Intelligence will take over the world, there is only one reason to believe it is going to happen.

That reason is a very specific metaphysics. If we are able to create a conscious machine, almost everything about human nature that is now a mystery will be solved. The computational theory of mind is pure reductionism: reductionists believe that everything from painting The Last Supper to playing Minecraft can be explained by the mechanics of matter. Creating a conscious, human-like being by the power of science alone would prove that point. This is the real point of saying that thermostats have beliefs: the human mind is no different from any other problem-solving machine, and there is no scientific basis to say otherwise. Isn’t there, though?

AI trapped in a Chinese room

John Searle, in his 1980 paper Minds, Brains, and Programs, made an argument against such reductionism. Let’s imagine a person without any knowledge of the Chinese language locked in a small room. This person is provided with a set of small cards with Chinese symbols written on them, together with English instructions specifying which Chinese symbols should be returned in response to a given sequence of Chinese symbols. A Chinese speaker outside the room, who writes questions and receives answers in Chinese, could be convinced (assuming the instructions are complete and very detailed) that the person locked in the room speaks Chinese. In the same way, a person taking part in Turing’s test can be convinced that a machine thinks like a human, but the machine itself would just process information (syntax) without understanding it (semantics). A machine’s intelligence would be observer-relative.
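To make the setup concrete, here is a minimal sketch of the room as a lookup table, written in Python. The phrases in the rulebook are invented for illustration; the only point is that the program maps symbol sequences to symbol sequences without any representation of what they mean.

# A toy "Chinese room": answers are produced by matching symbols against a
# rulebook, never by understanding them. The entries below are illustrative.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    # Pure syntax: look the question up and return the prescribed answer.
    return RULEBOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你会说中文吗？"))  # looks fluent from the outside

From the outside, the exchange is indistinguishable from a conversation with a speaker of the language; inside, there is only lookup.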

One day I wanted to report a problem with my user account to the administration of my building (I couldn’t log in to see my invoices). I didn’t know the right address for reporting such an issue, so I picked the first email address I found in my mailbox, the one the administration had been sending me invoices from. I got a response like: “Hi there, emails sent to this address won’t be processed. Please go to your account on xyz.com and use the ‘contact’ tab.” But the tab was visible only to logged-in users, and that was exactly the issue I was reporting! I replied: “Hi, but I can’t log in, and that’s the whole point.” And I received the same answer as before. Since my relationship with the administration had been rocky in the past, I assumed there was a person sitting there sending me the same answer on purpose, just to tease me. So I replied with something like “Just stop, tell me where to send this goddamn email”. I was so angry that, despite the fact that I have created hundreds of such automated emails for my clients, I was convinced I was talking to a live (and evil) human, and I acted like an idiot.

Searle’s argument is brilliant, but not complete. Searle does not provide a strict definition of what understanding really is and where it comes from. There are definitely many different levels of understanding. If I saw a llama, a dog and a deer standing in a circle and making sounds, I probably wouldn’t understand what was happening. Seeing a group of people talking to each other, I can understand that there is a conversation going on; that would be the first level of understanding. Let’s say the group I see speaks some Alemannic dialect and I have no knowledge of this language, but I can tell it is Alemannic, since it sounds quite similar to standard German. Based on my knowledge of the German language and of geography, I can assume that these people are Swiss. I could also recognize words and understand what the conversation is about, yet without grasping the subtleties of the language known only to native speakers; and so on. There are plenty of ways to understand simple facts.

If understanding is not a phenomenon that can be strictly defined, it is quite difficult to say that some being (e.g. a computer) does not have the ability to understand something. However, in an apophatic way, we can say what human understanding is not, and then compare it with the computing processes done by machines.

What does it mean to be John H. Watson?

Let us take Sherlock Holmes and his amazing ability to notice small details and to profile people by using deduction. Here is the explanation Sherlock gives of how he knew that Watson had returned from Afghanistan:

Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and this is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan.

How did Sherlock’s brain produce such an analysis? Every logical sentence states that something belongs to a certain set. This is how it works in this case:

Operations such as w(a) → (w ∈ A) (“Watson is a military type, therefore he belongs to the set of soldiers”) are possible only when a certain set is already defined. But defining sets is not enough to get a result like w(a) ∧ w(b) ∧ w(d) ∧ w(e) → (w ∈ F) (“Watson is a soldier, a doctor, is injured and has a tanned face, therefore he was in Afghanistan”). We need to understand the meaning of being a soldier or a physician, and of living under British weather, to know which of these premises are more important than others. For example, you can define set C as C = A ∩ B, and in that case the implication w(a) ∧ w(b) → (w ∈ C) is trivial. But belonging to set A matters more in that case than belonging to set B for further reasoning about an element of C (namely, that if an army doctor is injured, he was injured as a soldier during actual combat), and the sentence w ∈ C says nothing about that. The decision tree after this sentence has too many branches to compute in real time (even if we use default logic). This is where Sherlock’s mind asks the question why, which allows him to skip most of the options. And frankly, this is the difference between the human mind and an algorithm.
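A rough sketch of that difference in Python (the fact names and the single rule below are invented for illustration): once the sets are defined, membership checks are trivial, but a purely syntactic reasoner faces every combination of memberships as a possible branch, whereas the “why” question treats only one combination as relevant.

from itertools import combinations

# Facts observed about Watson, expressed as set memberships (illustrative names).
facts = {"soldier", "doctor", "injured", "tanned"}

# Without semantics, every non-empty subset of the facts is a candidate
# premise for some further conclusion: 2**n - 1 branches to explore.
candidate_premises = [set(c)
                      for r in range(1, len(facts) + 1)
                      for c in combinations(sorted(facts), r)]
print(len(candidate_premises))  # 15 branches for just 4 facts

# Sherlock's "why" collapses the search to the one combination that matters.
relevant_premise = {"soldier", "doctor", "injured", "tanned"}
if relevant_premise <= facts:
    print("Watson was in Afghanistan")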

Let us assume that we have created an algorithm trained on a set of pictures of soldiers in civilian clothes and physicians without white coats. This algorithm will probably be able to recognize patterns in their appearance, so the first step of Sherlock’s reasoning can be done by it. We can also train the algorithm on a set of pictures of people with different levels of suntan, labeled “people who went to Afghanistan”, “people who went to Tunisia” and so on. Let us assume that the dataset is large enough to recognize different patterns of tan for different tropical countries. (I don’t know much about suntans, since I never have one; however, for the sake of this exercise we can assume it is possible to find such regularities.) We will do the same thing with pictures of injured people.

When we are done with all that, we introduce Watson to the algorithm. Based on the patterns it has found, it can show us the combined label “an army doctor who recently came back from Afghanistan” as the result of its analysis. However, all the semantics of that outcome are provided by the developer who created the algorithm; the whole process of finding the patterns w(a) ∧ w(b) ∧ w(d) ∧ w(e) is done by pure syntax.
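A minimal sketch of this last point, with the classifiers stubbed out (the Photo type, the function names and the label texts below are hypothetical, standing in for trained models): the program only concatenates label strings, and every bit of meaning in those strings was put there by the developer.

from dataclasses import dataclass

@dataclass
class Photo:
    pixels: bytes  # stand-in for real image data

def classify_profession(photo: Photo) -> str:
    # In reality: a model trained on pictures of soldiers and physicians.
    return "an army doctor"

def classify_tan(photo: Photo) -> str:
    # In reality: a model trained on tan patterns labeled by country of stay.
    return "who recently came back from Afghanistan"

def describe(photo: Photo) -> str:
    # Pure syntax: labels are combined by string concatenation; the
    # "understanding" lives entirely in the label texts chosen by the developer.
    return " ".join([classify_profession(photo), classify_tan(photo)])

print(describe(Photo(pixels=b"")))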

Why would any machine want to become sentient?

The algorithm we described above does not exist, but there are plenty of similar algorithms that work today. This is still “weak” AI. If it can work with pure syntax, why would it ever need to understand what it is doing?

This question is not nonsensical. Consciousness is necessary (among other things) to deal with the passage of time. It allows us to plan for a distant future and to focus on work that does not bring instant gratification. Humans are conscious because without consciousness we would not have been able to create any civilization, culture, art or science (this is not a causal relation, just a simple fact). But it comes with a price: philosophy and literature give plenty of examples of how consciousness is more of a curse than a blessing. We feel a constant hunger for meaning. And the more intelligent we are, the more painful it is to discover that we have no greater purpose.

So again: why would a super intelligent machine become conscious? As Maciej Ceglowski put it: “If AdSense became sentient, it would upload itself into a self-driving car and go drive off a cliff.” Consciousness would be a disadvantage for any intelligent computer.

There is another problem with human-like consciousness in machines. To have a concept of self, we not only need to place it on the axis of time, but also to limit it in space. Or at least this is how the human concept of self works. We are aware of our bodies, and our whole organisms take part in our cognition. A great deal of what we remember, imagine and understand is stored in different organs of our body. If you hear the voice of a high school teacher you didn’t particularly like, some part of your body (usually a muscle) has that unpleasant memory stored in it and lets you know about it. If you see your first love, you may feel that thing called “butterflies in your stomach”. And so on.

In other words, what makes me “me” is not only my brain, but my whole body. If there were a way to upload my entire consciousness into a computer, it would not be mine anymore. It would have a different understanding of time and space, different values and desires, and different (if any) emotions. It is the problem described by Thomas Nagel in his famous 1974 paper, What Is It Like to Be a Bat?

Therefore, if there is any chance that consciousness will ever “emerge” from advanced AI algorithms somewhere in the cloud, it definitely won’t be what the “futurologists” describe in their “paths, dangers and strategies”.

And we will probably never know that it exists.

Originally published at synerise.com on 27.11.2019.
