Philosophy and Artificial Intelligence #1 — Consciousness

Olgierd Sroczynski
Apr 8, 2021

What has philosophy to do with AI?

The main question behind the theory of Artificial Intelligence is a philosophical one: what does it mean to be human? There are many other, more specific questions, but all of them are linked to this fundamental one. How do we emulate human learning processes in a machine? Can machines become conscious? What does it mean to be “conscious”? Should we let machines make moral decisions? Does free will exist? What is the relation between human knowledge and the real world?

We don’t have certain (“scientific”) answers to all of these questions, but without asking them we can’t say anything about how machines should be used, which is the main concern of AI ethics (and of other fields related to it). Let’s start with the most fundamental question.

So, how do you know that you are conscious?

Consciousness is a mysterious phenomenon, and yet it is key to what makes us human. Our experience of consciousness is literally the only thing we’re dealing with (that is, of course, when we are conscious; when we are not, we are not dealing with anything), so we don’t really have proof that anything else is real. As René Descartes put it,

How do I know that [God] has not brought it about that there is no earth, no sky, no extended thing, no shape, no size, no place, while at the same time ensuring that all these things appear to me to exist just as they do now?

However, to keep this short and avoid turning this post into a 400-page essay, we have to assume that:

1) the world exists,

2) we exist,

3) we all share this mysterious quality of being conscious, which gives us a human-made image of the world.

(Otherwise, writing anything about this matter would be as pointless as Jerry’s “Hungry for apples?” presentation).

Why are we conscious?

George Lakoff and Mark Johnson open their second most important book, Philosophy in the Flesh, with the statement that the mind is inherently embodied and that thought is mostly unconscious. The role of the unconscious in our everyday life has been present in philosophical debates since Plato, and everyone who has tried to remember their phone’s PIN or a door code after entering it manually a hundred times knows that the body has a memory of its own. This means that consciousness is not necessary for us to survive; other animals survive without it. But we need it for something.

From a biological perspective, consciousness is a tool that evolved to let us deal with the passage of time. It is consciousness that maintains the feeling of continuity between the past, the present and the future. It allows us to postpone gratification and therefore to focus on time-consuming activities such as art, science and technology. Consciousness creates meaning: we observe events and objects, connect them in a certain way, creating order and sense, and we make decisions based on that.

The question of whether consciousness is “just a trick” that helped us move to the next level of evolution, or whether there is more to it (e.g. the existence of the soul), is a metaphysical one, which means it lies beyond the scientific method (which, by the way, is itself a product of our consciousness). However, what we can say with certainty is that our biology (our central nervous system, our whole bodies, our evolution) is crucial to the structure of our consciousness.

Thinking machinery

There is therefore a fundamental error in the computational theory of mind. The brain may be a “computer made of meat”, but the brain is not the mind, and a computer will always lack the whole biological background of the human brain.

1. At the moment we are not sure what role particular connections in the nervous system (billions and billions of cells) play in processing complex intellectual tasks, and only that knowledge would make it possible to recreate the human mind in a machine.

2. This means that we can’t really estimate what “computing power” is necessary to create human-level Artificial Intelligence. It is not a question of “how much” but of “how”. This is why computers far more powerful than the human brain in raw computing power still seem unconscious and are still unable to “be human”.

3. But what if they are conscious? What if Google really is watching what you’re doing on the internet and silently judging you? Here’s the problem with that: even if it is “conscious”, or becomes conscious in the future, it would not have human consciousness, simply because it would not run on human “hardware”. In other words, it could be “superintelligent”, but it would not be human, and we would probably never know that there is any kind of consciousness in it.

Understanding the phenomenon of consciousness is crucial for questions in AI ethics and the theory of AI. Once a week I will publish a short blog post (like this one) discussing some basic questions in the philosophy and ethics of AI. We will start with the philosophical concepts and then discuss them in the context of AI. In the next few weeks I will talk about:

  • Meaning and sense
  • Free will and personal responsibility
  • The truth, the biases and the data
  • Equality, diversity and justice

I will also try to popularize some interesting papers and books regarding these matters. If you want to recommend some other topic, a book or a paper to discuss, don’t hesitate to let me know! Hope you will find this series helpful.
