Artificial consciousness – how can it be at all possible?

Lately I’ve been thinking a lot about artificial intelligence, after reading an awesome two-part article (parts 1 and 2) on the subject, written about a month ago by Tim Urban on his blog-style website “Wait But Why”. If you haven’t read the article, read it now. It’s a lengthy read, but it’s COMPLETELY worth it.

Among the many things that interested me in that article was the idea of artificial consciousness. It’s exactly what its name suggests – artificial machines being conscious in the same way humans are.

So a few days ago, I was thinking about artificial consciousness, and realized that I didn’t understand how in the hell that can be a thing at all. I was even more confused when I noticed that many fictional robots, for example Wall-E or the robots in Futurama, are portrayed as conscious and self-aware. What’s that all about? Was I missing something about artificial consciousness that the creators of Wall-E or Futurama understood?

In this blog post I will attempt to explain why the idea of artificial consciousness is hard to understand, and how it’s possible for artificial consciousness to actually exist.

What is consciousness?

Before we can understand what artificial consciousness really is, let’s first try to understand what consciousness itself is.

Consciousness is most likely a sort of extension of intelligence: having a sense of self and experiencing the things going on around you. If someone or something is conscious, that probably means its thoughts are freely changing, and that it thinks in the first place. Consciousness is associated with being alive, awake, and aware of your surroundings, in contrast to dreamless sleep, a coma, or death, where you are not conscious.

And what things are themselves conscious? I know for a fact that I am, and it seems as though all other humans are as well. Additionally, at the very least the most intelligent living beings (mammals, for instance) seem to be conscious too. On the other hand, things that are not living (not part of the hierarchy of life forms), for example a car or a cell phone, are not considered conscious, and it’s pretty silly to think that they would be. So how could a machine people invented, even an extremely sophisticated robot, be at all conscious? To answer that question, let’s look at how conscious beings are made.

Making conscious beings

Let’s assume that all humans as we know them are conscious. If humans are conscious, then how are they made? By giving birth of course. The process of giving birth makes a new human being that is conscious just like all other humans.

But how else can you make a human? One thing mentioned in Tim Urban’s artificial intelligence article is the ability of nanotechnology to manipulate individual atoms. If humans conquer nanotechnology (at the current rate of technological advance, this is overwhelmingly likely), then it won’t be long before we can build literally anything atom by atom.

So let’s say a random human being (we’ll call him Bob) has his entire composition of atoms sampled and then replicated by nanobots that can build things out of atoms (we’ll call this copy of the human being Joe). Before we see what would happen there, let’s discuss how we could actually do that.

Assembling a human being

The human body is made of about seven octillion (7×10^27) atoms – let’s assume Bob is made of exactly that many. So if a single nanobot assembled a human being at a rate of one atom per second, it would take about 220 quintillion (2.2×10^20) years to make Joe! That’s not exactly a practical timescale for cloning a human being.

So let’s say a nanobot could make Joe not at one atom per second, but at a quintillion (a billion billion) atoms per second! A quintillion atoms is only about the size of a grain of salt, which mostly gives you an idea of how small an atom is. Even at that insane rate, making Joe would still take about 220 years! That’s still way too long.
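To make these numbers concrete, here’s a quick back-of-the-envelope calculation in Python (a minimal sketch using only the figures above; the 365-day year is a simplification):

```python
# Rough time-to-assemble estimate for Joe, using the figures from the text.
ATOMS_IN_HUMAN = 7e27          # ~7 octillion atoms in a human body
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for rate in (1, 1e18):         # one atom/sec vs. one quintillion atoms/sec
    years = ATOMS_IN_HUMAN / rate / SECONDS_PER_YEAR
    print(f"{rate:.0e} atoms/sec -> {years:.3g} years")

# Prints roughly:
#   1e+00 atoms/sec -> 2.22e+20 years   (about 220 quintillion years)
#   1e+18 atoms/sec -> 222 years        (still a couple of centuries)
```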

So how could we make Joe in a reasonable amount of time? Imagine a nanobot that is programmed to replicate itself, and whose clones keep replicating themselves until we reach a certain number of robots. According to Tim Urban’s article, scientists estimate that a nanobot would take about 100 seconds to replicate. With that, the number of nanobots would double every 100 seconds.

Let’s say we wanted enough nanobots to make Joe within 24 hours, with each nanobot placing 1,000 atoms per second (nothing unreasonable). For that, we would need about 81 quintillion nanobots. How many doublings would that take?! Only about 67, which works out to less than two hours of replication! So making this many nanobots, despite the daunting size of a quintillion, is nothing unrealistic at all. All we’d need to do is program the nanobots to replicate themselves and stop after exactly 67 doublings, no more, no less.
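Here’s that arithmetic spelled out (again just a sketch, using only the numbers stated above):

```python
import math

ATOMS_IN_HUMAN = 7e27
BUILD_RATE = 1_000           # atoms per second per nanobot (the rate assumed above)
BUILD_WINDOW = 24 * 3600     # we want Joe finished within 24 hours
DOUBLING_TIME = 100          # seconds per self-replication cycle

# How many nanobots must work in parallel to finish within 24 hours?
bots_needed = ATOMS_IN_HUMAN / (BUILD_RATE * BUILD_WINDOW)
print(f"nanobots needed: {bots_needed:.2e}")     # ~8.10e+19 (81 quintillion)

# Starting from a single nanobot, how many doublings until we have that many?
doublings = math.ceil(math.log2(bots_needed))
minutes = doublings * DOUBLING_TIME / 60
print(f"doublings: {doublings}, taking about {minutes:.0f} minutes")   # 67, ~112 min
```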

Now it’s very important that the nanobot replication program works correctly – otherwise, a bug could cause the nanobots to replicate forever and destroy Earth within a few hours. Yes, you read that correctly – it would take only about 167 doublings (under five hours) to get as many nanobots as there are atoms making up Earth (estimated at 10^50). And that’s not even the worst part – if the nanobots kept doubling every 100 seconds, in about 7.4 hours there would be as many nanobots as there are atoms in the observable universe (estimated at 10^80)!
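You can verify those doomsday numbers with the same kind of sketch (the atom counts for Earth and the observable universe are the rough estimates mentioned above):

```python
import math

DOUBLING_TIME = 100                 # seconds per replication cycle
targets = {
    "Earth": 1e50,                  # rough estimate of atoms making up Earth
    "observable universe": 1e80,    # rough estimate of atoms in the observable universe
}

for name, atoms in targets.items():
    doublings = math.ceil(math.log2(atoms))
    hours = doublings * DOUBLING_TIME / 3600
    print(f"{name}: {doublings} doublings, about {hours:.1f} hours")

# Prints roughly:
#   Earth: 167 doublings, about 4.6 hours
#   observable universe: 266 doublings, about 7.4 hours
```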

So it’s extremely important that the nanobot replication program works right, and if it does, then nanobots making Joe wouldn’t be hard at all!

Is Joe conscious?

So let’s say some science company successfully manufactured Joe with all those quintillions of nanobots. We’ll assume the manufacturing worked perfectly, successfully making Joe as a replica of Bob. We know that Bob is a conscious being, so the question now is: is Joe a conscious being?

The rational answer to that question would be “yes, of course Joe is conscious”, but my gut is a bit skeptical on the matter. It doesn’t quite feel right that those nanobots would start with a bunch of atoms and end up with a living, breathing, thinking, conscious human. Now why is that? Maybe it’s because it doesn’t quite feel right for a conscious being to be made in any way other than ordinary reproduction. And that brings us back to the question: what is consciousness?

What is consciousness?

We’ve already come up with something of a definition of what consciousness is. But how do we determine whether something is conscious? Time to head over to Wikipedia.

According to Wikipedia’s article on artificial consciousness, scientists hypothesize that consciousness arises from the interoperation of several parts of the brain, called the neural correlates of consciousness. These can be thought of as the minimum requirements for a being to be conscious. It is not yet known which parts of the brain are the neural correlates of consciousness, but this brings us to quite an interesting idea: perhaps chemical composition is what really determines whether or not a being is conscious.

So when the nanobots build Joe, it seems sensible to say that at some point, Joe became conscious … but at which point did he?

Maybe consciousness is not a black-and-white concept, where things are either conscious or not conscious. Instead there might be a “consciousness staircase”, which Tim Urban discusses in his artificial intelligence article. The consciousness staircase would have a human higher up than a dog, who would be higher up than a chicken, who would be higher up than a fish, who would be higher up than an ant. The higher up something is on the consciousness staircase, the more conscious it is.

But what does “more conscious” really mean? It most likely relates closely to intelligence. Tim Urban gave an example:

A chimpanzee can probably understand what a skyscraper is, but it could never understand that humans built it. For all it knows, anything as big as a skyscraper is simply part of nature. And that gap in understanding is only the difference between a chimpanzee and a human, who aren’t very far apart on the staircase. So imagine how hard it would be to come anywhere near understanding the thoughts of something even a little above us on the consciousness staircase.

So maybe the staircase idea is what consciousness really is. With all that in mind, we can now answer our ultimate question.

Can machines be conscious?

So where would a computer, or anything else humans have made, fall on the consciousness staircase? That’s a difficult question to answer. I personally don’t think a computer could fall anywhere on the consciousness staircase unless it has some clear form of artificial consciousness – which would most likely mean the computer being able to emulate the neural correlates of consciousness, and therefore perhaps actually be conscious!

But keep in mind that computers’ artificial intelligence is a very different sort of intelligence from human intelligence. For example, consider how today’s computers are able to beat even the world’s best players in a game of chess, yet they have great trouble reading the CAPTCHA codes that aren’t much of a challenge for people. As you can see, computers and humans are good at very different things.

So does artificial consciousness really matter? Maybe it isn’t really relevant at all when it comes to developing artificial intelligence. With my description of artificial consciousness, it seems likely that once machines are able to improve their own intelligence (as discussed in detail in Tim Urban’s article) and surpass that of humans, artificial consciousness will just happen.

On the other hand, artificial consciousness will indeed have notable effects on how we use technology. For example, if a machine is conscious, then shutting it off is not just like turning off your computer – it’s equivalent to murder! That’s a very strange concept to humans today, and more important to those conscious machines (that phrase still sounds strange to me) than we’d probably think.

Looking at it this way, artificial consciousness is very relevant in the field of artificial intelligence. With that strange fact in mind, let’s examine how we can test a machine for consciousness.

Testing machines for consciousness

A well-known test of machine intelligence is the Turing test, named after the famed pioneering computer scientist Alan Turing. In the most common version of the test, a human judge (let’s call him Steve) talks with both another human and a machine designed to chat as though it were human. If Steve cannot tell which is the human and which is the machine, the machine passes the test. So far, no machine has convincingly passed the test, though in June 2014 one competition claimed that the test had been passed.

On June 7, 2014 (the 60th anniversary of Alan Turing’s death), a Turing test competition was held in London. The machine the judges talked with was a chatbot named Eugene Goostman, developed by three programmers from Russia and Ukraine and presented as a 13-year-old boy from Ukraine who is still learning English. Ten of the thirty judges who talked to Eugene believed he was human.

The organizers claimed to have passed the Turing test because, in one paper, Alan Turing predicted that machines would eventually converse well enough that an average interrogator would have no more than a 70% chance of correctly identifying the machine after a few minutes of questioning. From there they interpreted passing the Turing test as fooling at least 30% of the judges, and Eugene barely cleared that bar. That’s why they claimed to have passed the Turing test.
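The margin really was thin. Here’s the arithmetic, using the figures reported above:

```python
# Eugene Goostman's result versus the 30% threshold the organizers used
judges_total = 30
judges_fooled = 10

fooled_fraction = judges_fooled / judges_total
print(f"fooled {fooled_fraction:.1%} of judges")          # 33.3%
print("clears 30% threshold:", fooled_fraction >= 0.30)   # True, but only just
```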

There was quite some controversy as to whether Eugene really passed the test. The chatbot was designed to distract people from its non-humanlike responses: its persona made people assume that the weird responses were not imperfections of the program, but simply the result of Eugene being a “13-year-old boy from Ukraine”. Worse yet, even with that excuse, only a third of the judges believed Eugene was human. This means the competition was really somewhat rigged.

So, although that was an interesting competition (clearly with a lot of effort from its programmers), I don’t believe it really counts as passing the Turing test. Humanity has yet to make a machine that truly passes it.

In any case, the Turing test is a useful test for determining how artificially intelligent a machine is. Perhaps it can be used to detect artificial consciousness as well, or at least provide supporting evidence for artificial consciousness.

Really conclusively testing a machine for consciousness, however, cannot really be done until we figure out the mystery of what consciousness really is. In my opinion the idea of neural correlates of consciousness makes the most sense, though even that is pretty uncertain. Finding what those neural correlates really are may lead to a conclusive answer, but it’s hard to know for sure.

_______________________________

So is artificial consciousness going to be a real thing in our world, or is it just some nonsense? As strange as it may seem, at the rate technology is advancing these days it’s overwhelmingly likely that machines will be artificially conscious before too long. Artificial consciousness is a strange concept, but one that makes much more sense once you really dig into it.

I may do more posts like this in the future, but I’m not sure if they’ll be on any sort of regular schedule. In any case, I hope you enjoyed this post. I myself learned a lot when writing it.
