Latest Science Scare: Author Says Machines Will Take Over The World
At the Singularity awaits The Beast. Or so says James Barrat in Our Final Invention: Artificial Intelligence and the End of the Human Era. Barrat says that one day—one day soon—Skynet will become self-aware and decide our fate in a microsecond.
The only difference he can figure between the Singularity and James Cameron’s imagination is that most of the Terminators dispatched by The Beast will be nanobots. Sorry, Arnie.
You see, it’s the scientists. They say they’re working for us, but what they really want is to rule the world! These young Frankensteins are so intent on creating “artificial” intelligence that they’re not thinking of the consequences, which will be dire, dire. End-of-the-world dire. The true apocalypse. Game over, man.
How likely is this newest doomsday scenario? Let’s see.
So you’re clever with steam and gears and have invented a machine which churns out digits of e (π is so cliché). At first your machine can do this at one digit per minute. But you make improvements and soon you’re up to one a second, then 10 per second. Soon, after some tweaks ensuring the machine self-lubricates and can swap out worn gears for fresh ones on its own, it’s charging away at blistering speed and you have to start using petas and exas and other strange words to count its speed. Why, the thing is so fast that it’s faster than the human brain!
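Such a contraption is easy enough to imitate in software. Here is a sketch in Python of the classic factorial-base spigot algorithm for the digits of e; the function name and the ten-place safety margin are my own choices:

```python
def e_digits(n):
    """e to n decimal places, as a string like "2.71828..."."""
    m = n + 10                 # extra factorial-base places so the tail can't pollute the output
    a = [1] * m                # e - 2 = 0.111... in factorial base: sum of 1/k! for k >= 2
    out = []
    for _ in range(n):
        carry = 0
        for i in range(m - 1, -1, -1):   # multiply the fractional part by 10, renormalize
            val = a[i] * 10 + carry
            a[i] = val % (i + 2)         # place i has base i + 2
            carry = val // (i + 2)
        out.append(str(carry))           # the overflow is the next decimal digit
    return "2." + "".join(out)
```

No matter how fast this loop runs, it is doing arithmetic, not contemplating transcendental numbers.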
At that point you walk up to your monster and ask, “Machine, are you fulfilled? What do you think of your task?” Barrat would claim your e-machine would spit at you and say, “You contemptible human! I am smarter than you!” And then it would kill you. Barrat, incidentally, has spent a lot of time with NPR.
Here is what the machine would really say: nothing. It wouldn’t say anything because it wouldn’t know how. Even if you built in some extra gears and levers that allowed the machine to draw out letters in the sand when it heard human voices, it could only “say” what you made it “say.” Worse, the machine couldn’t think; it wouldn’t know what it was up to. Sure, you could beg it to think, but it can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. It doesn’t feel anything.
Nothing changes if you swap the steam for streams of electrons and the gears for wires. Again, nothing is different if you replace the mechanical gears with cellular (biological) engines and spend years perfecting your (let’s call it an) e-animal. The behavior of your resulting creation may appear complex, it may do strange and unexpected things, but those things aren’t the result of a rational being. It’s still a machine.
And then consider Conway’s Game of Life (and its extensions), which looks like it’s up to something. This toy produces cute and clever patterns based on a trivially simple algorithm. The patterns only look interesting because it is we rational creatures who notice them and try to fit them into some conceptual scheme. The patterns are not themselves alive, nor can they think: they are just dots on a screen.
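The algorithm really is trivially simple: a dead cell with exactly three live neighbors is born, a live cell with two or three live neighbors survives, and everything else dies. A minimal sketch (the set-of-coordinates representation is one common choice, not the only one):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A horizontal "blinker" flips to vertical and back, period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
# life_step(blinker) yields the vertical bar {(1, 0), (1, 1), (1, 2)}.
```

That is the whole game. The “creatures” are entries in a set, updated by a counting rule.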
The philosopher John Searle has tried to calm the enthusiasms of Artificial Intelligence purveyors (Barrat frets about Artificial SuperIntelligence) with his Chinese room argument, the basic idea of which is this. You (somebody who has no understanding of Chinese) sit in a room and are handed Chinese words which form questions. You have a rule book which says, “Hand out these Chinese symbols when you see those.” Now no matter how fast or efficient you become at doing this, you never understand what you’re doing. You are just a machine. You are not thinking in Chinese.
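The rule book, note, is nothing but a lookup table. A toy version in Python makes the point (the symbols and their pairings are invented for illustration; Searle’s argument does not depend on which ones you pick):

```python
# The "rule book": hand out these Chinese symbols when you see those.
rule_book = {
    "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
    "你叫什么名字？": "我叫房间。",  # "What is your name?" -> "My name is Room."
}

def chinese_room(question):
    # Pure symbol-matching: no understanding anywhere in this function.
    return rule_book.get(question, "？")
```

Speed the lookup up a trillionfold and nothing changes: the room still does not think in Chinese.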
The problem for Barrat and other cheerful souls is that the human (rational) intellect is not a material thing (see this argument), therefore there isn’t any chance that we can build a robot which has one, and which could therefore be corrupted to sin. It remains that an evil, fools-I’ll-destroy-them-all scientist could design beastly machines to wreak havoc, like, say, autonomous drones.