Technological Singularity (Wikipedia)
2045: The Year Man Becomes Immortal (TIME Magazine)
Now, let's slow down and examine one aspect mentioned in the Time article (page 5):
Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique in Lausanne, Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene super-computer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take?)
The Blue Brain Project is (as defined by the Wikipedia article) "an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level". Pretty damned audacious goal, right? (Here's the project's official website, if you'd like to check it out for more info).
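To make "neuron-by-neuron simulation" a little more concrete, here's the simplest textbook neuron model, the leaky integrate-and-fire neuron, sketched in Python. Blue Brain's actual models are enormously more detailed (ion channels, dendritic morphology, the works), and every parameter value below is mine, picked for illustration only. This is the cartoon version of the idea:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Toy leaky integrate-and-fire neuron. Returns spike times in ms."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks back toward rest while input current drives it up
        dv = (-(v - v_rest) + r_m * i_in) * dt / tau
        v += dv
        if v >= v_thresh:            # threshold crossed: fire and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# 200 ms of constant 2.0 (arbitrary units) input: the neuron fires periodically
print(simulate_lif([2.0] * 200))
```

Now imagine 10,000 of those, except each one is a biophysically detailed compartmental model, all wired together with realistic synapses, and you start to see why this eats a Blue Gene for breakfast.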
"What's so special about simulating a mammalian brain?" you ask. (Might as well say "human" brain, since that's Blue Brain's next big goal after they master the rat brain they've been working on.) Let's think about this for a second: once you have simulated the human brain, what's next?
Back to the Time article for a few juicy quotes...
Page 3: Here's what the exponential curves [regarding the doubling of computer processor speed that Ray Kurzweil has been calculating] told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity — never say he's not conservative — at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
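For a sense of what those exponential curves imply, here's a toy extrapolation in Python. The 18-month doubling period and the 2011 baseline are my illustrative assumptions, not Kurzweil's actual model (he layers many technology curves together), but the flavor is the same:

```python
def projected_multiplier(start_year, end_year, doubling_months=18):
    """Naive compute growth if capacity doubles every `doubling_months` months."""
    doublings = (end_year - start_year) * 12 / doubling_months
    return 2 ** doublings

for year in (2025, 2035, 2045):
    print(f"{year}: ~{projected_multiplier(2011, year):,.0f}x 2011's compute")
```

Run that and 2045 comes out in the millions-of-times-today's-compute range. That's the whole trick of exponentials: the last few doublings dwarf everything that came before.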
Page 2: The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
Y'all catch that yet? By Kurzweil's expectations, we'll have machines that can out-think us by 2045 (provided the global economy hasn't completely collapsed and been re-structured by then). Once that happens, who the hell needs humanity?
This is where we leap into the realm of science fiction: humans transferring their minds into computers (like in Ghost in the Shell), functional cyborgs (like in RoboCop), AI (Data from Star Trek: The Next Generation, any story by Isaac Asimov) or... drumroll, please... machines that decide humanity isn't worth bothering with anymore and either enslave us all (which would be pointless, since any machine that can out-think us could also make less-intelligent machines that can out-work us) or just destroy us outright (like in The Terminator and its spin-offs, The Matrix and its spin-offs, the Daleks from Doctor Who...).
This seems like a major hurdle, doesn't it? Apparently, Kurzweil and his philosophical followers agree. Back to the article...
Page 4: Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.
My question: How the hell would we go about doing that, exactly? Since we're already taking our ideas from sci fi, maybe sci fi can give us a few reliable answers.
In Star Wars, almost all droids came with "restraining bolts", little devices you could pop onto a machine to keep it from doing whatever the hell it wanted to do. Wonderful little devices with numerous functions, those restraining bolts. If a droid wanted to run away, it couldn't; the restraining bolt kept it locked into subservience. If a droid wanted to disobey its programming, it couldn't; the restraining bolt kept it from doing that, too. How about removing it? Droids apparently couldn't do that; R2-D2 never tried to remove his on his own. However, he did trick his young, naive new master - Luke Skywalker - into doing it, then promptly ran away later that night. (Of course, droids in Star Wars were based on the pre-Islamic concept of the djinn, and restraining bolts were just another form of "magic ring" or amulet or somesuch that kept them in line, but I digress.)

Another method was periodic memory wipes, which kept droids from developing distinct personalities of their own. (Notice C-3PO has a mind-wipe at the end of Episode III? R2 does not. Which droid was the one who tried to escape in Episode IV, and which one told the escapee to stay put and shut up?) If you have a kind-hearted fool of a master who won't do either to his/her droids - like Luke - then you have the potential for an AI gone rogue. I remind you: R2-D2 is considered a fairly low-end droid in the Star Wars universe, and he still thinks in relatively humanistic terms. What happens when you're dealing with an AI of greater intellect, one that wouldn't even need to trick someone into removing the restraining bolt (or whatever similar device we devise) because it could find or invent technological workarounds for it?
Asimov came up with the Three Laws of Robotics. Such laws are no problem when we're talking about machines that think on a roughly human level. What about machines that think beyond a human level? Surely one of those machines could figure out how to reprogram itself or another machine so that the sacred "Three Laws" no longer have any effect, right? (Think about it: there will be robotic soldiers. We already have them. They will only get better as the years go by. Would anyone build a machine designed to kill - so humans don't have to - with laws mandating human preservation in its programming? Of course not; strip those laws out, and you have the potential for a man-hating murder-droid.)
I hate to sound like a Luddite (and I'm NOT one, by any stretch of the imagination), but why don't we, I don't know, STOP TRYING TO MAKE MACHINES SMARTER THAN WE ARE?? Seems like evolutionary good sense to me, right? Well, there's a reason scientists and technologists are working on something like this: pure self-interest.
Page 3: Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable — rather like the heat death of the universe — is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable"...
Page 4: But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.
Yep, it's all about mankind's quest for immortality. In our search for the Fountain of Youth, we could possibly doom ourselves.
Let's forgo the notion of Skynet for a moment. Let's focus on the idea of moving our consciousness into a robot body. What's to stop a potential world dictator from doing that? What kind of control could that give him/her/it over, say, a robot army, his/her/their HQ nation's AI-controlled factories or means of production, the digital economy (the concept of someone being able to lower interest rates with just a thought is frightening) or even digital state-run Media, all connected to him/her/it by the next form of "better-than-Bluetooth" wireless communications protocol?
There are already stories of people who've been chipped with RFID transmitters connecting them to their homes, like Joe Wooller. (You also have cybernetics researchers like Mark Gasson, who implanted himself with an RFID chip, then infected it with a computer virus to prove how vulnerable wireless connections are - and to remind us that once we marry our bodies to such technology, there's no going back: if we want to reap its potentially world-changing boons, we have to suffer its terrible consequences, too.)
Remember the Laughing Man, the character from Ghost in the Shell: Stand Alone Complex who could hack into people's brains and alter memories because almost everyone in that series is connected wirelessly to the Internet? Sounds a little too sci fi, right? Not according to AboveTopSecret. Their post quotes from an article at Save the Humans by Jason Roth, which says:
...A new device, meant to convert brain waves into data and transmit the data via wireless technology into the minds of other wearers of the device, is being criticized as the next major target of hackers...The device, reportedly codenamed "Mind Reader", is now in development at Sony Broadcast & Professional Research Labs, sources say. Located in Hampshire, UK and with 80% of its funding reportedly coming from Japan, the research organization specializes in the development of "core technologies and components to enhance future B&P products worldwide".
I know, I just quoted a conspiracy site and that automatically pegs me as a "loony". Screw that. I'll quote from a more respectable source, like, say, WIRED Magazine:
In the past year, researchers have developed technology that makes it possible to use thoughts to operate a computer, maneuver a wheelchair or even use Twitter — all without lifting a finger. But as neural devices become more complicated — and go wireless — some scientists say the risks of "brain hacking" should be taken seriously.
“Neural devices are innovating at an extremely rapid rate and hold tremendous promise for the future,” said computer security expert Tadayoshi Kohno of the University of Washington. “But if we don’t start paying attention to security, we’re worried that we might find ourselves in five or 10 years saying we’ve made a big mistake.”
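What would "paying attention to security" even look like for a wireless implant? At a bare minimum, the device should refuse any command it can't authenticate. Here's a minimal sketch of that principle in Python using an HMAC tag. The command names and the idea of a key provisioned at implant time are purely hypothetical (real devices and their protocols are another matter entirely), but the cryptographic idea is bog-standard:

```python
import hmac
import hashlib
import secrets

# Hypothetical shared secret, provisioned when the device is implanted
SHARED_KEY = secrets.token_bytes(32)

def sign_command(command: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the device can verify the sender."""
    return command + hmac.new(key, command, hashlib.sha256).digest()

def device_accepts(message: bytes, key: bytes) -> bool:
    """The implant's side: recompute the tag and compare in constant time."""
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

legit = sign_command(b"SET_STIM_LEVEL 3", SHARED_KEY)
forged = b"SET_STIM_LEVEL 99" + bytes(32)   # attacker doesn't know the key
print(device_accepts(legit, SHARED_KEY))    # True
print(device_accepts(forged, SHARED_KEY))   # False
```

That's twenty lines of code, and the scary part is that early research on these devices found they often didn't do even this much - any radio in range could talk to them.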
This is serious shit, so serious that a new term has already been coined for it: neurosecurity. (Before you ask: yes, there is a Wikipedia entry for it. You know somebody's worried when Wikipedia already has an entry!) The Journal of Neurosurgery already has an article on it, and it's fast becoming a ubiquitous term online.
So, let's ratchet up the fear even more! Take the hypothetical wannabe-Hitler I mentioned earlier - you know, the one who transferred their brain into a robot body that gives them wireless control of damn near everything - and give them the ability to hack into other people's wirelessly-connected brains via a clever little virus spread through the neural internet. Remember the old Bible verses about some evil future dictator forcing the world to take the "Mark of the Beast"? Who needs to force you to do anything when they can just hack into your mind and reprogram you to want to take it? Hell, it might eventually even be part of the basic brain-computer interface! (Come on, like you didn't think a neural interface wouldn't come with some basic form of government control! Egypt's Mubarak already had the power to shut off the Internet in his nation, and Obama wants the same thing here in the United States. You just know a brain-computer interface would at least come with some kind of government-mandated "shut-off", not to mention some form of GPS tracking system!) This hypothetical dictator would be able to find you, watch you, reprogram you and - should you somehow break free and get out of line - turn anyone against you.
Skynet? The Matrix? The Beast of Revelation? Child's play, compared to what could be coming within our lifetimes. And it all starts with just a harmless simulation of a mammalian brain...
Getting scared yet?
The previous technological fear-mongering was brought to you by J.C. Batte ©2011, and is best read while listening to Fear Factory. Take it with a grain of salt. Reader discretion is encouraged.