Monday, May 5, 2014

Scientists Worry about Artificial Intelligence

Four leading scientists -- Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek -- have co-authored an op-ed asking us to think harder about artificial intelligence:
Artificial-intelligence (AI) research is now progressing rapidly. . . .

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks. . . . So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues.
Every once in a while the thought crosses my mind: what will we do when our machines are smarter than we are? But then I hit a wall, because I have no idea what that would mean, beyond a general vision of reclusive billionaires and an increasingly useless and marginalized populace. And I don't even know if that notion makes sense or is just my anxiety centers calling up scenes from Neuromancer and Blade Runner. Part of me thinks the worrying is pointless, because we have absolutely no idea what such a world will be like, so we might as well wait and see what happens. Suppose we decided that strong AI was a huge threat and ought to be banned; could we even do it? Or would the research continue in various underground ways?

One of the bits of science fiction that I have long thought most plausible is the "Butlerian Jihad" from Dune, which leads to the banning of all thinking machines. Because what will we do if our technology renders us useless? There isn't enough room on the beach for all of us to lie there at once.
