Thursday, October 19, 2017

Artificial Intelligence is Beating Us

The "constraints of human knowledge" are falling:

Google’s latest AI efforts are pushing past the limits of the company’s human developers. Its algorithms are teaching themselves how to write code and how to play Go, the ancient board game that is easy to learn yet deeply intricate.
This has been quite the week for the company. On Monday, researchers announced that Google’s AutoML project had successfully taught itself to build machine learning software on its own. While it is still limited to fairly basic tasks, the code AutoML produced was, in some cases, better than code written by its human counterparts: on a task that involved identifying objects in a picture, the AI-created model reached a 43 percent success rate, while the human-developed code scored only 39 percent.
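To give a flavor of what "software that designs software" means in miniature, here is a rough sketch of the underlying idea: a loop that proposes candidate model configurations, scores each one, and keeps the best. This is not Google’s actual AutoML system, which uses a learned controller and real training runs; the search space, scoring function, and every name below are invented purely for illustration.

```python
# Toy sketch of automated model design: propose configurations, score them, keep the best.
# NOT Google's AutoML; the search space and scoring here are made up for illustration.
import random

SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "layer_width": [16, 32, 64, 128],
    "learning_rate": [1e-1, 1e-2, 1e-3],
}

def sample_config():
    """Randomly propose one candidate configuration from the search space."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(config):
    """Stand-in for a real training-and-validation run.

    A genuine system would train a network with this configuration and report its
    validation accuracy; here we compute a made-up score so the loop can run.
    """
    score = 0.2
    score += 0.05 * config["num_layers"] / 8
    score += 0.10 * config["layer_width"] / 128
    score += 0.05 * (1.0 if config["learning_rate"] == 1e-2 else 0.5)
    return score + random.uniform(0.0, 0.02)  # a little noise, as real runs would have

def search(num_trials=50):
    """Keep the best-scoring configuration seen so far."""
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = sample_config()
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = search()
    print(f"best config: {config}, score: {score:.3f}")
```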
On Wednesday, in a paper published in the journal Nature, DeepMind researchers revealed another remarkable achievement. The newest version of the lab’s Go-playing algorithm, dubbed AlphaGo Zero, is not only stronger than the original AlphaGo, which defeated world champion Lee Sedol; it taught itself how to play, entirely on its own, given nothing but the basic rules of the game. (The original, by comparison, learned from a database of 100,000 human Go games.) According to Google’s researchers, AlphaGo Zero has achieved superhuman performance: it won 100–0 against its champion predecessor.
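Here is what "learning with no human data" can look like at toy scale: a minimal self-play sketch on a far simpler game (take 1, 2, or 3 stones from a pile; whoever takes the last stone wins). Two copies of the same agent play each other and update a shared value table from the outcomes. This is an ordinary tabular method, not DeepMind’s algorithm, which pairs a deep neural network with Monte Carlo tree search, and every constant and function name here is made up for illustration.

```python
# Toy self-play learner for a simple stone-taking game; not DeepMind's method.
import random
from collections import defaultdict

PILE_SIZE = 21          # starting number of stones
ACTIONS = (1, 2, 3)     # a player may remove 1, 2, or 3 stones per turn
EPSILON = 0.1           # how often the agent explores a random move
LEARNING_RATE = 0.5

# Q[(stones_left, action)] = estimated chance that the player to move wins
Q = defaultdict(lambda: 0.5)

def choose_action(stones):
    """Pick a legal move: mostly greedy on Q, occasionally random (exploration)."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

def play_one_game():
    """Play one self-play game and return the (stones, action) pairs, in order."""
    stones, history = PILE_SIZE, []
    while stones > 0:
        action = choose_action(stones)
        history.append((stones, action))
        stones -= action
    return history  # whoever made the final move took the last stone and won

def train(num_games=20000):
    for _ in range(num_games):
        history = play_one_game()
        # Walk the game backwards: the last mover won, the mover before lost,
        # and so on, because the two players strictly alternate.
        outcome = 1.0
        for state_action in reversed(history):
            Q[state_action] += LEARNING_RATE * (outcome - Q[state_action])
            outcome = 1.0 - outcome

if __name__ == "__main__":
    train()
    # With 21 stones the player to move can win by taking 1 (leaving a multiple
    # of 4), so the learned value of taking 1 stone should come out highest.
    print({a: round(Q[(PILE_SIZE, a)], 2) for a in ACTIONS})
```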
But DeepMind’s developments go beyond playing a board game exceedingly well. A system that can master a task without human examples has important implications for AI in the near future.
“By not using human data—by not using human expertise in any fashion—we’ve actually removed the constraints of human knowledge,” AlphaGo Zero’s lead researcher, David Silver, said at a press conference.