By beating a top professional Go player twice in a five-game match, Google's AlphaGo program is on track to achieve a major milestone in artificial intelligence (AI). But does that mean computers are now smart enough to beat us at everything? Experts say that is clearly not the case.
South Korean Lee Se-dol is considered the most successful Go player of the past decade. But after just two games of the machine-vs-human series, AlphaGo has already gained the upper hand -- it needs only one more win to claim victory. The odds are in the machine's favor.
AlphaGo's victory "seems very likely now," said Felix Hill, an AI researcher at the University of Cambridge's Computer Laboratory.
"Once it is clear that the computer is capable of playing at a higher level than the human, I think it's unlikely that it is actually performing at a slightly lower level, that the first victory was an outlier and that the human will win more games in the long run," he told Xinhua. "It is much more likely that the average performance of the computer is higher than the level of the human, and therefore that it will continue to win almost all of the time."
Another AI expert also believes that computers can learn to play Go, the ancient Chinese board game, at "an amazing level."
"We can compare AlphaGo to Deep Blue, the computer program that beat Garry Kasparov in chess in the 90s: It is a computer program that can solve one particular task very well -- way better than humans," said Dr Marc Deisenroth, a lecturer in statistical machine learning in the Department of Computing at Imperial College London.
In Go, two players take turns placing black and white stones on a grid, trying to surround more territory than their opponent. The rules sound simple, but the number of legal board positions exceeds the number of atoms in the observable universe.
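The gap between Go and chess can be made concrete with a back-of-the-envelope calculation. The numbers below are commonly cited rough estimates of each game's branching factor and typical game length, not exact counts; the naive search tree grows like b ** d.

```python
# Rough game-tree size estimate: branching factor b to the power of
# typical game length d. Both figures are widely quoted approximations.
CHESS_B, CHESS_D = 35, 80     # rough estimates for chess
GO_B, GO_D = 250, 150         # rough estimates for 19x19 Go

chess_tree = CHESS_B ** CHESS_D
go_tree = GO_B ** GO_D

# len(str(n)) - 1 gives the order of magnitude of a positive integer.
print(f"chess: ~10^{len(str(chess_tree)) - 1} positions in the naive tree")
print(f"go:    ~10^{len(str(go_tree)) - 1} positions in the naive tree")
```

Even by this crude measure, Go's search tree is hundreds of orders of magnitude larger than chess's, which is why brute-force search alone was never going to crack it.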
Pitting computers against humans in various games, including Go and chess, has become a popular way to test what scientists and engineers have achieved in AI.
And computers have done well so far. In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov. The company's Watson later beat the best human players on the quiz show Jeopardy!. In October 2015, AlphaGo beat the European Go champion, an achievement that was not expected for years. Go was described as "the only game left above chess" by Demis Hassabis, CEO of DeepMind, the Google AI arm that designed AlphaGo. Now there is a strong chance that computers will conquer Go by defeating a human world champion.
But many experts say it is still too early to worry about computers taking our jobs by outsmarting us at everything.
"No, absolutely not," said Hill. "The game of Go is massively more constrained than the real world."
"There are many billions of combinations of things that can happen (in Go), but the possibilities are still finite, discrete and can be easily described. In real life, whether computing how things move through the air or trying to interpret or produce language, there are infinitely many possible actions at any one time, and an infinite number of times such a decision must be made," noted Hill.
Go is much more like learning to multiply numbers than like mastering many real-world problems, such as understanding language, said Hill.
Hill and his colleagues have been working on a web-based machine-learning system that solves crossword puzzles, and it relies partly on deep learning. Deep learning and Monte Carlo tree search are the two key ingredients in AlphaGo, and each has many successful applications on its own. Deep learning, for example, is being applied to image recognition, text translation, audio and text processing, face recognition, robotics and more.
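For readers curious what Monte Carlo tree search actually does, here is a minimal, self-contained sketch applied to a toy take-away game rather than Go. The game, the iteration count and the exploration constant are illustrative choices for this sketch, not details of AlphaGo, but the four-phase loop (selection, expansion, simulation, backpropagation) is the standard algorithm.

```python
import math
import random

# Toy game: players alternately remove 1, 2 or 3 stones from a pile;
# whoever takes the last stone wins.
def moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

class Node:
    def __init__(self, pile, parent=None):
        self.pile = pile
        self.parent = parent
        self.children = {}   # move -> Node
        self.visits = 0
        self.wins = 0.0      # wins for the player who moved INTO this node

    def ucb(self, c=1.4):
        # Upper Confidence Bound: exploitation (first term) plus an
        # exploration bonus for rarely visited children (second term).
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(pile):
    # Random playout; returns 1 if the player to move from `pile` wins.
    turn = 0
    while pile > 0:
        pile -= random.choice(moves(pile))
        turn ^= 1
    return 1 if turn == 1 else 0

def mcts(root_pile, iterations=5000):
    root = Node(root_pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.pile > 0 and len(node.children) == len(moves(node.pile)):
            node = max(node.children.values(), key=Node.ucb)
        # 2. Expansion: add one untried move.
        if node.pile > 0:
            m = random.choice([m for m in moves(node.pile)
                               if m not in node.children])
            node.children[m] = Node(node.pile - m, parent=node)
            node = node.children[m]
        # 3. Simulation: random playout from the new position.
        result = rollout(node.pile)
        # 4. Backpropagation: flip the winner's perspective at each level.
        while node is not None:
            node.visits += 1
            result = 1 - result
            node.wins += result
            node = node.parent
    # The most-visited root move is the recommendation.
    return max(root.children, key=lambda m: root.children[m].visits)

random.seed(0)
print(mcts(5))   # perfect play from a pile of 5 is to take 1, leaving 4
```

AlphaGo's innovation was to replace the random rollouts and uniform move selection above with deep neural networks that evaluate positions and suggest promising moves, taming Go's enormous branching factor.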
AlphaGo learned to discover new strategies for itself by playing thousands of games between its neural networks and adjusting the connections using a trial-and-error process known as reinforcement learning, according to an article posted by Demis Hassabis on Google's official blog.
By learning and improving from its own match-play experience, AlphaGo is now a much stronger Go player than when it beat the European champion late last year.
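The trial-and-error idea behind that self-play training can be illustrated at toy scale. In the sketch below, two copies of the same simple value table play a take-1-2-3 counting game against each other and nudge their estimates toward the observed outcomes; the game, the constants and the tabular representation are illustrative assumptions, whereas AlphaGo updates deep neural networks rather than a lookup table.

```python
import random

N, EPSILON, ALPHA = 12, 0.2, 0.1
# Q[(pile, move)] = estimated chance that making `move` at `pile` wins.
Q = {(p, m): 0.5 for p in range(1, N + 1) for m in (1, 2, 3) if m <= p}

def choose(pile):
    if random.random() < EPSILON:                      # explore
        return random.choice([m for m in (1, 2, 3) if m <= pile])
    return max((m for m in (1, 2, 3) if m <= pile),    # exploit
               key=lambda m: Q[(pile, m)])

def play_and_learn():
    # One game of self-play: both sides share the same value table.
    pile, player = N, 0
    history = {0: [], 1: []}            # (pile, move) pairs per player
    while pile > 0:
        m = choose(pile)
        history[player].append((pile, m))
        pile -= m
        player ^= 1
    winner = player ^ 1                 # whoever took the last stone
    for p in (0, 1):
        reward = 1.0 if p == winner else 0.0
        for state_action in history[p]:
            # Nudge the estimate toward the game's outcome.
            Q[state_action] += ALPHA * (reward - Q[state_action])

random.seed(0)
for _ in range(20000):
    play_and_learn()

# Best learned move at pile 7; perfect play takes 3, leaving a multiple of 4.
print(max((1, 2, 3), key=lambda m: Q[(7, m)]))
```

Purely from wins and losses against itself, the table learns which moves tend to win, with no human games in the training data; the same principle, scaled up enormously, is how AlphaGo kept improving after its match against the European champion.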
But Deisenroth told Xinhua that we "should be careful" when interpreting the results of these human-computer competitions.
"These competitions are interesting because they are milestones toward human-like AI... The game of Go was the most recent milestone -- Go is so much more complicated than chess that it was not expected to be solved before 2025," said Deisenroth. But "none of these milestones have so far led to a truly intelligent system that we would consider similar to human intelligence and behavior," he said.
In some sense, smart computer systems already help us every day -- Apple's Siri, Google Now, Facebook's friend suggestions, Amazon's purchase recommendations, Microsoft's player rankings in video games. All rely on AI technologies at their core. But they are still far from Jarvis, the knowledgeable and talkative super-AI of the Iron Man films.
"If the success and funding of AI keeps accelerating at this rate, we may see something like 'Jarvis' already in 10 to 20 years," said Deisenroth.
As for Hill, he believes it will take much longer to reach that point. "Siri and Google Now are much harder challenges than something like Go. My intuition is that we will not get to the level of Jarvis for at least 50 years. I hope we do in my lifetime and I may be completely wrong," he said.