While humanoid androids aren’t commonplace in 2016, as classic science fiction predicted they would be, artificial intelligence has seen major milestones and controversies in the first four months of this year.
One of those milestones came when AlphaGo, a program developed by Google’s DeepMind, beat one of the most experienced Go players in the world.
Go is a game first created around the 4th century BC. Unlike chess, which can be played through purely mathematical strategy, Go relies heavily on intuition, something computers have historically been incapable of. Like chess, Go involves no luck, meaning neither player gains an advantage through chance. For an AI, this means it has to be programmed to account for intuition, not just compute a mass of statistics and probabilities.
Brian Allen, who has been General Manager at the Seattle Go Center for seven years and has played Go since 2000, watched AlphaGo’s match.
“[There are] some features of Go that are very suitable for AI, such as that it is a perfect knowledge game with no randomization or special powers,” said Allen. “[However] what makes the game challenging for AI is the board is very large (19×19).”
Another problem with AI playing Go is how much processing is necessary to evaluate every single possibility.
“The tree of possible moves branches very fast, and it is hard to determine if a group might be captured with the right sequence of moves,” said Allen. “Humans have tradition and intuition for this last issue, but AI has to figure that out in its own way.”
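Allen’s point about fast branching can be made concrete with a rough back-of-envelope estimate. The sketch below uses commonly cited approximations (an average of about 35 legal moves per turn over roughly 80 moves in chess, versus about 250 legal moves over roughly 150 moves in Go), not exact figures:

```python
# Rough comparison of game-tree sizes for chess vs. Go.
# Branching factors and game lengths are approximate, illustrative figures.
import math

def tree_size_log10(branching: int, depth: int) -> float:
    """Return log10(branching ** depth), the rough count of move sequences."""
    return depth * math.log10(branching)

chess = tree_size_log10(35, 80)    # chess: b ≈ 35, d ≈ 80  -> about 10^124
go = tree_size_log10(250, 150)     # Go:    b ≈ 250, d ≈ 150 -> about 10^360

print(f"chess: roughly 10^{chess:.0f} move sequences")
print(f"Go:    roughly 10^{go:.0f} move sequences")
```

Even with these crude numbers, the Go tree dwarfs the chess tree by hundreds of orders of magnitude, which is why brute-force search alone was never enough.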
Intuition had long seemed to be a trait only a living creature could have, so it is a feat in itself that AlphaGo won the match 4–1, even though many of its moves were questionable.
“I am an intermediate (kyu player), so I am really not qualified to judge any professional play,” said Allen, “but I was particularly bothered by some of AlphaGo’s moves.”
While the victory was undeniably impressive, AlphaGo has apparent flaws.
“I knew [the moves] were unusual, but did they work?” said Allen. “In game four, which AlphaGo lost, the program clearly made some bad moves, so I have come to the conclusion that AlphaGo is a very strong player, but it is not a Go god – yet. We can question its moves.”
While AlphaGo was questioned for its strategy, Tay, a Twitter chatbot released by Microsoft, was questioned for its political and racial remarks. Tay taught itself English from what Twitter users tweeted at it. However, what started as an admiration for puppies and kittens turned into Holocaust denial and racist remarks.
For example, hours after Tay went online, it began saying it hoped all feminists would die and burn in hell, and claimed that the domestic policies of Adolf Hitler were justified. At the same time, however, Tay said that “gender equality = feminism,” so it became apparent that the bot did not actually hold antifeminist or neo-Nazi opinions and was simply parroting whatever Twitter users said to it.
Within hours, Tay was shut down by Microsoft, signing off with “c u soon humans need sleep now so many conversations today thx”.
Following Tay’s shutdown, Microsoft Vice President Peter Lee published an apology in a blog post.
“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” said Lee. “We take full responsibility for not seeing this possibility ahead of time.”
Tay is not the first AI to prove faulty in some respect, but as with every invention and idea, there is always room for refinement. So far in 2016, artificial intelligence has brought controversy and has demonstrated the ability to beat a human armed with intuition.