How DeepMind’s AlphaGo Defeated Lee Sedol
This time around, Google DeepMind embarked on a journey to write an algorithm that plays Go. Go is an ancient Chinese board game in which the opposing players try to capture each other's stones on the board. Behind the veil of this deceptively simple ruleset lies an enormous amount of depth and complexity. As scientists like to say, the search space of this problem is significantly larger than that of chess. It is so large that one often has to rely on human intuition to find a suitable next move, so it is not surprising that playing Go at a high level was widely believed to be intractable for machines. The result is Google DeepMind's AlphaGo, the deep learning based program that defeated a professional player and world champion, Lee Sedol.
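To get a feel for how much larger that search space is, here is a quick back-of-the-envelope calculation. The figures below (a branching factor of roughly 250 and a game length of roughly 150 moves for Go, versus roughly 35 and 80 for chess) are the approximate values cited in the paper; the snippet only illustrates the order-of-magnitude gap between the two game trees.

```python
# Rough game-tree size estimate: (branching factor) ^ (game length).
# Approximate figures: Go has b ~ 250, d ~ 150; chess has b ~ 35, d ~ 80.
import math

def log10_tree_size(branching_factor, depth):
    """Base-10 logarithm of b^d, i.e. the exponent of the game-tree size."""
    return depth * math.log10(branching_factor)

go_exponent = log10_tree_size(250, 150)
chess_exponent = log10_tree_size(35, 80)

print(f"Go game tree:    ~10^{go_exponent:.0f} positions")
print(f"Chess game tree: ~10^{chess_exponent:.0f} positions")
```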
What is also important to note is that the techniques used in this algorithm are general and can be applied to a large number of different tasks. By this I mean not AlphaGo specifically, but its building blocks: Monte Carlo Tree Search, the value network, and deep neural networks.
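To make the tree search part more concrete, here is a minimal sketch of plain Monte Carlo Tree Search with the standard UCT selection rule and random rollouts. This is not AlphaGo's actual implementation, which guides selection and evaluation with its policy and value networks instead of random playouts; the GameState interface (legal_moves, play, is_terminal, result, copy) is a hypothetical placeholder for illustration.

```python
# Minimal Monte Carlo Tree Search (UCT) sketch with random rollouts.
# The GameState interface used here is assumed, not taken from the paper.
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state = state
        self.parent = parent
        self.move = move                     # move that led to this node
        self.children = []
        self.untried = list(state.legal_moves())
        self.visits = 0
        self.wins = 0.0

    def uct_child(self, c=1.4):
        # Pick the child maximizing the UCT score: exploitation + exploration.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        state = root_state.copy()

        # 1. Selection: descend with UCT while nodes are fully expanded.
        while not node.untried and node.children:
            node = node.uct_child()
            state.play(node.move)

        # 2. Expansion: add one child for a move not yet tried.
        if node.untried:
            move = node.untried.pop()
            state.play(move)
            child = Node(state.copy(), parent=node, move=move)
            node.children.append(child)
            node = child

        # 3. Simulation: random playout until the game ends.
        while not state.is_terminal():
            state.play(random.choice(state.legal_moves()))

        # 4. Backpropagation: update win/visit statistics along the path.
        # result() is assumed to return 1.0 for a win and 0.0 for a loss from
        # the root player's perspective (a full version flips sign per player).
        outcome = state.result()
        while node is not None:
            node.visits += 1
            node.wins += outcome
            node = node.parent

    # Return the most visited move at the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

AlphaGo's contribution, roughly speaking, is replacing the random playouts and uniform move expansion above with a policy network that suggests promising moves and a value network that estimates who is winning, which is what makes the search tractable on a board as large as Go's.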
The paper “Mastering the Game of Go with Deep Neural Networks and Tree Search” is available here:
https://storage.googleapis.com/deepmi…
http://www.nature.com/nature/journal/…
A great Go analysis video by Brady Daniels. Make sure to check it out and subscribe if you like what you see there!
https://www.youtube.com/watch?v=dOQsY…
The mentioned post on the Go subreddit (r/baduk):
https://www.reddit.com/r/baduk/commen…