Stop Thinking, Just Do!

Sung-Soo Kim's Blog

How DeepMind Conquered Go With Deep Learning


5 March 2016

This time around, Google DeepMind embarked on a journey to write an algorithm that plays Go. Go is an ancient Chinese board game in which two players place stones on a board, trying to surround territory and capture each other’s stones. Behind the veil of this deceptively simple ruleset lies an enormous layer of depth and complexity. As scientists like to say, the search space of this problem is vastly larger than that of chess. It is so large that one often has to rely on human intuition to find a suitable next move, so it is not surprising that playing Go at a high level is, or at least was, widely believed to be intractable for machines. The result of DeepMind’s effort is AlphaGo, a deep-learning-based program that defeated Fan Hui, a professional player and European champion.
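To get a feel for why the search space dwarfs that of chess, a rough game-tree estimate is b^d, where b is the average number of legal moves per position and d is a typical game length. The figures below are commonly cited approximations, not numbers from this post; this is a minimal back-of-the-envelope sketch under those assumptions:

```python
import math

def tree_size_log10(branching: int, depth: int) -> float:
    """Approximate log10 of the game-tree size b^d."""
    return depth * math.log10(branching)

# Commonly cited rough averages (assumed here for illustration):
# chess: ~35 legal moves per position, games ~80 plies long
# Go:    ~250 legal moves per position, games ~150 plies long
chess = tree_size_log10(35, 80)
go = tree_size_log10(250, 150)

print(f"chess game tree ~ 10^{chess:.0f} positions")
print(f"Go game tree    ~ 10^{go:.0f} positions")
```

Even with generous rounding, the Go estimate comes out hundreds of orders of magnitude beyond chess, which is why brute-force search alone cannot carry a Go engine the way it helped carry chess engines.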

The paper “Mastering the Game of Go with Deep Neural Networks and Tree Search” is available here:

Wired’s coverage of AlphaGo:

Video coverage from DeepMind and Nature:

Myungwan Kim’s analysis:

Photo credits: Watson - AP Photo/Jeopardy Productions, Inc. Fan Hui match photo - Google DeepMind

Go board image credits (all CC BY 2.0): Renato Ganoza - Jaro Larnos (changes were applied, mostly recoloring) - Luis de Bethencourt

Detailed analysis of the games against Fan Hui and some more speculation:

Subscribe if you would like to see more of these!

Splash screen/thumbnail design: Felícia Fehér

Károly Zsolnai-Fehér’s links:
