DeepMind GO for StarCraft.

I just watched a fascinating game of Go, in which world champion Lee Sedol defeated Google's DeepMind program for the first time.
DeepMind admitting the defeat.

After losing three games, the self-learning human Lee Sedol has finally beaten the DeepMind machine. It is really fascinating to see that humans, with their self-learning capabilities, can by themselves learn to outsmart machines.
I do not know the game of Go, but it seems to me an ideal board game for computers to play: the basic rules seem simple to turn into algorithms, and it is the humongous number of possible moves that seems to be the challenge. Storing data and calculating the next move should then just be a matter of capacity. But it is probably trickier than I think. DeepMind seems to be faster than Lee Sedol, who needed the extra one-minute slots for the entire endgame. I would like to see statistics on thinking time for the players as the game progressed. Actually, I would like to see stats on everything from these games. And it would be really interesting if DeepMind could explain the reasoning behind every move.
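A rough back-of-envelope calculation shows why capacity alone is probably not enough. Using the commonly cited rough figures of about 250 legal moves per position and about 150 moves per game (both approximations, not exact values), the game tree is far too large to enumerate:

```python
import math

# Back-of-envelope estimate of why brute-force search fails for Go.
# Commonly cited rough figures (assumptions, not exact values):
branching_factor = 250   # average legal moves per position
game_length = 150        # moves in a typical game

# The full game tree holds roughly b^d positions; work in log10
# to avoid astronomically large numbers.
exponent = game_length * math.log10(branching_factor)
print(f"game tree holds roughly 10^{exponent:.0f} positions")
# prints: game tree holds roughly 10^360 positions

# For comparison, the observable universe is estimated to hold
# only ~10^80 atoms, so no amount of memory or raw speed alone
# can enumerate the tree; smarter move selection is required.
```

That gap between 10^360 positions and anything physically storable is presumably why AlphaGo relies on learned evaluation rather than exhaustive search.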
I have written some posts on my thoughts about AI. I do not believe much in the imminent machine takeover, the so-called singularity. We have not figured out what human intelligence is; we have come a long way, but we are nowhere near a full description of our intelligence. It may not be necessary to understand our own brain to create a smarter machine brain, but I think it would help if we knew ourselves first. The singularity is still 20 years in the future, and I predict it will remain so for a while.
Nevertheless, it is no small feat for a machine to lead Go 3-1 against a human. Now I would like to see a machine take on something considerably harder: beating a human in a game of StarCraft.

Oh fuck! DeepMind's AlphaGo program won the last game, so 4-1 to the machine against the self-learning human brain. I had hoped for a second human victory. Still, AI is in its infancy; machines are far from human intellectual capacity. Go is just a deterministic board game with simple rules. It is the multitude of possible moves that makes it hard for humans. And that also makes it hard for humans to study AlphaGo and learn how to beat the program. Which, by the way, is an interesting idea: self-learning humans learning how to outsmart machines by studying them. I still put my two cents on humans.
