The Orion's Arm Universe Project Forums

Full Version: humans beaten in the 2015 Arimaa match
This is rather amazing. I recently talked about Arimaa in this discussion and now it's over:

Quote:The Arimaa Challenge was won on April 18, 2015 and is no longer available. This page exists for historical purpose.

Just a little more than 10 years after the invention of the game, the human defenders have been defeated... . Smile Maybe one of the reasons why humans haven't been defeated in Go yet is the game's long history. In Go, humans had much more time to invent efficient ways of playing. In Arimaa, humans didn't have this kind of "experience advantage" over the AI.

Here are some discussions about all this:
For those of us unaware perhaps you could explain what Arimaa is Smile
Arimaa is a strategy board game, developed in 2003 by computer scientist Omar Syed as a response to Garry Kasparov's 1997 defeat against Deep Blue in chess. The game's rules are designed to defeat "refined brute force" methods like alpha-beta search. For example, unlike in chess, the initial placement of the pieces is chosen freely by each player rather than fixed, and each player makes up to four steps in a single turn. Details of the rules can be found in the Wiki article:

and also on the official site:
Go might soon go to the machines. A new neural-network/search hybrid known as Darkforest2 is at least second dan level and likely higher, as it beat its predecessor, Darkforest1, which managed to achieve first/second dan playing online games.

PopSci report:

Original article:
I have developed a neural network architecture which is ridiculously effective for the specific task of learning long-ish term strategies to accomplish tasks. Arimaa (or any turn-based adversarial game, really) sounds like exactly the sort of thing it *could* do very well.

It basically consists of a recurrent neural network trained with backpropagation-through-time, which is also capable of direct backprop on the outputs of its previous states. A few of the outputs are reserved for predictions of how well it will be doing over various timeframes, and it gets positive/negative feedback (on ALL of its past decisions within that timeframe) depending on whether it beat or failed to beat the predictions it made that many turns ago. Likewise, the predictions themselves can be refined with backpropagation every round (against previous states), using the current state as the 'correct' value.
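As a sanity check that this kind of delayed-feedback scheme trains at all, here is a toy numpy sketch, not the poster's actual code: it only shows the core idea of reserving an output as a prediction of a "score" K steps ahead and scoring it against the realized value K turns later. The input stream, horizon, learning rate, and fixed recurrent weights are all made-up assumptions.

```python
import numpy as np

# Toy sketch (hypothetical, not the architecture described above in full):
# a recurrent cell reserves an output as a prediction of a "score" K steps
# ahead; when step t arrives, the prediction made at step t-K is compared
# with the value actually observed, and the error is pushed back onto the
# state that produced the prediction.

rng = np.random.default_rng(0)
H, K = 8, 3                          # hidden size, prediction horizon
W_h = rng.normal(0.0, 0.4, (H, H))   # recurrent weights (kept fixed here)
w_p = rng.normal(0.0, 1.0, H)        # linear prediction head (trained)
lr = 0.05

def step(h, x):
    """One recurrent step: returns the new hidden state and a score prediction."""
    h_new = np.tanh(W_h @ h + x)
    return h_new, float(w_p @ h_new)

h = np.zeros(H)
history = []   # (hidden state, prediction) per step, for delayed feedback
losses = []
for t in range(400):
    x = np.full(H, np.sin(t / 5.0))      # toy input stream
    h, pred = step(h, x)
    history.append((h.copy(), pred))
    if t >= K:
        realized = np.sin(t / 5.0)       # toy "score" the net must anticipate
        h_old, pred_old = history[t - K]
        err = pred_old - realized
        # Feedback on the K-steps-old prediction: an LMS-style gradient
        # step on the head, against the state it was computed from.
        w_p -= lr * err * h_old
        losses.append(err ** 2)

early, late = np.mean(losses[:50]), np.mean(losses[-50:])
print(f"mean squared prediction error: early={early:.4f}, late={late:.4f}")
```

Even this crippled version (no BPTT through the recurrent weights, just a trained prediction head) steadily reduces its prediction error, which is the "beat or fail to beat your own past predictions" signal the post relies on.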

Of course, neural networks are not the "refined brute force" approach that Arimaa was designed to be difficult for. Go is even more resistant to refined brute-force search than chess, so I don't really understand the motivation for creating Arimaa, unless it's good intellectual fun to play as, you know, a game for humans.

But, hmm, I think Go would be very amenable to the recurrent-network approach, too.
Yes, it appears that something significant is happening in the field of AI research. Demis Hassabis recently claimed that his AI approach might(?) "crack" Go: