
Looks very cool!

I have a question: with AI nowadays able to dominate pretty much any board game, is there any hope for anyone 'hand coding' a strategy here against someone who just trains an AI model?

I wonder how the problem space changes in a game such as this, where it may be one player against eight opponents, vs. a 1v1 game such as chess.



Last year, I believe only one team even _tried_ an ML snake and they didn't make enough progress to actually compete. "Hand coding," so far, has been the way to go.

Not sure if it's what you meant, but it's less "hand coding" and more about trying to enumerate board-state possibilities (i.e. "I'm hungry" or "there's a snake at x") and addressing them via generalized algorithms.


That's interesting. I updated my comment to be a bit more specific.

I'd love to know how the problem space looks in this game vs chess for example as I mentioned.

My terminology was a little vague; what I meant is 'black box' solutions, I guess, where the logic cannot be inferred by looking at the code.


Very plainly: no, hand coding is not going to be able to compete with a fully developed ML/AI agent. Even simple hand-coded agents use AI techniques such as minimax, which is fundamentally an "AI" technique even though most people today don't think of it that way, since there are no deep neural networks involved.
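For anyone unfamiliar, minimax in its simplest form is just a recursive search over game states. A minimal sketch on a toy game tree (the tree and its leaf scores are made up for illustration; a real snake agent would also need to handle simultaneous moves, which plain minimax doesn't):

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Plain minimax: best achievable score for the player to move,
    assuming the opponent plays optimally."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        return max(minimax(c, depth - 1, False, children, evaluate) for c in kids)
    return min(minimax(c, depth - 1, True, children, evaluate) for c in kids)

# Toy game tree: internal nodes are strings, leaves are heuristic scores.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
children = lambda s: tree.get(s, []) if isinstance(s, str) else []
evaluate = lambda s: s if isinstance(s, int) else 0

# Maximizer picks branch "a": min(3, 5) = 3 beats min(2, 9) = 2.
best = minimax("root", 2, True, children, evaluate)
```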

However, this contest looks unpopular enough that you can likely still compete because no one has spent the effort to actually build a proper deep-neural-network agent, etc.

You can work out the problem space yourself by simply calculating all the possible board states. Or you could count move sequences: each snake has 3 legal moves per turn (it can't reverse into its own neck), so roughly (3 ^ players) raised to the average number of turns.
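As a back-of-the-envelope sketch of that move-sequence count (the 100-turn game length is an assumption for illustration, not a real statistic from the game):

```python
def move_states(num_snakes, avg_turns):
    # Each snake has at most 3 legal moves per turn (no reversing),
    # and all snakes move simultaneously, so the per-turn branching
    # factor is 3^num_snakes.
    return (3 ** num_snakes) ** avg_turns

# Hypothetical numbers: a 1v1 game vs. an 8-snake game, ~100 turns each.
duel = move_states(2, 100)    # 9^100, roughly 2.7e95
royale = move_states(8, 100)  # 6561^100 = 3^800, roughly 5e381
print(f"1v1: ~10^{len(str(duel)) - 1} move sequences")
print(f"8-player: ~10^{len(str(royale)) - 1} move sequences")
```

Either way, the space dwarfs chess's often-quoted ~10^120 game-tree size, which is part of why exhaustive search isn't on the table.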


This is a super interesting question to us, and to be honest we're not sure of the answer. Is it possible for a single ML snake to achieve 100% 1v1 win rate? What about 1v7?

What we've learned so far is that playing against multiple snakes, especially AI/ML-driven snakes, adds an incredible amount of complexity to the game, and the top-ranked snakes tend to switch around quite a bit as players adjust to attack or defend against specific strategies.

If anyone has formal theory around this sort of problem, we'd love to investigate further.


Sure, the answer is trivial: a 100% perfect agent isn't possible. Just have an AI play against itself. Since the game uses simultaneous moves, there is no "first move advantage" or anything, so it would play out exactly as a mirror (give or take some random noise in the model). The only thing that would break the symmetry is where food spawns, and which snake it spawns closer to is just a coin flip.

In an 8-player game, it's a little different because there are 'side' and 'corner' positions. Obviously this isn't symmetric anymore, so if you played many games, one position would reveal itself as clearly superior. That likely depends, again, on the probability of food appearing in a snake's "zone of control," which you could represent with a Voronoi diagram.

My gut instinct is that a 'side' position would eventually be revealed to be stronger because the snake has more options and freedom.

Of course, in a tournament setup, it's fully possible for an AI to have 100% win rate just by random chance, because you are only playing a finite number of games.


More than 2 players adds another level of complexity: alliances. Snake is a bit like Risk in that it's possible to attack a specific player while signalling intent.

We should expect any proper AI to read and understand the others and work out who should be avoided or ganged up on. Rock-Paper-Scissors tournaments have bots that use game moves to recognize each other and then dump all their points onto a single player -- something similar could happen in Snake as well.


Glad to know the answer isn't a definitive 'yes' right now, then! It's certainly an interesting space; I bet you enjoy reading through the solutions :)


Some competitions like this only give you a certain number of "cycles" per turn, so hand-written solutions would have an "advantage" in that sense.

I don't know if that method is applicable to this particular game, though, since the processing is done by the user.


From what I found in their documentation, the limitation is latency. Your server can use whatever compute cycles it wants to formulate a decision for a single turn, but if it doesn't respond within 250 ms, the snake just carries on in its current direction instead of making a move that turn.
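One consequence is that a deep-search snake has to budget its own wall-clock time. A common pattern is iterative deepening against a deadline; a sketch, where `search_to_depth` is a hypothetical stand-in for a real game-tree search and the budget leaves headroom under the 250 ms limit for network latency:

```python
import time

def search_to_depth(state, depth):
    """Hypothetical stand-in for a real search; deeper takes longer."""
    time.sleep(0.01 * depth)
    return ["up", "down", "left", "right"][depth % 4]

def choose_move(state, budget_s=0.200):
    """Iterative deepening under a wall-clock budget: always keep the
    best answer found so far, and never start an iteration that a
    pessimistic cost estimate says won't finish in time."""
    deadline = time.monotonic() + budget_s
    best = "up"  # safe default if we run out of time immediately
    depth = 1
    while time.monotonic() + 0.02 * depth < deadline:
        best = search_to_depth(state, depth)
        depth += 1
    return best

move = choose_move({})
```

The nice property is that a timeout costs you depth, not the whole turn: you always have some legal answer ready when the deadline arrives.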



