I have no idea how many other developers use genetic programming to build AI into their games, but there will probably be at least someone who can nod along to my little rant and feel my pain.
Now I know that right off the bat, people will want to ask a bunch of questions about the game this AI is meant for, but that isn't really the point of this post. Just throwing that out there since I know it will happen otherwise.
Usually I have great success with genetic programming, but for the past 4 days I have struggled greatly with what I am working on. Terminology in genetic programming seems to be all over the place, and it's rare to run into 2 people who use the exact same phrases to describe the same things, so bear with me if I describe things in ways you aren't used to, hehe. The first brain framework had 512 nodes (degrees of freedom, attributes, neurons, or pick your preferred term). That was a little on the high end for me, since I tend to find ways to boil things down to a smaller number. In my experience the smaller number is a bit easier to work with, even if in theory the higher number can lead to more complex AI.
Well it ran for about 30,000 generations, with many stops to make tweaks to help things progress, but I got nowhere. I couldn't even get the AIs to master the basics of gameplay and keep themselves alive, without fighting even being a factor. A generic way to describe this without going into specifics of the game: these AIs never figured out how to grow enough food to keep their people alive, so learning to advance and fight never even started to happen.
Grudgingly I decided my starting framework must be flawed, so I highlighted hundreds of lines of code and pressed delete. That always stings. I thought and thought, but couldn't come up with a way to boil the framework down into a smaller brain design, so I threw that idea out the window. The next framework wound up being over 133,000 nodes per AI, using a pretty different approach. I'm happy to say that this worked and got the AIs growing food, collecting supplies, and generally keeping themselves alive after only a few thousand generations. Success, or so it seemed.
After tens of thousands more generations, I realized that no AI who attacked other AIs ever survived long. They were all going for a defensive approach and just seeing who could last the longest (in game ticks) until lack of resources killed off the slowest. This was a big problem, because combat is supposed to be a huge part of the game, and I need the AI to rely on it heavily.
Now years back I mentioned somewhere that I had grown 2 AIs who eventually decided to work cooperatively, and messed up a game I was making at the time. I held, and still hold, that as one of the neatest and most surprising experiences I've had while programming, because it defied all odds and I still struggle to wrap my mind around how it happened. That was a very different situation though, so while it may seem similar to people who remember me telling that story, what's happening this time isn't the same thing.
I lowered the cost of attacking and raised its destructive impact to the extremes, but still no luck. AIs would develop who fiercely attacked the others, but after a few generations they'd always drift back to being peaceful. I let the system run non-stop until it had grown more than half a million generations, but still no luck. Clearly I had done something wrong again.
My next realization came when I thought about how I have the test set up. There are 3 AIs in each test together; at the end, the one who dies first is deleted, and the one who survived the longest is copied + mutated to serve as the new 3rd AI in the next round. In the past I've rarely done tests with such low numbers of AI players, but then again I normally don't have such high numbers of nodes in each AI. Running just 3 AIs with this many nodes is as processor intensive as running dozens of AIs in a different game where each has a low node count. I figured this 3-player approach was my problem.
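For anyone who thinks better in code, here's a rough sketch of that 3-player selection loop in Python. The game itself is stubbed out with a random ranking, and the mutation rate and node values are made-up numbers, not anything from the real project:

```python
import random

def mutate(brain, rate=0.01):
    # Illustrative mutation: perturb a small fraction of node values.
    # The real mutation parameters aren't given in the post.
    return [w + random.gauss(0, 0.1) if random.random() < rate else w
            for w in brain]

def run_match(brains):
    # Stand-in for the actual game. Returns player indices ordered
    # from first death to longest survivor.
    order = list(range(len(brains)))
    random.shuffle(order)
    return order

def next_generation(brains):
    # The 3-player scheme: the first AI to die is deleted, and the
    # longest survivor is copied + mutated into the empty 3rd slot.
    first_dead, middle, winner = run_match(brains)
    return [brains[middle], brains[winner], mutate(brains[winner])]

population = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(3)]
for _ in range(100):
    population = next_generation(population)
print(len(population))  # population size stays at 3 every round
```

Note how two of the three slots are always the previous winner and a near-copy of it, which is exactly the "2 nearly identical opponents" situation described below.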
As it stood, eventually an AI would be mutated to attack more; for the sake of perspective, let's say that's you. So you're about to play for the first time, and you are facing 2 opponents who are nearly identical. One will be a past winner, and the other will be his slightly mutated clone from 2 games back. Even if your newly mutated strategy happens to be superior, it is essentially 2 versus 1. You may use attacks to beat the crap out of one of them, but the other is still going to perform very well by just saving up supplies and lasting as long as possible. Most of the time all you are doing is fighting for 2nd place. The winner gets cloned to fill the empty 3rd slot, so each new round you're still battling 2 nearly identical opponents. Eventually your luck runs out, you land in 3rd place, and you're bred out of the gene pool. It's only a theory, but I think this is why even my decent fighters keep getting bred out after several generations.
I'm rerunning everything with 8 AI players to hopefully get better results. With only the very worst player being replaced each round, this should give AIs with new strategies at least a handful of rounds to try to establish themselves before they sink to the bottom and are deleted. I really would prefer to run with larger numbers, but the tests cycle pretty slowly. It would also be nice to have the top 2 players breed instead of just the top 1, but with only 8 players I worry that would do more harm than good. It sucks enough needing to wait hours to see if a small change is having any effect, so I wouldn't like having to wait 3 days for the same thing. ROFL!
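The tweak sketched in code, for comparison. Again the game is stubbed out with a random ranking and the mutation numbers are invented; the only part that matters is the replacement rule, which now deletes just one player per round:

```python
import random

def mutate(brain, rate=0.01):
    # Illustrative mutation; real parameters aren't given in the post.
    return [w + random.gauss(0, 0.1) if random.random() < rate else w
            for w in brain]

def run_match(brains):
    # Stand-in for a real game: player indices ordered from first
    # death to longest survivor.
    order = list(range(len(brains)))
    random.shuffle(order)
    return order

def next_generation(brains):
    # 8-player variant: only the first AI to die is deleted, and the
    # longest survivor is copied + mutated into the open slot. A new
    # strategy now has to finish dead last before it gets bred out,
    # so it has several rounds to establish itself.
    ranking = run_match(brains)
    worst = ranking[0]
    winner_brain = brains[ranking[-1]]
    keep = [b for i, b in enumerate(brains) if i != worst]
    return keep + [mutate(winner_brain)]

population = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(8)]
for _ in range(100):
    population = next_generation(population)
print(len(population))  # population size stays at 8 every round
```

The same function would also work for a bigger population if the tests ever cycle fast enough to allow one.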
So yeah, this has all been frustrating the crap out of me lately, and it feels good to vent about it a bit. Welcome back AG.net forum, we missed having you around.
Please try out my games and programs: