I would call it a very good implementation of "old school" AI, where the behavior of your actors is all about utility curves, Monte Carlo search, and genetic algorithms. Basically all math/algorithm based stuff, kind of like old expert system AI implementations.
Of course "new school" AI is all about neural networks that can automatically learn those complex actor behaviors without the developer explicitly specifying all those mathematical algorithms.
Like many people here I'm very interested in working on hooking up modern "new school" AIs to virtual worlds, so it's very interesting to see Grail as a good concrete example of the algorithmic approach to game AI. I suspect some hybrid fusion of both approaches may give us some interesting and fun AI behavior.
[1] https://grail.com.pl/media/Grail_Whitepaper_June_2021.pdf
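The utility-curve idea above can be sketched in a few lines. Everything here (the action names, the particular curve shapes, the state keys) is an illustrative assumption, not how Grail actually does it:

```python
# Minimal utility-AI sketch: each action scores a utility from the
# current world state; the actor picks the highest-scoring action.
# Action names and curve shapes are made up for illustration.

def attack_utility(state):
    # Rises with our health, falls off as the enemy gets farther away.
    return state["health"] * max(0.0, 1.0 - state["enemy_distance"] / 100.0)

def flee_utility(state):
    # Low health makes fleeing attractive.
    return 1.0 - state["health"]

ACTIONS = {"attack": attack_utility, "flee": flee_utility}

def choose_action(state):
    return max(ACTIONS, key=lambda name: ACTIONS[name](state))

print(choose_action({"health": 0.9, "enemy_distance": 20.0}))  # attack
print(choose_action({"health": 0.1, "enemy_distance": 20.0}))  # flee
```

The appeal of this style is that each curve is a small, tunable, inspectable piece of math, which is exactly what makes it "old school" compared to an end-to-end learned policy.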
The biggest challenge with these is coming up with a function that clearly represents the fitness of a candidate. EAs are worthless if you can't provide good selection pressure over time. In most practical cases there are multiple conflicting objectives. I've recently discovered that you can skip all the nasty objective-weighting business if you just select the Pareto front every generation. You never know when a trip down a less important path might lead to the global optimum.
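Pareto-front selection is easy to sketch: keep every candidate that no other candidate beats on all objectives at once. A minimal version, assuming all objectives are to be maximized (the objective tuples here are invented for illustration):

```python
# Sketch of Pareto-front selection for a multi-objective EA: instead of
# collapsing objectives into one weighted score, keep every candidate
# that no other candidate dominates.

def dominates(a, b):
    """a dominates b if a is >= on every objective and > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """population: list of objective tuples, e.g. (fun_score, balance_score)."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

pop = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
print(pareto_front(pop))  # (0.4, 0.4) drops out: (0.5, 0.5) dominates it
```

This naive version is O(n^2) per generation; real libraries (e.g. NSGA-II-style non-dominated sorting) add crowding distance so the front doesn't collapse onto one region.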
Today's game designers might use this sort of thing to find game-breaking exploits before those damned players do.
Many games were ruined post-release because developers tried to make the gameplay "more balanced". This usually leads to everything feeling the same.
Helldivers 2 was a good recent example, which is surprising since it's not a PvP game at all but co-op PvE only.
NEAT is really well suited to modeling behaviors in a less structured way, and it gets used to play video games.
One example... of course, you've got to keep it civil. No pro players dunking on the mom-and-pop gamers who just want five minutes of fun on a tablet.
Like if all the humans use the AK because it's super overpowered, and your optimization algorithm sets the AK damage to 0, what are your "human" bots going to do? All the training data says to use the AK.
This approach only makes sense if you’re evaluating bot-optimal play outcomes.
It also takes away a lot of the design thinking behind balance. You probably don’t want to nerf the AK. You probably want to buff counterplay options (guns are not a great example but still)
There is precedent in Maia Chess, which does a good job of mimicking human chess players at various Elo ratings. Of course, it's a lot more difficult to extrapolate to games with significantly larger state spaces and move sets, but I imagine this space will be further explored in the near future.
> And if you’re going to be optimizing game parameters that means you’re assuming that either the AI doesn’t change its behaviors even though the game is different or you’re assuming that humans will adapt in the same way the bots do.
This could be addressed by including the game parameters of interest (which map, which character, the weapon stats at the time of play, etc.) as inputs to the model during training.
> It also takes away a lot of the design thinking behind balance. You probably don’t want to nerf the AK. You probably want to buff counterplay options (guns are not a great example but still)
Tool-assisted QA is nothing new. Using AI is a newer iteration of the concept. You still have to interpret the results it gives and make decisions based on that. The design thinking isn't replaced, it's augmented with additional insights. Are those insights potentially inaccurate? Sure, but you can account for that with sanity checks/manual intervention/play testing.
You need both the power fantasy of playing the strongest characters and the underdog tale of winning against them with a less meta pick.
OK, I read the article. I'm very skeptical of this approach. I doubt we can actually uncover fitness functions that reliably map to "fun", and I believe it would require huge engineering effort to keep the game "simulable." Their examples aren't convincing. What would be convincing is a full, complex, and _fun_ game built using these techniques.
Also the article seems like an ad for their AI solution.
We have a god complex that brain surgeons can only envy. How many universes have they built?
You don't have to. Get empirical data and build a proxy evaluator from it. That's usable enough for most evolutionary algorithms. I've done this sort of thing for very subjective metrics and actually sold something built with it.
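A minimal sketch of that idea: log playtest ratings against a parameter, then use a cheap proxy built from those logs as the fitness function inside the EA loop. The data, the parameter, and the nearest-neighbour proxy here are all illustrative assumptions:

```python
# Logged (parameter_value, average tester "fun" rating) pairs from
# playtests -- invented numbers for illustration.
data = [(0.2, 3.1), (0.5, 4.0), (0.8, 3.4), (1.0, 2.5)]

def proxy_fitness(x, data):
    # Nearest-neighbour proxy: score a candidate by the rating of the
    # closest parameter value we actually tested with humans.
    return min(data, key=lambda p: abs(p[0] - x))[1]

# Inside the EA, rank candidates by the proxy instead of running a
# playtest per evaluation.
candidates = [0.1, 0.45, 0.9]
best = max(candidates, key=lambda x: proxy_fitness(x, data))
print(best)  # 0.45 -- nearest to the best-rated tested value (0.5)
```

A fitted regression (or any model that interpolates the logged ratings) works the same way; the point is that the expensive subjective measurement happens once, offline, and the EA only ever queries the cheap proxy.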
So you are saying procedural level design could be solved with generative AI?