ML in Games – Part 1

The next step in my research process is to look more specifically at games. My goal in this post is to discover and discuss several situations where machine learning could be useful in a game. I will draw on published studies where available, as well as my own opinions.

1.0 – Adaptive Artificial Intelligence

An important area of focus in many newer computer games is their artificial intelligence (AI) systems. AI systems are used to control non-player characters (NPCs) or other game entities in order to create challenge for the player. As mentioned before, players vary widely, and providing them with just enough challenge relative to their play style and skill level makes the experience more enjoyable (Hagelback and Johansson, 51).

1.1 Some Theory

In general, an AI requires architecture and algorithms, knowledge, and an interface to the environment (Laird and van Lent, 2005). When we start looking at adaptive, learning AI, other factors come into play. Nick Palmer outlines his method for creating what he calls ‘Learning Agents’ in his online essay Machine Learning in Games Development. His setup involves the following components, and a sketch of how they might fit together follows the list:

Learning Element – which is responsible for actually changing the behaviour of the AI based on its past level of success or failure

Performance Element – which decides which action the AI will take based on its current knowledge of the environment

Performance Analyzer – which judges the actions of the performance element. Palmer clarifies that this judgement must be made on the same information that was used by the performance element to make its decision. The analyses made by this element help decide how, or if, the learning element alters the behaviour of the AI.

Curiosity Element – which understands the goals of the AI and challenges the performance element to try new behaviour possibilities that may improve the state of the AI with respect to its overall goals. This element helps keep the AI from settling for moderately successful behaviour (like hiding in cover the whole game so as not to die).
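To make that structure concrete, here is a minimal JavaScript sketch of how the four elements might be wired together. The property names, the damage-based scoring, and the fixed exploration rate are my own assumptions for illustration; Palmer describes the components conceptually rather than prescribing an implementation.

// A rough sketch of Palmer's learning-agent loop; names and logic are illustrative.
const agent = {
  // The behaviour that learning will change: weights over possible actions.
  knowledge: { attack: 0.5, takeCover: 0.5, flank: 0.5 },

  // Performance element: chooses the action with the highest current weight.
  performanceElement(observation) {
    const actions = Object.keys(this.knowledge);
    return actions.reduce((best, a) => (this.knowledge[a] > this.knowledge[best] ? a : best));
  },

  // Performance analyzer: judges the chosen action using the same observation
  // the performance element saw (here, a crude damage-dealt-minus-taken score).
  performanceAnalyzer(observation, action) {
    return observation.damageDealt - observation.damageTaken;
  },

  // Learning element: nudges the chosen action's weight up or down based on the score.
  learningElement(action, score) {
    const updated = this.knowledge[action] + 0.1 * Math.sign(score);
    this.knowledge[action] = Math.min(1, Math.max(0, updated));
  },

  // Curiosity element: occasionally forces a different action so the agent
  // does not settle for merely adequate behaviour (like hiding in cover all game).
  curiosityElement(action) {
    if (Math.random() < 0.1) {
      const actions = Object.keys(this.knowledge);
      return actions[Math.floor(Math.random() * actions.length)];
    }
    return action;
  },

  // One step of the loop: decide, maybe explore, judge, then learn.
  step(observation) {
    let action = this.performanceElement(observation);
    action = this.curiosityElement(action);
    const score = this.performanceAnalyzer(observation, action);
    this.learningElement(action, score);
    return action;
  },
};

Calling agent.step({ damageDealt: 10, damageTaken: 25 }) each round would gradually shift the weights toward actions that score well, while the curiosity element keeps probing alternatives now and then.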

In general, developers who use adaptive AI in game development will train the AI and allow it to improve during development, but then capture the best results and build them statically into the final game (Palmer).

1.2 Supervised Learning and AI

Palmer mentions that his proposed system uses ideas from reinforcement learning, and I found that this is often the case with game AI. I wanted to look into why that might be. Thore Graepel explored both approaches for two different games in his talk “Learning to Play: Machine Learning in Games”. Supervised learning involves a lot of training and is aimed more at predicting outcomes than at generating real-time behaviour (Brownlee, 2013). This type of learning is used to create game-playing AIs for games such as chess and Go, because these games have large sets of past moves and results that can be used for training. They are also not real-time games, so the AI has time to think about the best move (Graepel).

1.3 Behaviour-Based Example

In their paper “Dynamic Game Difficulty Scaling Using Adaptive Behavior-Based AI”, Tan, Tan and Tay explore one way to implement an adaptive AI system using a behaviour-based controller. In essence, they create a system that gives the AI seven different behaviours. These behaviours can then be activated or deactivated to change the skill level of the AI. A digital chromosome stores seven real numbers from 0 to 1, each representing the probability of a behaviour being activated, and these values are updated at run-time to keep the AI competitive but not too difficult. In JavaScript, we could set up a chromosome quite simply with an object; that way we can even assign readable names to each value. A minimal sketch might look like this (the behaviour names below are placeholders, not the paper's actual seven behaviours):
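// One activation probability (0 to 1) per behaviour.
// These behaviour names are placeholders, not the seven defined in the paper.
const chromosome = {
  rushPlayer:   0.5,
  keepDistance: 0.5,
  takeCover:    0.5,
  flankLeft:    0.5,
  flankRight:   0.5,
  callForHelp:  0.5,
  retreat:      0.5,
};

// At the start of a round, each behaviour is switched on with its stored probability.
function activateBehaviours(chromosome) {
  const active = {};
  for (const name in chromosome) {
    active[name] = Math.random() < chromosome[name];
  }
  return active;
}

const active = activateBehaviours(chromosome);
// e.g. { rushPlayer: true, keepDistance: false, takeCover: true, ... }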

The idea here is that upon any win or loss, the values within the chromosome are increased or decreased depending on whether the corresponding behaviours were activated or deactivated during the round. Over time, this process allows the chromosome to adjust in favour of the behaviours that cause the AI to win. Because each behaviour may be activated or deactivated each round, a properly tuned algorithm will vary in skill around the threshold of the player's ability (Tan, Tan and Tay, 290-293).
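As a rough illustration of that update step, the sketch below nudges each probability after a round and clamps it to the [0, 1] range. The fixed step size and the symmetric treatment of activated and deactivated behaviours are simplifying assumptions on my part; the adaptation rules in the paper are more involved.

// After a round, shift the chromosome toward behaviour combinations that win.
// `active` maps behaviour name -> whether it was switched on this round;
// `aiWon` is true if the AI won. Step size and clamping are illustrative choices.
function updateChromosome(chromosome, active, aiWon) {
  const step = 0.1;
  for (const name in chromosome) {
    // Behaviours that were on during a win (or off during a loss) become more likely;
    // behaviours that were on during a loss (or off during a win) become less likely.
    const direction = active[name] === aiWon ? 1 : -1;
    chromosome[name] = Math.min(1, Math.max(0, chromosome[name] + direction * step));
  }
}

updateChromosome(chromosome, active, /* aiWon = */ false);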

References

Laird, John; van Lent, Michael. “Machine Learning for Computer Games.” Game Developers Conference. Moscone Center West, San Francisco, CA. 10 Mar. 2005. Lecture.

Palmer, Nick. “Machine Learning in Games Development.” Machine Learning in Games Development. N.p., n.d. Web. 20 June 2014. <http://ai-depot.com/GameAI/Learning.html>.

Graepel, Thore. “Learning to Play: Machine Learning in Games.” Microsoft Research Cambridge, n.d. Web. 20 June 2014. <http://www.admin.cam.ac.uk/offices/research/documents/local/events/downloads/tm/06_ThoreGraepel.pdf>.

Brownlee, Jason. “A Tour of Machine Learning Algorithms.” Machine Learning Mastery. N.p., 25 Nov. 2013. Web. 2 June 2014. <http://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/>.

Hagelback, J.; Johansson, S.J. “Measuring Player Experience on Runtime Dynamic Difficulty Scaling in an RTS Game.” IEEE Symposium on Computational Intelligence and Games (CIG 2009), 7-10 Sept. 2009, pp. 46-52.

Tan, Chin Hiong; Tan, Kay Chen; Tay, A. “Dynamic Game Difficulty Scaling Using Adaptive Behavior-Based AI.” IEEE Transactions on Computational Intelligence and AI in Games, vol. 3, no. 4, Dec. 2011, pp. 289-301.

Tang, H.; Tan, C.H.; Tan, K.C.; Tay, A. “Neural Network versus Behavior Based Approach in Simulated Car Racing Game.” IEEE Workshop on Evolving and Self-Developing Intelligent Systems (ESDIS ’09), 30 Mar.-2 Apr. 2009, pp. 58-65.

 
