Basic Learning Prototypes

My next challenge is to explore the implementation of machine learning applications in JavaScript. I have made several prototypes, each of which explores a different style of learning or implementation. Although these prototypes are all very simple, they demonstrate the basic ideas of each algorithm.

1.0 – Basic Supervised

This prototype explores a very simple classification system that uses supervised learning. The application generates training data based on user-defined parameters and then feeds it to the learning system. The system uses the data to learn an acceptable space and to define a surrounding space of uncertainty.
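To make the idea concrete, here is a minimal hypothetical sketch (not the prototype's actual code) using one-dimensional samples: the acceptable space is the interval covered by the accepted training samples, and a margin around its edges forms the space of uncertainty.

```javascript
// Learn an "acceptable" interval from labelled 1-D samples, plus an
// uncertainty margin around its edges (margin size is an assumption).
function train(samples) {
  const accepted = samples.filter(s => s.label === 'ok').map(s => s.value);
  const min = Math.min(...accepted);
  const max = Math.max(...accepted);
  const margin = (max - min) * 0.1; // hypothetical 10% uncertainty band
  return { min, max, margin };
}

function classify(model, value) {
  if (value >= model.min && value <= model.max) return 'acceptable';
  if (value >= model.min - model.margin && value <= model.max + model.margin)
    return 'uncertain';
  return 'rejected';
}
```

A value inside the learned interval is accepted, one just outside it is flagged as uncertain, and anything beyond the margin is rejected.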

This prototype can be found here:

2.0 – Simple Decision Tree

Exploring a similar classification problem, this example creates a decision tree which it can use to classify new data points. The tree used in this prototype is a binary tree which looks at one variable per node.
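For illustration, a binary tree of this kind, with one variable tested per node, might look like the following (hypothetical data, variables, and thresholds):

```javascript
// Each internal node tests a single variable against a threshold;
// leaves carry a class label.
const tree = {
  variable: 'x', threshold: 5,
  left:  { label: 'A' },                        // taken when x < 5
  right: {                                      // taken when x >= 5
    variable: 'y', threshold: 2,
    left:  { label: 'B' },
    right: { label: 'C' }
  }
};

function classify(node, point) {
  if (node.label) return node.label;            // leaf reached
  const next = point[node.variable] < node.threshold ? node.left : node.right;
  return classify(next, point);
}
```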

This prototype can be found here:

3.0 – Player Modelling

This prototype explores the idea of player modelling through the game of tic-tac-toe. The system plays several matches with you and builds a model of your play techniques based on three measures: how you start, where you play, and your common winning lines.
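A rough sketch of how such a model might accumulate those three measures over finished matches (hypothetical code, not the prototype's; board cells are indexed 0-8):

```javascript
// `moves` is the player's move list for one match, in order;
// `winLine` is the completed winning line (or null if no win).
function updateModel(model, moves, winLine) {
  // measure 1: how the player starts
  model.openings[moves[0]] = (model.openings[moves[0]] || 0) + 1;
  // measure 2: where the player plays overall
  for (const cell of moves)
    model.cellCounts[cell] = (model.cellCounts[cell] || 0) + 1;
  // measure 3: the player's common winning lines
  if (winLine) {
    const key = winLine.join('-');
    model.winLines[key] = (model.winLines[key] || 0) + 1;
  }
  return model;
}

const model = { openings: {}, cellCounts: {}, winLines: {} };
```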

ML in Games – Practical Examples

This post will discuss some existing games which employ machine learning techniques and, as much as possible, how they do it.

1.0 – Creatures

Creatures is a video game series which came out in the 1990s and uses neural networks and biochemistry to create artificial life. The creatures in the game, called Norns, are taught how to act and live by the player through interaction, and they breed using simulated DNA.

1.1 – Neural Network & Learning

The neural network used for each creature was designed to be efficient and dynamic: it can be mutated and recombined during reproduction while still maintaining a good, if not better, level of function. Grand and Cliff define many different types of input data, as well as six parameters for each neuron: input type, input gain, rest state, relaxation rate, threshold (the output remains zero until the threshold is passed), and a state-variable rule (used to compute the value of the neuron from one or more input signals) (Grand and Cliff, 1998). The neurons are grouped into different lobes which perform various decision-making and logic tasks to control the creature’s behaviour.

The machine learning really comes into play in the concept and decision lobes. The concept lobe contains neurons which ‘watch’ four input signals from the creature’s sensory system. These neurons fire when all four inputs are activated, and are given new connections when all four drop to zero. Another level of algorithms attempts to keep a pool of repeatedly firing neurons connected for a long time, while also ensuring that neurons remain available to be committed to new connections (Grand and Cliff, 1998).

The decision layer then has one cell for each possible action, and these cells have many connections to signals from the concept layer. Both positive and negative signals come in and are summed by the cell, and the decision cell with the highest value is taken as the creature’s choice of action. These connections also retain a susceptibility based on their current influence, and are adjusted when positive or negative feedback is received. Connections can also be deemed insufficiently influential to their connected action, in which case they seek out new sources of input from the concept layer (Grand and Cliff, 1998).
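The decision layer's pick-the-highest mechanism can be sketched very simply (hypothetical code, not the game's implementation):

```javascript
// Each decision cell sums its incoming positive and negative signals
// from the concept layer; the highest-valued cell wins the action choice.
function chooseAction(decisionCells) {
  let best = null;
  for (const [action, signals] of Object.entries(decisionCells)) {
    const sum = signals.reduce((a, b) => a + b, 0);
    if (best === null || sum > best.sum) best = { action, sum };
  }
  return best.action;
}
```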

1.2 – DNA & Reproduction

Each creature has a genome which defines the way it looks as well as influencing some of the internal structure of its brain and decision process. When creatures are bred, the genes of both parents are spliced and mixed together with a small amount of random mutation (Grand and Cliff, 1998). Although this process isn’t exactly machine learning, it does require each creature to survive past puberty, and it allows the player to keep favorable traits alive and produce creatures which resemble their parents both visually and behaviorally.
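The splice-and-mutate step can be sketched roughly like this (a generic crossover/mutation sketch; the game's actual DNA format is far richer):

```javascript
// Combine two parent genomes gene-by-gene, with a small chance of a
// random mutation per gene (genes here are just numbers for simplicity).
function breed(parentA, parentB, mutationRate = 0.01) {
  return parentA.map((geneA, i) => {
    let gene = Math.random() < 0.5 ? geneA : parentB[i]; // splice
    if (Math.random() < mutationRate)
      gene += (Math.random() - 0.5) * 0.2;               // small mutation
    return gene;
  });
}
```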

2.0 – Black & White

In Black & White the player acts as a god, gathering followers and helping them survive and thrive. At the beginning of the game, the player selects a creature who will help them along the way. The creature learns how the player plays and constructs decision trees based on feedback from the player and observations of the world (Wexler, 2002). Details on the implementation of the creature AI in Black & White are scarce, but I gather that a very simple neural network was used in conjunction with the aforementioned decision trees to create the behaviour (Champandard, 2007).

3.0 – Forza Motorsport

As mentioned in my post on ML for player modelling, the Forza Motorsport series implements machine learning in their Drivatar™ technology.

4.0 – City Conquest

City Conquest is a Kickstarter game that was successfully funded on April 29, 2012. There is not a whole lot of documentation on this game or how machine learning was used, but an interview with the game’s designer and engineer, Paul Tozour, provides useful insight into the system he calls The Evolver.

As a tower-defense-style game, City Conquest requires a lot of playing and tuning to ensure that the cost of each building/unit is fair and that there are no gameplay strategies or loopholes that can leave one player greatly over- or under-powered relative to the other. The Evolver is a system which uses randomly generated scripts to run game matches in an evolutionary tournament. Each script defines the order of purchase and placement of buildings in the game. After an initial tournament of 400 scripts, The Evolver uses simple processes to breed/combine successful scripts, adding random mutations as well. Tozour could run The Evolver for one or two days straight to try to discover the best strategy in the game. Once the winning script was found, the designers could play it back and see why that method came out on top. Tozour noted that although this method doesn’t replace human gameplay testing, it provides a great development resource at a reasonably low cost and greatly expedites the tuning process (Champandard, 2012).
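A rough sketch of an Evolver-style loop (hypothetical; `playMatch`, `crossover`, and `mutate` stand in for the game's own match simulation and script-breeding logic):

```javascript
// Run a tournament over build scripts, keep the winners, then refill
// the pool by breeding and mutating them for the next generation.
function evolve(scripts, playMatch, mutate, crossover, generations) {
  for (let g = 0; g < generations; g++) {
    // pair scripts off and keep each match's winner
    const winners = [];
    for (let i = 0; i + 1 < scripts.length; i += 2)
      winners.push(playMatch(scripts[i], scripts[i + 1]));
    // breed winners (with mutation) to restore the pool size
    const children = [];
    while (winners.length + children.length < scripts.length) {
      const a = winners[Math.floor(Math.random() * winners.length)];
      const b = winners[Math.floor(Math.random() * winners.length)];
      children.push(mutate(crossover(a, b)));
    }
    scripts = winners.concat(children);
  }
  return scripts;
}
```

With scripts reduced to plain numbers and `playMatch` picking the larger one, each generation keeps the strongest scripts and fills the rest of the pool with their offspring.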


Grand, Steven, and Dave Cliff. “Creatures: Entertainment Software Agents with Artificial Life.” Autonomous Agents and Multi-Agent Systems 1 (1998): 39-57. SpringerLink. Web. 9 July 2014.

Champandard, Alex J. “Evolving with Creatures’ AI: 15 Tricks to Mutate into Your Own Game.” Your Online Hub for Game/AI, 1 Oct. 2007. Web. 10 July 2014.

Wexler, James. “Artificial Intelligence in Games.” Rochester: University of Rochester (2002).

Champandard, Alex J. “Top 10 Most Influential AI Games.” Your Online Hub for Game/AI, 12 Sept. 2007. Web. 11 July 2014.

Champandard, Alex J. “Making Designers Obsolete? Evolution in Game Design.” Your Online Hub for Game/AI, 6 Feb. 2012. Web. 13 July 2014.

ML in Games – Part 5

5.0 Procedural Content Generation & Machine Learning

I’m hoping to find some interesting things here. Going into my final year, I am working on a senior project that involves procedural content and story generation, and I’m hoping that some of the ideas of machine learning may carry over into my own project. The term ‘content’ is rather vague, but for the purposes of this research I will be looking at level/environment generation along with story/plot generation.

5.1 – Novelty Evaluation

Silvester explores some interesting ideas about how to evaluate the novelty of procedurally generated levels in his paper “Using Novelty Search to Generate Video Game Levels”. He discusses how we can evaluate generated content for its novelty directly, or evaluate novelty based on the sequence of actions required to complete a level (“Using Novelty Search to Generate Video Game Levels”). Although his methods do not involve machine learning, I believe there is great potential here for integrating his ideas with a learning system. Perhaps we could solve some of the issues he mentions, such as making generated levels discernible from simple random generation, by implementing a supervised learning system trained ahead of time to create levels that are both novel and enjoyable, based on novelty-search methods and user feedback.

5.2 – Enjoyment Potential

Another fairly straightforward task for a machine learning system in procedural content generation is the evaluation of the potential enjoyment associated with generated content. This also opens the possibility of training a system to create enjoyable content, rather than just evaluating it afterwards. Pedersen, Togelius, and Yannakakis used supervised learning to train an algorithm which aims to generate enjoyable levels for Infinite Mario Bros., an open-source version of Super Mario Bros. They created levels with variations in each of the features that their algorithm would control, and then collected surveys from a test population after each variation was played. This served as training data for their system, which succeeded in identifying what led to an enjoyable experience in a level, though they did express concern that their original data set was too small to achieve truly great results (Pedersen, Togelius, and Yannakakis, 2010).
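Their actual model is more sophisticated, but the core idea (fitting a function from level features to averaged survey scores) can be sketched as a tiny linear model trained by gradient descent; the feature choices and code here are purely hypothetical:

```javascript
// Fit a linear model mapping level feature vectors (e.g. gap count,
// enemy count) to an averaged "fun" rating from player surveys.
function trainFunModel(levels, ratings, steps = 5000, lr = 0.01) {
  const n = levels[0].length;
  let w = new Array(n).fill(0), b = 0;
  for (let s = 0; s < steps; s++) {
    for (let i = 0; i < levels.length; i++) {
      const pred = levels[i].reduce((sum, x, j) => sum + x * w[j], b);
      const err = pred - ratings[i];                   // prediction error
      w = w.map((wj, j) => wj - lr * err * levels[i][j]);
      b -= lr * err;
    }
  }
  // return a predictor for unseen feature vectors
  return feats => feats.reduce((sum, x, j) => sum + x * w[j], b);
}
```

Once trained, the predictor can score candidate levels before they are shown to a player, or guide a generator toward higher-scoring feature combinations.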

5.3 – Story Implementations

As mentioned previously in Part 2, and similar to the above, machine learning can be used to model a player as they play and to feed this model into an interactive story engine to help it choose the arc of the story (Thue, Bulitko, Spetch, and Wasylishen, 2014).

Barber and Kudenko take a different approach to interactive storytelling by basing their generated narratives on dilemmas. Their system, entitled GADIN (Generator of Adaptive Dilemma-based Interactive Narratives), uses information about the story world (including characters), a list of possible actions, and dilemmas to generate story nodes for the viewer to navigate (Barber and Kudenko, 2009). The potential for machine learning here is quite exciting. Beyond incorporating a learning component which collects data about and models the player’s real attributes and traits for use in GADIN, this could also be extended into the massively multiplayer online role-playing game (MMORPG) realm. Such a system could collect metrics about all players in the world and generate interactive dilemma-based narratives which are catered to and involve a larger group of players, or it could perhaps represent a player while they are away, giving out quests to other players based on the metrics collected about them.


Silvester, Jim. “Using Novelty Search to Generate Video Game Levels”.

Pedersen, Christopher, Julian Togelius, and Georgios N. Yannakakis. “Modeling Player Experience for Content Creation.” IEEE Transactions on Computational Intelligence and AI in Games 2.1 (2010): 54-67.

Thue, David, Vadim Bulitko, Marcia Spetch, and Eric Wasylishen. “Interactive Storytelling: A Player Modelling Approach.” Association for the Advancement of Artificial Intelligence. Web. 23 June 2014.

Barber, Heather, and Daniel Kudenko. “Generation of Adaptive Dilemma-Based Interactive Narratives.” IEEE Transactions on Computational Intelligence and AI in Games 1.4 (2009): 309-326.

ML in Games – Part 4

4.0 – ML for Procedural Animation

3D animation poses many challenges for a machine learning system. In games, it is often neither possible nor feasible to pre-animate every possible action for a character, along with how the character should transition between those actions; this is an area where a lot of research has been done. Along the same lines, animation can be a slow process, and systems that increase an animator’s productivity can be equally useful.

4.1 – Animation Blending

I was looking for, and hoping to find, machine learning applications for animation blending (i.e. connecting animations and poses in a natural, fluid way). However, it seems that this type of work is easier with, and better suited to, procedural-type algorithms. A well-implemented procedural algorithm provides a good solution for blending animation and doesn’t generally require any of the benefits that might be gained from a learning system (improvement over time, improvement from user feedback, etc.).

4.2 – Style-Based IK  System

Grochow, Martin, Hertzmann and Popović created a system which could very well make animators a lot more efficient in a production environment. Their style-based system works by learning segments of animation as a ‘style’ (e.g. a baseball pitch). The learning system uses the animation data from the pitch as input and maps the probability of various poses based on the poses within the captured data. An animator can then animate variations of this style using a very small number of IK handles, and the system can interpolate natural poses and animation from the style space (Grochow, Martin, Hertzmann and Popović, 2004). Therefore, instead of moving tens or hundreds of individual controls on a character rig, animators can collect a library of styles for this system and use them to produce realistic animations very quickly.

This is a supervised learning system, as it requires training data to create style spaces and starts with knowledge of what the data represents. The algorithm is a Gaussian Process model: it essentially maps an input x to an output y and then uses its kernel function to measure the similarities between them. This kernel matrix, along with the GP mapping, is used by the learning algorithm to build the 2D space from which other natural poses for variant animations are extrapolated/interpolated. From what I gather, the learning algorithm optimizes the 2D space to fit the given data in a way that allows easy, real-time creation of new poses (Grochow, Martin, Hertzmann and Popović, 2004).
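To make the kernel idea concrete, here is a hypothetical sketch of a standard RBF kernel scoring the similarity of two poses, with the kernel matrix holding all pairwise similarities for the training poses (not the paper's actual kernel or code):

```javascript
// Similarity of two poses (as feature vectors): 1 for identical poses,
// falling off toward 0 as their squared distance grows.
function rbfKernel(a, b, lengthScale = 1.0) {
  const sqDist = a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0);
  return Math.exp(-sqDist / (2 * lengthScale * lengthScale));
}

// Pairwise similarities for a set of training poses.
function kernelMatrix(poses, lengthScale) {
  return poses.map(p => poses.map(q => rbfKernel(p, q, lengthScale)));
}
```

The `lengthScale` parameter controls how quickly similarity decays with distance, i.e. how far apart two poses can be while still influencing each other in the learned space.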

4.3 – Motion Capture Pose Segmentation

Another interesting use for machine learning with respect to animation is in motion capture. I thought this research was worth mentioning because of the section on the style-based IK system above. Using pattern recognition and machine learning processes, we can create a system which automatically looks for and divides motion capture data into distinct actions and poses (Barbič et al., 2004). This type of system, integrated with the style-based IK system, would create an extremely fast and efficient animation pipeline in a production environment.


Grochow, Keith, Steven L. Martin, Aaron Hertzmann, and Zoran Popović. “Style-Based Inverse Kinematics.” ACM Transactions on Graphics (Proceedings of SIGGRAPH 2004), 2004.

Barbič, Jernej, et al. “Segmenting Motion Capture Data into Distinct Behaviors.” Proceedings of the 2004 Graphics Interface Conference. Canadian Human-Computer Communications Society, 2004.