What is artificial intelligence (AI)? Patrick H. Winston says that perception, thinking, and action are the general terms that define an AI-based system. This system needs to be able to perceive a given environment, process the observed and latent structures in the environment, and decide on a reasonable (optimal) action given the environment and an objective.
As everywhere else, AI can be used in gaming in various ways. We'll break its uses down into two fields here, which we'd like to call "the fun way" and "the business way": the former covers the in-game uses of AI, while the latter covers use cases in marketing and monetization.
Making “Fun” with AI
Games can be classified along various taxonomies, such as zero-sum (versus) games, where one party's win is the other's loss, or multiplayer-cooperative games, where two or more players work in harmony as teammates, usually against one or more opponents. There are also taxonomies by game type: action, strategy, puzzle, arcade, and so on. One more taxonomy can be formed based on the information available to a player: full-information and partial-information settings. In the first, the game state is fully observable (as in chess), whereas in the second, a player gets only a partial picture of the status of the game (as in poker, or the initial stages of Red Alert 2 and Age of Empires 2).
Bots are built from sets of algorithms that carry out their designated tasks. There are many different types of bots designed to accomplish a wide variety of tasks, and gaming is no different: different types of games may require different bot designs. However, we can divide bots into two main categories: script-based and AI-based. Script-based bots are generally designed with a specific scenario in mind and give rule-based responses to that scenario. They were once widely used in MMORPGs to level up with ease ("if health drops, use a potion", or "detect the closest enemy and run through the list of attacks"). They remain really useful for tutorials and as in-game helpers. But for a wide variety of gameplay, especially human-like gameplay, they are not effective at all. AI-based bots address this issue by training a program through trial and error: the program receives a reward when it does a good job and a penalty when it makes a mistake. The question is, how can we enumerate these strategies, and what is the general procedure of this training? The first part of this blog post focuses on AI-based bots.
A brief history of AI in games
Before explaining further how you can use AI-based bots in gaming, let's take a look at the use of AI in games, which has been gaining popularity since the late '90s. Below are the most renowned examples of AI usage in the history of gaming.
Deep Blue on Chess, 1997
Developed by IBM, a chess program called Deep Blue, built and tuned by computer scientists and chess experts, challenged grandmaster Garry Kasparov. In 1996, Kasparov won a six-game match. After almost a year of retraining and fine-tuning, Deep Blue won a second six-game match in 1997. This was the first time a world champion lost a match against a computer under tournament conditions.
Monte Carlo Tree Search (MCTS) for Go, 2006
A program called CrazyStone showed great performance in the game of Go on a 9×9 board using a new technique called Monte Carlo Tree Search. A later program called MoGo then achieved master level on the 9×9 board.
Deep-Q-Learning on Atari Games, 2013
In 2012, something revolutionary happened: the ImageNet image-classification contest was won by a deep neural network (AlexNet), an architecture previously thought to be very hard to train. Since then, the deep learning phenomenon has touched almost every corner of machine learning, from image recognition to speech and natural language processing, and the gaming industry took its fair share of this revolution. In 2013, DeepMind presented an algorithm called Deep Q-Learning at the NeurIPS (then NIPS) conference, tackling the problem of human-like Atari play with a new, ground-breaking idea: to choose an appropriate action for a given state, they trained a convolutional neural network on the game frames. By letting it optimize its gameplay over time, this reinforcement learning agent showed performance no previous algorithm could match. Atari play has kept improving since then, and in 2020 a new DeepMind algorithm called Agent57 outperformed the standard human benchmark on all 57 Atari games.
AlphaGo on Go, 2016
AlphaGo is the first computer program to defeat a professional human Go player, the first to defeat a Go world champion, and arguably the strongest Go player in history.
AlphaGo first learns under the supervision of example games, then self-plays for the equivalent of thousands of human lifetimes. It combines the Monte Carlo Tree Search idea with neural networks that predict the next move and the probability of winning. Go experts were amazed by some of the moves AlphaGo made during its match with Lee Sedol.
AlphaZero (on chess, Go and shogi), 2017
AlphaZero was introduced in late 2017. It is a single system that taught itself from scratch how to master the games of chess, shogi (Japanese chess), and Go, beating a world-champion program in each case.
While these are the most renowned examples, today AI's place in gaming is not restricted to computer games. There are various ways to use AI in games, whether on PC, mobile, consoles, or other platforms.
How to utilize AI while building bots?
There are various techniques you can use while building bots for your game, depending on the game type, the goal of the bot, the use case, and other variables. Let’s take a look at some of them together.
Game Tree Search
If the game is turn-based, this technique is quite useful. The main idea is to simulate the future states of the game as a tree (an exhaustive search), then choose the move on the branch with the highest winning probability. The main thing to consider is that the state space generated by the game must be small enough to search feasibly.
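As an illustration, here is a minimal tree search on a toy turn-based game of our own choosing (a subtraction game, picked because its full game tree is tiny enough for exhaustive search): n tokens are on the table, each turn a player removes 1 or 2, and whoever takes the last token wins.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def minimax(n):
    """Score n tokens from the perspective of the player about to move:
    +1 = forced win, -1 = forced loss."""
    if n == 0:
        return -1  # the previous player took the last token, so we already lost
    # Our score is the best, over our moves, of the negated opponent score.
    return max(-minimax(n - k) for k in (1, 2) if k <= n)

def best_move(n):
    """Pick the move that leaves the opponent in the worst position."""
    return max((k for k in (1, 2) if k <= n), key=lambda k: -minimax(n - k))
```

In this game, positions that are multiples of 3 are forced losses for the player to move, and the search discovers that by itself.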
Monte Carlo Tree Search
Monte Carlo methods use random sampling to approximate the statistics of a general population. In our setup, this means the following: when the search space is too large to explore exhaustively, randomly sample from the most promising branches to explore future states, then decide based on those samples. The probability of winning from a sampled branch is estimated by playing the games out.
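To make the sampling idea concrete, here is a flat Monte Carlo evaluator (random rollouts without the tree-building part of a full MCTS implementation) on a toy subtraction game of our own: remove 1 or 2 of n tokens; whoever takes the last token wins.

```python
import random

def rollout(n):
    """Play randomly to the end; return 1 if the player to move from n wins."""
    to_move = 0                      # 0 = the player we are scoring
    while n > 0:
        n -= random.choice([k for k in (1, 2) if k <= n])
        to_move ^= 1
    # The player who just moved took the last token and won.
    return 1 if to_move == 1 else 0

def mc_best_move(n, n_sims=2000):
    """Estimate each move's win rate by random sampling and pick the best."""
    win_rates = {}
    for k in (1, 2):
        if k > n:
            continue
        # After our move it is the opponent's turn from n - k tokens.
        opp_wins = sum(rollout(n - k) for _ in range(n_sims))
        win_rates[k] = 1 - opp_wins / n_sims
    return max(win_rates, key=win_rates.get)
```

With enough rollouts, the estimated win rates converge toward the true values, so the sampled decision agrees with what an exhaustive search would pick.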
Supervised Learning from Recorded Gameplays
If you have recorded previous gameplays (states and the actions taken in them), then a sequential model (e.g. a Hidden Markov Model, LSTM, or GRU) can be built to predict an action or a sequence of actions for a given state. This approach comes at relatively low cost if gameplays are available, but keep in mind that if a game mechanic changes, the gameplay history must be rebuilt.
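As a minimal stand-in for such a sequential model, here is a first-order frequency model over hypothetical logged (state, action) pairs; the state and action labels are invented for illustration, and a production system would use an HMM or an LSTM/GRU as noted above.

```python
from collections import Counter, defaultdict

def fit(gameplays):
    """gameplays: list of (state, action) pairs from logged sessions."""
    counts = defaultdict(Counter)
    for state, action in gameplays:
        counts[state][action] += 1
    return counts

def predict(counts, state, default="idle"):
    """Most frequent human action in this state, or a default if unseen."""
    return counts[state].most_common(1)[0][0] if counts[state] else default

# Hypothetical logs: what human players did in each observed state.
logs = [("low_health", "use_potion"), ("enemy_near", "attack"),
        ("low_health", "use_potion"), ("low_health", "flee"),
        ("enemy_near", "attack")]
model = fit(logs)
```

The same "fit on logs, predict the human-like action" shape carries over to the neural sequence models; only the model class changes.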
Reinforcement Learning
This is the approach where agents self-play in a simulated environment, guided by the rewards defined for actions in states. An algorithm is then used to optimize the gameplay of the bot; Q-learning, UCB, and UCT are a few examples of such algorithms.
One nice thing about reinforcement learning is that it is adaptive: you can build varying styles of gameplay (using stochasticity in the input), and mechanism changes mostly require only retraining (generally with no code change to the learning part). On the other hand, rewards need to be carefully crafted, a simulator of the gameplay usually has to be built, and the computational power these algorithms require is generally high. There is also the long-standing exploration-vs-exploitation dilemma: the tradeoff between trying out new actions and using the information at hand to optimize the objective (reward/penalty).
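To make this concrete, here is a tabular Q-learning sketch on a toy environment of our own (a 5-cell track where the agent must walk from cell 0 to the goal at cell 4), with epsilon-greedy action selection handling the exploration-vs-exploitation tradeoff.

```python
import random
from collections import defaultdict

ACTIONS = (-1, +1)                         # move left / move right

def step(state, action):
    """Environment: goal at cell 4 pays +1, every other step costs -0.01."""
    nxt = max(0, min(4, state + action))
    return nxt, (1.0 if nxt == 4 else -0.01), nxt == 4

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    q = defaultdict(float)                 # (state, action) -> estimated value
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: explore with probability eps, otherwise exploit.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q[(s, b)])
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda b: q[(s, b)]) for s in range(4)]
```

After training, the greedy policy walks right at every cell, which is the optimal behavior for this toy track; the same loop, with a neural network replacing the table, is the skeleton of Deep Q-Learning.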
The most important part of designing such a learning environment is the rewards: the after-effects of a reward change generally cannot be estimated in advance and are instead settled by trial and error. The other important part is adjusting the difficulty of the bot. You can optimize its gameplay, but deriving an easy/medium/hard version of a bot is not trivial. Various techniques can be employed, such as adding unseen noise to the input to make things harder for the bot during gameplay, or stochastically replacing the chosen action with a random one.
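The action-replacement trick for difficulty levels can be sketched directly; `policy`, `actions`, and the blunder rates below are hypothetical stand-ins for your trained agent, its action space, and your tuning.

```python
import random

# Blunder rates per difficulty level (illustrative values only).
DIFFICULTY = {"easy": 0.5, "medium": 0.25, "hard": 0.0}

def play_action(policy, actions, state, difficulty, rng=random):
    """Return the policy's action, or a random 'blunder' with some probability."""
    if rng.random() < DIFFICULTY[difficulty]:
        return rng.choice(actions)       # deliberate mistake to ease difficulty
    return policy(state)                 # the trained bot's real choice
```

A single trained bot thus yields a whole ladder of opponents without any retraining.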
A new trend has been reigning in the reinforcement learning world since 2013, which is deep reinforcement learning. Although the main idea is nearly the same, in deep reinforcement learning, we can utilize deep neural networks for value estimations and policy decisions. Deep-Q-Learning, AlphaGo, and AlphaZero are the main examples of deep reinforcement learning.
Use Cases for AI-based Bots
In the following cases, you can test whether bots bring better results for your games.
- Increasing player engagement
You can challenge the player more by utilizing advanced bots, or adjust the bot's level to match the player.
- Retention optimization
Utilizing bots in the game can improve the overall retention of your players.
- Player matching
During idle times, you can field human-like bots so that players can still find a match.
- Testing and improvement
One benefit of RL-based bots is that they generally try many different action sequences, some of which can lead to easy scores or wins. If the game mechanics have a "bug" that allows an easy score from a specific point or state/action pair, it can be hard for a human to detect it by trial and error. But since RL agents autonomously try out many different combinations, their probability of detecting it is much higher than a human tester's.
- Just for the fun of it
Since you can do it, do it, and have some fun watching your robot get the job done. 🙂
Future of AI & Games
As AI is actively used in gaming and becoming more and more popular, it is safe to say that its usage in games will only increase in the future.
A couple of areas where AI could be utilized in future games:
- Utilization of AI for arcade games that require continuous level design. For instance, AI can be used to understand the player's style and arrange the building blocks of a level so that it becomes more challenging for the player, while removing the manual labor of crafting levels.
- Design and implementation of AI-based bots that adapt to the player's gameplay. An AI-based bot that adjusts its strategy to the user's playing style can challenge a user more than a generic AI, and NPCs could develop defenses against the user's favorite strategies.
- Federated learning: the concept of mobile phones collaboratively learning a shared model. Without exchanging data between devices, federated learning builds a collaborative machine learning model over the distributed data by sharing mathematical updates to the model rather than the data itself. In a privacy-preserving world, following these trends is becoming a must.
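The update-averaging core of federated learning can be sketched in a few lines; the toy weight vectors and client sample counts below are made up for illustration, and real systems (e.g. the FedAvg algorithm) add local training loops and secure aggregation on top.

```python
def federated_average(client_updates):
    """client_updates: list of (weights, n_samples) per device.
    Returns the sample-count-weighted average of the model weights."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [sum(w[i] * n for w, n in client_updates) / total
            for i in range(dim)]

# Two devices share only locally trained weights, never their raw data.
avg = federated_average([([1.0, 2.0], 100), ([3.0, 4.0], 300)])
```

The server thus learns a shared model while each device's gameplay data stays on the device.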
Making “Business” with AI
The statistical part of AI can be really handy when it comes to business-level decision making.
When it comes to making business decisions and marketing efforts:
- Marketing ROI across thousands of user cohorts has a direct impact on profits
- Customer Lifetime Value (CLTV) is critical for optimizing marketing spend
- Legacy systems for predicting CLTV are highly manual, their accuracy is unpredictable, and they require significant human supervision
- An individual could have a hard time managing more than 25 ad-network / country / device combinations for marketing campaigns
AI can play a big part in marketing actions such as player acquisition, in-game monetization, and campaign optimization, as well as in-game data collection and data engineering.
Retention and Churn Analysis
By using AI for your retention and churn analysis, you can build a model that, based on a player's initial events, assigns the player to a cluster, then uses that cluster's retention information for a probabilistic prediction of retention, or estimates retention directly from the initial events.
You can understand which types of users churn and build real-time strategies, such as offering discount packages or in-game bonuses, to prevent possible churns. However, keep in mind that behavior varies widely among players: a group can have large variance, and outliers can drive your estimates off. Understanding these phenomena and building a strategy around them is generally a data science task.
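A minimal sketch of this segment-then-predict idea, assuming a made-up early feature (day-1 session count) and made-up player history: bucket players by the early feature, then use each bucket's historical churn rate as the prediction for new players landing in that bucket.

```python
from collections import defaultdict

def fit_churn_by_segment(history):
    """history: list of (day1_sessions, churned) pairs from past players.
    Returns the churn rate per segment."""
    churned = defaultdict(int)
    totals = defaultdict(int)
    for sessions, did_churn in history:
        seg = "low" if sessions <= 1 else "high"   # crude two-bucket segmentation
        totals[seg] += 1
        churned[seg] += int(did_churn)
    return {seg: churned[seg] / totals[seg] for seg in totals}

# Invented history: low-engagement players churned more often.
rates = fit_churn_by_segment([(1, True), (1, True), (1, False),
                              (5, False), (4, False), (6, True)])
```

A production pipeline would replace the hand-made buckets with a learned clustering over many initial events, but the prediction logic has the same shape.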
Lifetime Value (LTV) Prediction
LTV estimation is critical for campaigns in general. Ideally, when a change that affects revenue behavior is made, you want to foresee its future impact as soon as possible. That is why statistical inference and machine learning are used to boost the accuracy of LTV predictions from the available revenue information (from day 1, day 3, and day 7 out to day 30 and day 360). Rather than relying only on point estimates, it is more informative to build a statistical model and understand the prediction's distribution and confidence.
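As a minimal sketch of such a statistical model, the following estimates a day-7 to day-30 revenue multiplier from historical cohorts and reports a spread alongside the point estimate; the cohort figures are invented for illustration.

```python
import statistics

def d30_multiplier(cohorts):
    """cohorts: list of (d7_revenue, d30_revenue) for past cohorts.
    Returns the mean and standard deviation of the d30/d7 ratio."""
    ratios = [d30 / d7 for d7, d30 in cohorts if d7 > 0]
    return statistics.mean(ratios), statistics.stdev(ratios)

# Invented historical cohorts.
history = [(100.0, 250.0), (80.0, 184.0), (120.0, 312.0)]
mean_mult, sd_mult = d30_multiplier(history)

def predict_d30(d7_revenue):
    """Point estimate plus a crude +/- 1 sd band for a new cohort."""
    return (d7_revenue * mean_mult,
            d7_revenue * (mean_mult - sd_mult),
            d7_revenue * (mean_mult + sd_mult))
```

Reporting the band alongside the point estimate is exactly the "distribution and confidence" framing above; a real model would also regress on cohort features rather than a single ratio.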
While optimizing campaigns, AI can help you allocate your budget wisely by basing calculations on ROAS, installs, and other relevant metrics, and by utilizing methods like Thompson Sampling. You can also analyze your bidding strategy and optimize the bids for a campaign using LTV and ROAS estimates. And instead of doing multi-network, multi-channel, multi-targeting analyses and optimizations manually, it is wiser to optimize the strategy with computerized analysis and recommendations.
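Here is a sketch of how Thompson Sampling can drive budget allocation, assuming hypothetical per-channel conversion counts: each channel's conversion rate gets a Beta posterior, and each unit of budget goes to the channel with the highest posterior sample.

```python
import random

def thompson_pick(channels, rng=random):
    """channels: dict name -> (conversions, non_conversions).
    Sample each channel's Beta posterior and pick the best draw."""
    samples = {name: rng.betavariate(conv + 1, miss + 1)
               for name, (conv, miss) in channels.items()}
    return max(samples, key=samples.get)

# Invented channel stats: observed conversions vs. non-conversions.
stats = {"network_a": (50, 950),    # ~5% conversion rate
         "network_b": (90, 910),    # ~9%
         "network_c": (10, 990)}    # ~1%
```

Because the pick is a posterior sample rather than a fixed argmax, uncertain channels still get occasional budget (exploration) while the strongest channel gets most of it (exploitation).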
In addition, you can understand the effect of channels on organic installs and analyze the effectiveness of campaign creatives. After the iOS 14.5+ updates and the resulting changes to the data available for mobile marketing, the help of AI and data science is even more important than before. Since revenues and installs can no longer be observed as directly and clearly as before, we must rely on statistical inference more than ever, which requires expertise in the field. Our guess is that AI-based systems working the statistics behind the veil will play a vital role in supporting decision-making for campaign optimization.
One last thing to add here would be to emphasize the importance of A/B testing. Regardless of what you do, from game bots to creative optimizations and bidding, continuous A/B testing should be adopted to get the best results from your efforts.
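For completeness, here is the kind of two-proportion z-test that continuous A/B testing relies on; the conversion numbers are illustrative only, and |z| > 1.96 roughly corresponds to significance at the 5% level.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative test: control converts 5%, variant 7%, 2000 users each.
z = two_proportion_z(conv_a=100, n_a=2000, conv_b=140, n_b=2000)
```

Whether the change is a new bot difficulty curve or a new creative, the same test structure tells you if the observed lift is more than noise.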
How can UAhero help you boost your marketing metrics with AI?
UAhero brings the power of artificial intelligence and machine learning to the world of user acquisition, saving precious time while optimizing marketing spend.
The machine learning algorithms take into account all the complexities game marketers face and produce easy-to-implement decisions. Because the algorithms do the hard work, there is no limit on how frequently they can run, which means there is no need to wait a week before making changes. If there is an opportunity within a certain audience on a network, the bid cap can be raised immediately to exploit it. And if value is leaking from one of your campaigns, UAhero automatically recommends shifting the budget elsewhere, so you can stop the leakage right away.
The interdependency of the parameters that go into making optimal decisions also makes UAhero an ideal solution for user acquisition campaign optimization, because this is what artificial intelligence and machine learning algorithms excel at. Where the human brain reaches its limit in comprehending large amounts of interdependent information, the algorithms, built by experts using techniques specific to the optimization problem at hand, come to the rescue and produce the best decisions.