The answer to the question “is A or B better for my players?” is often “it depends: some players like A, others prefer B.” Until now, game designers have had to use A/B testing tools to select the “best choice” for their audience as a whole. Today, machine learning gives game designers the possibility to serve, in real time, A to the players who prefer A and B to the players who prefer B, providing a better experience to everyone.
The key steps in using A/B testing
A/B testing is a useful tool that helps game designers and developers make UX, level design or monetization decisions based on real players’ behaviour. For instance: should the “save me” button be color A or color B to get more players to use this feature?
With an A/B testing solution set up, some players see the button in color A and others in color B. In the A/B testing tool’s portal, you define events and reports that display metrics for both cohorts so you can make a decision (for example: the percentage of players who click the “save me” button when it is available).
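The cohort split described above is typically a deterministic function of the player id, so a player always sees the same variant across sessions. A minimal sketch (the hashing scheme and function names are illustrative assumptions, not a specific A/B testing SDK):

```python
import hashlib

def assign_cohort(player_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a player to a cohort by hashing their id.

    The same player_id always maps to the same variant, so the player
    sees a consistent experience across sessions.
    """
    digest = hashlib.md5(player_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Stable assignment: the same player always lands in the same cohort.
cohort = assign_cohort("player-42")
```

The game would then render the “save me” button in the color matching `cohort` and log a click event tagged with that cohort.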
This amounts to a lot of work for a result that often ends up being difficult to read (a 16% click rate with color A versus 18% with color B, with a 4% margin of error…?).
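That ambiguity can be made concrete with a standard two-proportion z-test. A minimal sketch, assuming a hypothetical 1,000 players per cohort (the sample sizes are invented for illustration):

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for the difference between two click rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)        # pooled click rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 16% vs 18% with 1,000 players per cohort:
z, p_value = two_proportion_z(160, 1000, 180, 1000)
```

With these numbers the p-value lands well above 0.05, i.e. the 2-point gap is not statistically significant at that sample size, which is exactly the “difficult to read” situation described above.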
An example of A/B testing for difficulty tuning
Now let’s say that you tuned the difficulty of a specific stage in your game to a “default” value based on your own tests, tests by people on your team, or feedback from external testers, but you’d like to know whether an easier or harder setting would serve your real players better.
What does “better for the players” mean? You have to define your objective: more players winning the stage? Better retention? Better monetization? Converting more players into payers? Longer sessions?
Using an A/B testing solution, you set up 3 cohorts receiving the easier, the default and the harder difficulty, and code these changes in your game (applied when a player starts the specific stage) based on the cohort. Then you submit the game update and, after a few days or weeks of play, you check the metrics related to your objective (ex: Day-7 retention).
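A minimal sketch of that cohort wiring: the three-way split, the `DIFFICULTY` multipliers and the `base_enemy_hp` parameter are all illustrative assumptions, not part of any particular A/B testing SDK:

```python
import hashlib

# Hypothetical tuning: each cohort scales a stage parameter differently.
DIFFICULTY = {"easier": 0.8, "default": 1.0, "harder": 1.2}

def cohort_for(player_id: str) -> str:
    """Split players deterministically and evenly across the three cohorts."""
    digest = int(hashlib.md5(player_id.encode()).hexdigest(), 16)
    return ("easier", "default", "harder")[digest % 3]

def stage_enemy_hp(player_id: str, base_enemy_hp: float) -> float:
    """Apply the cohort's difficulty multiplier when the stage starts."""
    return base_enemy_hp * DIFFICULTY[cohort_for(player_id)]
```

Every session then reports the usual events (stage start, win, loss) tagged with the player’s cohort so the portal can break down the goal metric per cohort.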
The difficulty of acting on A/B test results
It can be hard to get results you can act on. Suppose that the “default” difficulty is the best one for 40% of your players, “easier” for 30% of them and “harder” for the remaining 30%:
- Cohort “default” should show a Day-7 retention slightly better than the other two cohorts
- Cohorts “easier” and “harder” should both show almost the same, slightly lower Day-7 retention
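To see why the cohorts end up so close, assume (hypothetically) that players served their preferred difficulty retain at 30% on Day 7 while the others retain at 20%:

```python
# Share of players preferring each difficulty (from the example above).
PREFERS = {"easier": 0.30, "default": 0.40, "harder": 0.30}

# Hypothetical Day-7 retention: 30% when a player gets their preferred
# difficulty, 20% otherwise.
MATCH, MISMATCH = 0.30, 0.20

def cohort_retention(served: str) -> float:
    """Blended Day-7 retention when the whole cohort is served one value."""
    return PREFERS[served] * MATCH + (1 - PREFERS[served]) * MISMATCH

# "default" blends to 24.0%; "easier" and "harder" both blend to 23.0%.
```

A single percentage point separates the winning cohort from the others, which is easy to lose inside the margin of error.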
After all this work, you decide to keep your difficulty set to “default”, pleasing only 40% of your players…
And what if you want to tune the difficulty of all your stages? Lots of work!
Using Machine Learning for difficulty tuning
Let’s see how we can work with Data Science to improve retention through difficulty tuning.
As with A/B testing, you code the difficulty changes in your game (applied when a player starts a stage) based on 3 values (“easier”, “default”, “harder”), submit your game, and that’s it!
When the game is live, events about the players’ sessions and behaviors (wins, losses…) are sent to the platform for all stages. After a few days of exploration (for each stage played, players receive “easier”, “default” or “harder”), the platform learns from these players (it builds a machine learning model) against a “reward”, the equivalent of your objective in A/B testing: for instance, increasing retention.
Once the platform has learned from the players, every time a player starts a stage the game sends a request to the platform, which answers in real time with a personalized value (“easier”, “default” or “harder”) chosen to maximize the reward (increased retention) for that particular player.
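The explore/learn/serve loop described above is essentially a multi-armed bandit. A minimal epsilon-greedy sketch, where a coarse context key stands in for the per-player features a real platform would use (all names and the reward scheme are hypothetical):

```python
import random
from collections import defaultdict

ARMS = ("easier", "default", "harder")

class DifficultyBandit:
    """Epsilon-greedy bandit: explore random difficulties for a small share
    of requests, otherwise serve the arm with the best observed reward."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        # context -> arm -> [request count, total reward]
        self.stats = defaultdict(lambda: {arm: [0, 0.0] for arm in ARMS})

    def choose(self, context: str) -> str:
        """Pick a difficulty for this player's request."""
        if random.random() < self.epsilon:       # explore
            return random.choice(ARMS)
        arms = self.stats[context]               # exploit best average reward
        return max(ARMS, key=lambda a: arms[a][1] / arms[a][0] if arms[a][0] else 0.0)

    def update(self, context: str, arm: str, reward: float) -> None:
        """Record the observed reward (e.g. 1.0 if the player came back)."""
        self.stats[context][arm][0] += 1
        self.stats[context][arm][1] += reward

bandit = DifficultyBandit()
choice = bandit.choose(context="novice")          # "easier"/"default"/"harder"
bandit.update("novice", choice, reward=1.0)       # player returned next day
```

A production platform would use richer per-player features and a proper contextual-bandit or supervised model, but the loop is the same: explore a little, learn from the reward, serve the best option in real time.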
Going beyond A/B testing: the benefits of real-time personalization
The first benefit for your team is time saved: you no longer have to pore over reports trying to figure out the best tuning.
The second benefit is an improved player experience: you deliver to every player the tuning that suits them best, which leads to better retention and monetization.
In the example above, 40% of the players will receive “default”, 30% will receive “easier” and the remaining 30% will receive “harder”. This is something impossible to achieve with A/B testing.
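Reusing the hypothetical numbers from the difficulty example (players served their preferred value retain at 30% on Day 7, the rest at 20%), personalization lifts blended retention above any single-option cohort:

```python
# Share of players preferring each difficulty, and hypothetical Day-7
# retention for matched (30%) vs mismatched (20%) players.
SHARES = {"easier": 0.30, "default": 0.40, "harder": 0.30}
MATCH, MISMATCH = 0.30, 0.20

# Personalization: every player receives their preferred value.
personalized = sum(share * MATCH for share in SHARES.values())

# A/B testing outcome: everyone receives the winning option, "default".
one_size_fits_all = SHARES["default"] * MATCH + (1 - SHARES["default"]) * MISMATCH

# personalized blends to 30.0%; one_size_fits_all blends to 24.0%.
```

Under these (invented) numbers, that is a 6-point Day-7 retention gap that no single-cohort rollout can close.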
As you can see, A/B testing can take a lot of time to set up and use, often for very little actionable result in mobile games. Machine learning, on the other hand, is not only much simpler to use but also goes much further by providing real-time player personalization.
To sum up:
| A/B Testing | ML Real-Time Personalization |
| --- | --- |
| The studio programs in game the effect of each option (ex: difficulty of a stage set to easier, harder or default). | The studio programs in game the effect of each option (ex: difficulty of a stage set to easier, harder or default). |
| The studio defines, in the A/B testing portal, an audience cohort that will receive the values. | The studio defines a % of the players who will be used to learn on (“explored players”). |
| The studio has to define or select a “goal metric” (objective) to measure the best results. | The platform manages its own goal (ex: “increase time spent in session” for a difficulty tuning solution). |
| A player in a cohort will always receive the same value. | The platform can explore any option on the players, dynamically. |
| After several days of live play, the studio has to check in a portal the effect of each option on the desired objective (and might need to create special reports). | The platform automatically explores and learns from the players. |
| The studio has to choose one option, which might not be the best one for every player, but all of them will receive it. | The platform tells the game in real time which option is best for the current player. |