Platform change-log

Welcome to the platform change log. Here you will find the latest information about the main changes, updates and new features that our team is working hard on. Stay tuned, and subscribe to our newsletter!

Version 1.25.0 > December 30th, 2020

New Blu/Ref data: Sessions Info

In order to increase player retention, the goal assigned to Machine Learning is to increase session length, reduce the inter-session span, and increase the number of sessions played per day per player.

This information is now displayed in the SESSIONS dashboard, both for Blu active players (with personalization) and Ref players (no personalization).

In these 3 histograms (demo game), you can see that:

  1. there are more Blu (personalized) players with longer session lengths (more than 8 minutes)
  2. there are more Blu (personalized) players with shorter inter-session spans (less than 20 minutes)
  3. there are more Blu (personalized) players with 3 or more sessions per day

These 3 histograms are available in the new "sessions" option (for each game), which also displays the Sessions funnel.

New Monetization dashboard

3 monetization events are now available in the SDK:

  1. purchaseConfirmed(): the player made an In-App Purchase (with real money)
  2. rewardVideoWatched(): the player watched a video ad in exchange for a reward
  3. interstitialAdDisplayed(): the game displayed a full-screen ad to the player
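As a rough illustration only (the real SDK calls are native/Unity, and the class name and payloads below are hypothetical), the three events map to calls like:

```python
# Hypothetical in-memory tracker mirroring the three SDK
# monetization events; illustrative only, not the SDK API.

class MonetizationTracker:
    def __init__(self):
        self.events = []

    def purchase_confirmed(self, sku, price_usd):
        # the player made an In-App Purchase with real money
        self.events.append(("purchaseConfirmed", sku, price_usd))

    def reward_video_watched(self, reward_id):
        # the player watched a video ad in exchange for a reward
        self.events.append(("rewardVideoWatched", reward_id))

    def interstitial_ad_displayed(self):
        # the game displayed a full-screen ad to the player
        self.events.append(("interstitialAdDisplayed",))
```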

A new "Monetization" Dashboard displays 6 metrics, per day:

  1. Interstitial ads Use Rate: percentage of DAU served with at least one ad
  2. Interstitial ads displayed per user (average per day)
  3. Rewarded videos per user (average per day)
  4. Small In-App Purchases made per 1000 users (average per day)
  5. Medium In-App Purchases made per 1000 users (average per day)
  6. High In-App Purchases made per 1000 users (average per day)
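A minimal sketch of how these six daily metrics relate to the raw event counts (the field names are assumptions, not the platform's actual schema):

```python
# Illustrative computation of the six daily Monetization metrics
# from per-day event counts; names are hypothetical.

def monetization_metrics(dau, users_with_ad, interstitials, rewarded,
                         small_iap, medium_iap, high_iap):
    return {
        # percentage of DAU served with at least one interstitial ad
        "interstitial_use_rate": 100.0 * users_with_ad / dau,
        "interstitials_per_user": interstitials / dau,
        "rewarded_videos_per_user": rewarded / dau,
        # purchases normalized per 1000 daily users
        "small_iap_per_1000": 1000.0 * small_iap / dau,
        "medium_iap_per_1000": 1000.0 * medium_iap / dau,
        "high_iap_per_1000": 1000.0 * high_iap / dau,
    }
```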

This new dashboard will allow you to check the efficiency of all the player personalization done on a game (difficulty tuning, ad pressure...), targeting the bottom line: increased revenue.

Version 1.21.0 > November 30th, 2020

Improved Sessions Funnel

The sessions funnel is a more accurate measurement of your players' activity in your game. With the classic retention rate, a player doing 4 sessions on day one, with several stages played per session, is counted as 1 in day-1 retention, the same as a player doing only one session with 1 stage played.
For better A/B testing between REF players (no personalization) and BLU players (personalization active), the improvement is now an estimation of the difference, with confidence intervals based on Bayesian statistics.
We also display, for each improvement, the reliability of this estimation.
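As a sketch of the underlying idea (the platform's exact model is not detailed here; a Beta-Bernoulli posterior with a uniform prior and Monte Carlo sampling is an assumption for illustration), the improvement on a binary metric such as day-1 retention could be estimated like this:

```python
import random

# Illustrative Bayesian estimate of BLU - REF improvement on a
# binary metric (e.g. day-1 retention), with a 95% credible
# interval and a "reliability" score.

def improvement_estimate(ref_success, ref_total, blu_success, blu_total,
                         draws=20000, seed=0):
    rng = random.Random(seed)
    diffs = []
    for _ in range(draws):
        # Beta(successes+1, failures+1) posterior under a uniform prior
        p_ref = rng.betavariate(ref_success + 1, ref_total - ref_success + 1)
        p_blu = rng.betavariate(blu_success + 1, blu_total - blu_success + 1)
        diffs.append(p_blu - p_ref)
    diffs.sort()
    mean = sum(diffs) / draws
    low = diffs[int(0.025 * draws)]    # 95% credible interval bounds
    high = diffs[int(0.975 * draws)]
    # "reliability": posterior probability that BLU beats REF
    p_better = sum(d > 0 for d in diffs) / draws
    return mean, (low, high), p_better
```

For example, 300/1000 retained REF players versus 360/1000 BLU players yields an estimated improvement of about +6 points, with a high probability that BLU really is better.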

Classic retention and retention summary

In addition, we now display new metrics on the portal:
- Classic Retention per day (we still display the Rolling Retention per day)
- Classic Retention SUMMARY (average for the period that you selected)
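A short sketch of the difference between the two retention flavors (the definitions below follow common industry usage; day offsets are counted from the install day):

```python
# Illustrative definitions of Classic vs Rolling retention for day N,
# given the set of day offsets (days after install) on which a player
# logged in.

def classic_retention(login_day_offsets, n):
    # the player came back exactly N days after install
    return n in login_day_offsets

def rolling_retention(login_day_offsets, n):
    # the player came back N days after install, or any later day
    return any(offset >= n for offset in login_day_offsets)

def classic_retention_summary(cohort, n):
    # average Classic Retention over a cohort of players
    retained = sum(classic_retention(offsets, n) for offsets in cohort)
    return retained / len(cohort)
```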

Version 1.20.0 > August 30th, 2020

New monetization events

In the upcoming version of our SDK you will be able to set up 2 new optional events in your game:
- IAPConfirmed(): send it when the player has just bought an in-app purchase.
- rewardedVideoWatched(): send it when the player has just watched an advertising video that they decided to play (in exchange for some reward).
These events reflect a player being engaged in a game through monetization: they populate the player features used by the game's Machine Learning model, thus improving the churn prediction.
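A minimal sketch of how such events could populate per-player features for a churn model, here as simple per-player counts (the feature names and aggregation are illustrative, not the platform's actual feature set):

```python
# Hypothetical feature builder: turn a stream of (player_id, event_name)
# tuples into per-player monetization counts for the churn model.

def build_features(events):
    """events: list of (player_id, event_name) tuples."""
    features = {}
    for player_id, name in events:
        f = features.setdefault(player_id,
                                {"iap_count": 0, "rewarded_count": 0})
        if name == "IAPConfirmed":
            f["iap_count"] += 1
        elif name == "rewardedVideoWatched":
            f["rewarded_count"] += 1
    return features
```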

Version 1.10.0 > June 30th, 2020

Direct personalization mode performs a "Data-Driven Difficulty Analysis" on the stages of your game. The result for each stage can be "Tuning OK" or "Tuning needed". Until now, Player Personalization would only work on the "Tuning OK" stages.
But this can be tedious for studios, because they have to update their game, re-submit... which can take quite a long time.
Moreover, our tests show that doing personalization even on "Tuning needed" stages can improve retention.
So now you have the choice:
- activate only DDDA (Difficulty Analysis), spend time tuning your stages, and when enough stages are "OK", activate RTPP (Player Personalization)
- activate both DDDA and RTPP, and Personalization will be active on all stages with a DDDA feedback, be it "Tuning OK" or "Tuning needed".
This is much more comfortable for a studio, which can benefit much faster from personalization. Later on, when the game is updated, more stages will reach "Tuning OK", and the quality of the personalization will increase!

Version 1.6.10 > April 30th, 2020

March and April have been quite "bumpy" for everybody in most countries, but we managed to continue our tests and developments.

Under the hood!

This update is a "dot dot": no big new client features (besides some UX/UI improvements in the web portal), but a lot of work "under the hood":
- lots of back-end improvements and developments, for better efficiency, security and scalability (thanks to the AWS architects for your great support!)
- SDK improvements (the last version is 1.2.3)
- improvements of the AI algorithms for Difficulty Analysis, based on real data tests with our Beta VIP studios (more precise feedback)
- improvements of the AI algorithms for Player Personalization, including working for IDLE games (you can test it now).

New feature coming soon > the Player Personalization Dashboard

Here is a glimpse of a cool new feature to be released in May: the Player Dashboard. It displays, for each day, the MOOD of your players: the percentage of Personalized Players detected by our Machine Learning as:
- "frustrated" (receive "easier")
- "in the flow" (receive "default")
- "bored" (receive "harder")

Stay tuned !

Version 1.6 > March 5th, 2020

This version contains a lot of "under the hood" improvements in our algorithms, based on our tests with studios and publishers on the platform.

Data-Driven Difficulty Analysis improved

Our tests reveal that sometimes the values that studios put behind "easier" or "harder" have no impact on the players! For instance, no change in the WinRate of the players with easier / default / harder values.

We have greatly improved the quality and precision of the feedback given by the Difficulty Analysis (below is an extract that you can see in the DEMO game):

For instance, you can now see if we have possible feedback on a stage, but not enough "confidence" yet ("Tuning might be OK", with dimmed visuals).

You can also tick "Only Tuning Needed" to display the stages where you can act (feedback given and tuning needed):

Stay tuned, and do not hesitate to contact us for more information!

Version 1.5 > February 5th, 2020

Get feedback on the real effect that your difficulty values have on your players

As you know, in casual and hyper-casual games, you cannot test your gameplay with only your team and some friends. You need to test with real players, but it's complicated to get feedback easily without good tools and a Data Analyst.

A new feature in the Data-Driven Difficulty Analysis will check, for each stage, if your "easier" and "harder" values have a "tangible" effect on players (if not, you get visual feedback on the AI Dashboard for this stage).

This feature is an important prerequisite before checking whether the "default" difficulty is optimal for the overall audience. It is visible in the AI Dashboard.

SDK Debug Mode

This new panel is very useful for your developer to check if the SDK is properly integrated.

If you set "debugMode" in the SDK, compile and run, you will get real-time feedback in the Game > SDK Integration > SDK Debug Mode panel:
- version received, new/returning player
- feedback on the reception of events (order and parameters)
- you can choose which answer you get (easier/default/harder) when you play with this debug build, to feel the impact on the gameplay.

(note: the "Direct Test Mode" has been deprecated, replaced by the SDK Debug Mode).

Other new features

The left sub-menu for a game has 2 groups:
- AI-Dashboard, Funnels & Metrics
- Game setup, SDK integration, etc.

New "Billing & Credits" panel (from the ACCOUNT access up right):
- displays the "Usage Credits" that you got from us to test at no cost
- displays the Account transactions (monthly bills and credits*).

*The Usage Credits you got are automatically applied to your account.

Version 1.4 > JANUARY 5th, 2020

Many improvements in the overall robustness of the AWS-based platform, and in the Machine Learning pipeline and algorithms.

New "Usage & Costs" panel (from the ACCOUNT access up right), so you can see the costs per game every day.

You can give a role (Producer, Developer, Game Designer...) to each user.

Version 1.3 > DECEMBER 5th, 2019

Lots of small changes and improvements based on feedback given by the publishers and studios testing the platform.

"Direct test" Mode

When you work on the easier/default/harder values for your game stages, you might want to test how it "feels" to play the stages in easier or harder difficulty. Using the "Direct Test" mode, you can "force" the answer to one specific value for all players. This will only work in a "development" version of the game: the LIVE version cannot be affected by this "Direct Test" mode.
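The idea can be sketched as follows (the function and flag names are hypothetical; this is not the SDK's actual interface):

```python
# Illustrative "Direct Test" logic: in a development build, force every
# difficulty answer to one fixed value; LIVE builds always get the
# normal (personalized) value.

def difficulty_answer(personalized_value, is_dev_build, forced_value=None):
    if is_dev_build and forced_value in ("easier", "default", "harder"):
        return forced_value
    return personalized_value
```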

Version 1.2 > NOVEMBER 5th, 2019

AWS infrastructure improvements to secure better response time with the games, better scalability of the overall platform and the Machine Learning processes.

Version 1.1 > SEPTEMBER 15th, 2019

Developer feedback on web portal

The Stage Results page shows, for each stage, the percentage of wins/losses/quits, and also how players win or lose (close, large...). The AI-Dashboard indicates when these values are not optimal for the quality of the machine learning and predictions.

Tuning and optimizations

Lots of "under the hood" tuning and optimizations on our AWS architecture and our machine learning models and algorithms.

Version 1.0 > AUGUST 1st, 2019

Online updates

App Store updates are already automatically managed by the platform: when a new game version is detected, it restarts the learning for all the stages that you marked as "updated" (difficulty level changed).

When you are doing "Online updates", i.e. using a server to store difficulty tuning values so that you can change these values without resubmitting the game to the store, you need to inform the platform.

To do so, there is a new "Online Update" button in the AI Dashboard, so you can indicate to the platform that you performed such an update (after marking the impacted stages on the AI Dashboard page, so the learning restarts only for those stages).

Version 0.9 > JULY 22nd, 2019

Team members

As the "owner" of the account, you can now invite some colleagues as "Managers" on your account: analytics manager, game designer, developer for the SDK integration...

If you have several games on the platform, you can restrict each manager's access to 1 or several games. Therefore, if you are a publisher, you can set up several games from different studios, then invite studio members and restrict their access to their own games only.

Pain points

If the platform indicates that stage X is too difficult for your overall player audience (phase 1), but you want to keep stage X like that and you won't change its difficulty tuning, just mark it in the AI Dashboard page, and the platform will consider it as "ok" so player personalization can be activated (easier or harder around your "difficult" default value).

Version 0.8 > MAY 19th, 2019

First BETA release

First version with all the core features:

- basic metrics page (new users, DAU/WAU/MAU, sessions, D1, D3, D7, D15 and D30 retention rates)

- sessions funnels page, stages funnels page, stages results page, all with 2 columns: for "active" players (going through) and for "passive" players (not going through)

- AI Dashboard page, with all played stages, and for each stage, global tuning feedbacks (phase 1), and player personalization activated if stage tuning is OK (phase 2).

- iOS and Android SDK, in native and Unity versions.