Changelog

askblu.ai platform changelog

Welcome to the platform changelog. Here you will find the latest information about the main changes, updates, and new features that our team is working hard on to improve askblu.ai. Stay tuned, and subscribe to our newsletter!

Version 1.6 > March 5th, 2020

This version contains a lot of "under the hood" improvements in our algorithms, based on our tests with studios and publishers on the platform.

Data-Driven Difficulty Analysis improved

Our tests reveal that sometimes the values that studios put behind "easier" or "harder" have no impact on the players! For instance, the players' win rate shows no change between the easier, default, and harder values.
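As a rough illustration of the kind of check this analysis relies on (this is not the actual askblu.ai algorithm), the sketch below compares the win rate observed for each served difficulty value and only flags a stage when the spread is large enough to matter; the names and the threshold are made up for the example.

```kotlin
// Illustrative sketch only: not the askblu.ai algorithm, just the idea of checking
// whether the "easier" / "default" / "harder" values actually move the win rate.

data class StageResults(val wins: Int, val plays: Int) {
    val winRate: Double
        get() = if (plays == 0) 0.0 else wins.toDouble() / plays
}

// Hypothetical check: the difficulty values only "matter" if the win rates they
// produce are spread apart by more than an arbitrary threshold (5 points here).
fun difficultyHasTangibleEffect(
    easier: StageResults,
    default: StageResults,
    harder: StageResults,
    minSpread: Double = 0.05
): Boolean {
    val high = maxOf(easier.winRate, default.winRate, harder.winRate)
    val low = minOf(easier.winRate, default.winRate, harder.winRate)
    return high - low >= minSpread
}

fun main() {
    // 71% / 70% / 69%: the "easier" and "harder" values have no tangible effect.
    val effect = difficultyHasTangibleEffect(
        easier = StageResults(wins = 710, plays = 1000),
        default = StageResults(wins = 700, plays = 1000),
        harder = StageResults(wins = 690, plays = 1000)
    )
    println(if (effect) "tuning values have an impact" else "no tangible effect on players")
}
```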

We greatly improved the quality and precision of the feedback given by the Difficulty Analysis (below is an extract that you can see in the DEMO game):

For instance, you can now see if we have possible feedback on a stage but not enough "confidence" yet ("Tuning might be ok", shown with dimmed visuals).

You can also tick "Only Tuning Needed" to display the stages where you can act (feedback given and tuning needed):

Stay tuned, and do not hesitate to contact us for more information!


Version 1.5 > February 5th, 2020

Get feedback on the real effect that your difficulty values have on your players

As you know, in casual and hyper-casual games, you cannot test your gameplay only with your team and some friends. You need to test with real players, but getting feedback easily is complicated without good tools and a data analyst.

A new feature in the Data-Driven Difficulty Analysis will check, for each stage, if your "easier" and "harder" values have a "tangible" effect on players (if not, you get a visual feedback on the AI Dashboard for this stage).

This feature is an important prerequisite before checking whether the "default" difficulty is optimal for the overall audience. It is visible in the AI Dashboard.

SDK Debug Mode

This new panel is very useful for your developer to check if the SDK is properly integrated.

If you set "debugMode" in the SDK, then compile and run, you will get real-time feedback in the Game > SDK Integration > SDK Debug Mode panel (see the sketch after this list):
- version received, new/returning player
- feedback on the reception of events (order and parameters)
- you can choose which askblu.ai answer you get (easier/default/harder) when you play with this debug build, to feel the impact on the gameplay.
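Since the SDK calls themselves are not shown in this changelog, here is a minimal, purely hypothetical sketch of what enabling the debug mode and forcing an answer could look like; every name below (AskBlu, debugMode, forcedAnswer, getDifficulty) is an assumption for illustration, not the actual SDK API.

```kotlin
// Hypothetical sketch: the real askblu.ai SDK API is not documented here, so every
// name below (AskBlu, debugMode, forcedAnswer, getDifficulty) is an assumption.

enum class Difficulty { EASIER, DEFAULT, HARDER }

object AskBlu {
    // When true, the events you send are echoed in real time to the
    // Game > SDK Integration > SDK Debug Mode panel on the portal.
    var debugMode: Boolean = false

    // Debug builds only: choose which answer you want to receive while playing.
    var forcedAnswer: Difficulty? = null

    fun getDifficulty(stage: Int): Difficulty {
        val forced = forcedAnswer
        if (debugMode && forced != null) return forced
        return Difficulty.DEFAULT // placeholder: the real SDK would query the backend
    }
}

fun configureDebugBuild() {
    AskBlu.debugMode = true                  // compile and run to see the real-time feedback
    AskBlu.forcedAnswer = Difficulty.HARDER  // feel the impact of "harder" on the gameplay
}
```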

(Note: the "Direct Test Mode" has been deprecated and replaced by the SDK Debug Mode.)

Other new features

The left sub-menu for a game has 2 groups:
- AI Dashboard, Funnels & Metrics
- Game Setup, SDK Integration, etc.

New "Billing & Credits" panel (from the ACCOUNT access up right):
- displays the "Usage Credits" that you got from us to test askblu.ai at no cost
- displays the Account transactions (monthly bills and credits*).

*The Usage Credits you got are automatically applied to your account.


Version 1.4 > JANUARY 5th, 2020

Many improvements in the overall robustness of the AWS-based platform, and in the Machine Learning pipeline and algorithms.

New "Usage & Costs" panel  (from the ACCOUNT access up right), so you can see everyday the askblu.ai costs per game.

You can give a role (Producer, Developer, Game Designer...) to each user.


Version 1.3 > DECEMBER 5th, 2019

Lots of small changes and improvements based on feedback given by the publishers and studios testing askblu.ai.

"Direct test" Mode

When you work on the easier/default/harder values for your game stages, you might want to test how it "feels" to play the stages in easier or harder difficulty. Using the "Direct Test" mode, you can "force" askblu.ai to always answer one specific value for all players. This only works in a "development" version of the game: the LIVE version cannot be affected by this "Direct Test" mode.
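To make the development-only guard concrete, here is a small hypothetical sketch (the real mechanism is not documented here, and this mode was later replaced by the SDK Debug Mode, as noted in the 1.5 entry above); the names DirectTest and isDevelopmentBuild are illustrative assumptions.

```kotlin
// Hypothetical sketch of the "Direct Test" idea: force one answer for every player,
// but only in a development build, so the LIVE version can never be affected.
// The names (DirectTest, isDevelopmentBuild) are illustrative assumptions.

enum class ForcedValue { EASIER, DEFAULT, HARDER }

class DirectTest(private val isDevelopmentBuild: Boolean) {
    private var forced: ForcedValue? = null

    fun force(value: ForcedValue) {
        // The forced value is simply ignored outside development builds.
        if (isDevelopmentBuild) forced = value
    }

    fun answer(): ForcedValue =
        forced?.takeIf { isDevelopmentBuild } ?: ForcedValue.DEFAULT // placeholder for the live answer
}
```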


Version 1.2 > NOVEMBER 5th, 2019

AWS infrastructure improvements to secure better response times for the games, and better scalability of the overall platform and the Machine Learning processes.


Version 1.1 > SEPTEMBER 15th, 2019

Developer feedback on web portal

The Stage Results page shows, for each stage, the percentage of wins/losses/quits, and also how players win or lose (close, large...). The AI Dashboard indicates when these values are not optimal for the quality of the machine learning and predictions.

Tuning and optimizations

Lots of "under the hood" tuning and optimizations on our AWS architecture and our machine learning models and algorithms.


Version 1.0 > AUGUST 1st, 2019

Online updates

App Store updates are already automatically managed by askblu.ai: when a new game version is detected, askblu.ai restarts the learning for all the stages that you marked as "updated" (difficulty level changed).

When you are doing "Online updates", using a server to store difficulty tuning values so that you can change these values without resubmitting the game to the store, you need to inform askblu.ai.

To do so, there is a new "Online Update" button in the AI Dashboard so you can indicate to askblu.ai that you performed such an update (after marking the impacted stages on the AI Dashboard page, so askblu.ai restarts the learning only for those stages).


Version 0.9 > JULY 22nd, 2019

Team members

As the "owner" of the account, you can now invite some colleagues as "Managers" on your account :  analytics manager, game designer, developer for the SDK integration...

If you have several games on askblu.ai, you can restrict each manager's access to one or several games. Therefore, if you are a publisher, you can set up several games from different studios in askblu.ai, then invite studio members and restrict their access to their own games only.

Pain points

If askblu.ai indicates that stage X is too difficult for your overall player audience (phase 1), but you want to keep stage X like that and you won't change its difficulty tuning, just mark it in the AI Dashboard page, and askblu.ai will consider it as "ok" so player personalization can be activated (easier or harder around your "difficult" default value).


Version 0.8 > MAY 19th, 2019

First BETA release

First version with all the core features:

- basic metrics page (new users, DAU/WAU/MAU, sessions, D1, D3, D7, D15 and D30 retention rates)

- sessions funnels page, stages funnels page, stages results page, all with 2 columns: one for "active" players (going through askblu.ai) and one for "passive" players (not going through askblu.ai).

- AI Dashboard page, with all played stages and, for each stage, global tuning feedback (phase 1) and player personalization activated if the stage tuning is ok (phase 2).

- iOS and Android SDK, in native and Unity versions.