## Use Cases
Contextual bandits bridge the gap between un-personalized solutions and fully-fledged ranking solutions. Their main limitations are that contextual bandits:

- Have a fixed output set of variants they can show
- Have limited ability to account for complex context on the "object" being shown, or to predict for novel content (e.g. video ranking)
## Methodology
Statsig's Autotune AI uses a LinUCB-based approach. We think this paper is a good introduction to the topic: Li, Chu, Langford, and Schapire. For coverage of regret analysis, we think these lecture notes by Jain at the University of Washington are a useful resource.

Autotune AI works with categorical and numerical features. Any key-value pairs attached to the custom object on the Statsig user are converted into categorical or numerical features based on their data type; categorical features are one-hot encoded. You should not need to build complex training pipelines, though many customers pass pre-evaluated user attributes or predictions as context objects.

There is support for specifying features up front in Statsig's console. This can be helpful for knowing which features you need to fetch for the bandit in cases where there's an expensive or live lookup. For Warehouse Native customers, there is planned work to let you join entity properties during the analysis phase, so you can plug your own feature store into Autotune AI analysis, similar to our approach with CURE.
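To make the approach concrete, here is a minimal sketch of disjoint-arm LinUCB in the style of Li et al., together with the kind of feature encoding described above (numeric values pass through, categorical values are one-hot encoded). All names here are illustrative assumptions, not the Statsig implementation.

```python
import numpy as np

def encode_context(ctx, categories):
    """Turn a key-value context dict into a feature vector.
    `categories` maps each categorical key to its known values."""
    feats = [1.0]  # bias term
    for key in sorted(ctx):
        value = ctx[key]
        if key in categories:   # categorical -> one-hot
            feats.extend(1.0 if value == c else 0.0 for c in categories[key])
        else:                   # numeric -> pass through
            feats.append(float(value))
    return np.array(feats)

class LinUCB:
    """Disjoint-arm LinUCB: one ridge-regression model per arm."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vectors

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            # mean payoff estimate + exploration bonus (upper confidence bound)
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Hypothetical context: one categorical key, one numeric key.
categories = {"device": ["ios", "android", "web"]}
x = encode_context({"device": "ios", "age": 30.0}, categories)
bandit = LinUCB(n_arms=3, dim=x.shape[0])
arm = bandit.select(x)
bandit.update(arm, x, reward=1.0)
```

Each call to `select` picks the arm with the highest upper confidence bound, and `update` folds the observed reward back into that arm's model.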
You can also fetch a ranked list from Statsig and then manually expose the variants you actually show to the user, which is useful when you have client-side filtering or want to show multiple options; see Advanced Usage.
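The ranked-list pattern above can be sketched as follows. The fetch and filtering functions here are hypothetical stand-ins, not real Statsig SDK calls; the point is that exposures are logged only for the variants the user actually sees.

```python
def fetch_ranked_variants():
    # Placeholder: in practice this list comes from the bandit service.
    return ["variant_a", "variant_b", "variant_c"]

def is_eligible(variant, user):
    # Example of client-side filtering: hide variants the user dismissed.
    return variant not in user.get("dismissed", [])

def show_and_expose(user, max_shown=2):
    """Walk the ranked list, show up to `max_shown` eligible variants,
    and record an exposure only for each variant actually shown."""
    shown, exposures = [], []
    for variant in fetch_ranked_variants():
        if len(shown) >= max_shown:
            break
        if is_eligible(variant, user):
            shown.append(variant)
            exposures.append(variant)  # manual exposure log
    return shown, exposures

shown, exposures = show_and_expose({"dismissed": ["variant_a"]})
```

Because exposure is decoupled from fetching, variants that are filtered out client-side never pollute the bandit's reward signal.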