Getting started with Autotune AI can be done very quickly.
Set Up Your Contextual Autotune
First, configure your contextual autotune in the Statsig console. This can also be done programmatically via the Statsig Console API.
Make the Contextual Autotune
Log into your Statsig console, and navigate to Autotune under Experiments.
Click Create. Name your contextual autotune, and optionally describe the goal so other users can understand the motivation behind it.
Set your autotune type to Contextual.
Configure Optimization
If you're optimizing for a discrete outcome, such as:
- Users clicking
- Checkouts
- Actions
choose Event Occurring / Outcome. If you're optimizing for a continuous output, such as:
- Revenue
- Latency
choose Event Value and set the directionality (whether higher or lower values are better). You will also choose the field from your log or metric source (warehouse native) that you want to use for the value.
For warehouse native customers, specify the metric source and optional filters for your target event.
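If you're logging events to Statsig directly (rather than through a warehouse), the target outcome is just a regular Statsig event logged from your code. Here's a minimal sketch, assuming the Python Server Core SDK that's set up later in this guide; the "checkout" event name and the value keyword argument are illustrative and may differ in your setup:

# Illustrative only: 'checkout' is a made-up event name.
statsig.log_event(user, "checkout")                # discrete outcome (the event occurring)
statsig.log_event(user, "checkout", value=49.99)   # continuous outcome (event value, e.g. revenue)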
We highly recommend wrapping contextual autotunes in an experiment, but it is not required. You can set this up either before or after creating your contextual autotune. This experiment wraps the autotune calls in code and can be used to measure the topline impact of using this contextual autotune in your project.
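As a rough sketch of what that wrapping can look like in code (using the Python SDK setup shown later in this guide), you can gate the autotune call on a parameter of the wrapper experiment. The experiment name "autotune_wrapper" and its "mode" parameter below are hypothetical:

# Hypothetical wrapper experiment: only the test group calls the contextual autotune.
wrapper = statsig.get_experiment(user, "autotune_wrapper")
if wrapper.get_string("mode", "control") == "autotune":
    cfg = statsig.get_experiment(user, autotune_name)
    color = cfg.get_string("color", "default color")
else:
    color = "default color"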
Training Settings
The other settings can use defaults, but you may want to tune them as well:
- Exploration window: how long to serve traffic randomly to bootstrap the exploration of the bandit
- Attribution window: how long after a user sees your variant to count outcome events. If set to 1 hour, a user in the example above has 1 hour to click after seeing the experience
- Exploration rate: controls how much the bandit favors exploration vs. exploitation. A value of 0 would not use confidence intervals at all and would just use the best prediction; a value of 1 would use close to the 99.9% CI instead of the 95% CI for exploration (see the illustrative sketch after this list)
- Long-term exploration allocation %: the percentage of traffic that will always be randomly assigned. Use higher values if you plan to run this contextual autotune for a long time, to help avoid drift
- Feature list: the list of features Statsig will use to train the model. This can also be fetched in code to understand which features a given contextual autotune requires for evaluation, if you're fetching them on demand
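To build intuition for the exploration rate (this is only an illustration, not Statsig's actual training algorithm): a contextual bandit typically scores each variant by its predicted outcome plus some multiple of the prediction's uncertainty, and the exploration rate controls how large that multiple is.

# Illustration only - an upper-confidence-bound style score, not Statsig's internals.
def variant_score(predicted_outcome, std_error, exploration_rate):
    # 0 -> no uncertainty bonus, pick the best prediction
    # 1 -> a bonus close to a 99.9% upper confidence bound (z ~= 3.3)
    z = 3.3 * exploration_rate
    return predicted_outcome + z * std_error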
Set up your variants. These are configurations that you will fetch in code. For example, the group below would send this configuration to your codebase and the "red" value could be passed to the color setting on a button.
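For illustration, a variant's parameters might look like the following; the "color" parameter and its values are hypothetical and will match whatever you define in the console:

# Hypothetical variant parameters as defined in the console.
variant_red = {"color": "red"}
variant_blue = {"color": "blue"}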
Use the Contextual Autotune in Code
Statsig supports contextual autotune in Client SDKs and Server Core SDKs. We'll use the Python Server Core SDK in the examples below.
We assume you have your server secret key for the following code. Before running Python, you'll need to install the SDK:
pip install statsig-python-core
First, import and initialize Statsig:
from statsig_python_core import Statsig, StatsigUser

key = "<your_key_here>"                  # your server secret key
autotune_name = "<autotune_name_here>"   # the name of your contextual autotune

statsig = Statsig(key)
statsig.initialize().wait()
Then, create a user object and fetch your config:
user = StatsigUser('user_id', custom={'key1': 'value1', 'key2': 'value2'})
cfg = statsig.get_experiment(user, autotune_name)
Now you have your cfg and can apply it!
color = cfg.get_string("color", "default color")
print(f"Going to use {color} for my color now")
You should be able to:
- see that you fetched a value from one of your variants
- go to the diagnostics page of your autotune, and see a log of the userID along with the corresponding variant
That's it! Your code is now serving personalized variants to your users.
Notes
Statsig requires a few hundred units to train a model, and will not start training until those units' attribution window has elapsed. If you want to test the functionality, we highly recommend "faking a test" to confirm things work as you expect - use logic like
fetch_autotune_value()          # your call to get_experiment for the autotune
if user_country == 'us':        # any condition the model's features can capture
    log_click()                 # your call to log the outcome event
to conditionally send events, and make sure the model picks up on the conditional behavior.
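Putting that together with the SDK calls from above, a fake test might look like the sketch below. The 'click' event name and the country condition are illustrative, and the log_event call is an assumption about your SDK version:

# Sketch of a fake test: only "convert" for US users so the model has a clear pattern to learn.
country = 'us'
user = StatsigUser('user_id', custom={'country': country})
cfg = statsig.get_experiment(user, autotune_name)
color = cfg.get_string("color", "default color")

if country == 'us':
    statsig.log_event(user, 'click')   # assumed log_event call; 'click' is illustrative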