Set Up Your Contextual Autotune
The first thing you will do is configure your contextual autotune in the Statsig console. This can also be configured programmatically via the Statsig Console API.

Make the Contextual Autotune
Log into your Statsig console, and navigate to Autotune under Experiments.

Configure Optimization
Choose what you are optimizing for, such as:
- Users clicking
- Checkouts
- Actions
- Revenue
- Latency
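Whichever outcome you pick, the bandit learns from events your application logs through the SDK. The sketch below shows the shape of that logging for a checkout outcome; the event name, metadata keys, and the `log_event` call are illustrative, so check the statsig-python-core reference for the exact signature in your SDK version:

```python
def checkout_event(order_total, currency="USD"):
    # Shape the outcome event the autotune will optimize toward.
    # "checkout" and the metadata keys are placeholder names.
    return {
        "event_name": "checkout",
        "value": order_total,
        "metadata": {"currency": currency},
    }


def log_checkout(statsig, user, order_total):
    # Record the outcome so Statsig can attribute it to the served variant
    # (requires an initialized Statsig client and network access).
    event = checkout_event(order_total)
    statsig.log_event(
        user,
        event["event_name"],
        value=event["value"],
        metadata=event["metadata"],
    )
```

Logging the event against the same user that was served the variant is what lets the attribution window (below) tie outcomes back to decisions.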
Training Settings
The other settings can use defaults, but you may want to tune them as well:

- Exploration window: how long to serve traffic randomly to bootstrap the bandit's exploration
- Attribution window: how long, after a user sees your variant, outcome events are counted. If set to 1 hour, a user has 1 hour to click after seeing your experience in the example above
- Exploration rate: controls how much the bandit favors explore vs. exploit. 0 would not use confidence intervals at all and would just use the best prediction; 1 would use close to the 99.9% CI instead of the 95% CI for exploration
- Long-term exploration allocation %: the fraction of traffic that is always randomly assigned. Use higher values if you plan to run this contextual autotune for a long time, to help avoid drift
- Feature list: provide a list of features Statsig should use to train the model. This is just a mask/filter; if not set, Statsig will read every custom attribute. The main use case is fetching this via the Console API to understand which features a given contextual autotune requires for evaluation when they're fetched on-demand (which can introduce latency)
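If you do fetch the feature list via the Console API, a sketch like the following could retrieve it. The endpoint path and the `featureList` response field here are assumptions for illustration, not the documented API; verify both against the Console API reference (the `STATSIG-API-KEY` header is the Console API's auth mechanism):

```python
import json
import urllib.request


def extract_feature_list(config):
    # Pull the feature mask from an autotune config payload
    # ("featureList" is an assumed field name).
    return config.get("featureList", [])


def fetch_autotune_config(console_api_key, autotune_id):
    # Hypothetical Console API call; the real path may differ.
    req = urllib.request.Request(
        f"https://statsigapi.net/console/v1/autotunes/{autotune_id}",
        headers={"STATSIG-API-KEY": console_api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("data", {})
```

With those in place, `extract_feature_list(fetch_autotune_config(key, "my_contextual_autotune"))` would tell your serving layer which attributes to gather before evaluation.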
Use the Contextual Autotune in Code (Python Example)
We assume you have your server secret key for the following code. Before running Python, you'll need the SDK:

pip install statsig-python-core
First, import and initialize Statsig:
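A minimal sketch of that flow, assuming an autotune named `my_contextual_autotune`, a string parameter `variant_name`, and contextual features `plan` and `region`; all of those names are placeholders, and the method names follow the statsig-python-core SDK, so verify them against the SDK reference for your version:

```python
def context_attributes(plan, region):
    # Custom attributes become the features the contextual model evaluates
    # ("plan" and "region" are placeholder feature names).
    return {"plan": plan, "region": region}


def serve_variant(server_secret_key, user_id, attrs):
    # Initialize the SDK and evaluate the contextual autotune
    # (requires network access and a valid server secret key).
    from statsig_python_core import Statsig, StatsigUser

    statsig = Statsig(server_secret_key)
    statsig.initialize().wait()

    user = StatsigUser(user_id=user_id, custom=attrs)
    # Contextual autotunes are evaluated like experiments; the returned
    # group's parameters carry the chosen variant.
    experiment = statsig.get_experiment(user, "my_contextual_autotune")
    variant = experiment.get_string("variant_name", "control")

    statsig.shutdown().wait()
    return variant
```

Calling `serve_variant("server-secret-key", "user-123", context_attributes("pro", "us-east"))` would then return the variant chosen for that user.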
To confirm it's working, you can:
- see that you fetched a value from one of your variants
- go to the diagnostics page of your autotune, and see a log of the userID along with the corresponding variant