FAQ
And for everything else, there's always the FAQ. We'll try to keep this list updated with more recent and common questions, and move those answers to their respective places in the docs over time.
I don't see my client or server language listed, can I still use Statsig?
If none of our current SDKs fit your needs, let us know! Email jkw [at] statsig [dot] com. We're working hard to bring the power of Statsig to your client or server with more SDKs, plugins, and integrations shipping all the time.
How can I get started with running an A/B Test?
If you have a feature built but not yet released, you can run a simple A/B test by opening up a partial rollout using a feature gate. This will create test and control groups to measure the impact of your new feature. If you already have a feature in production and want to test different variants, create an experiment. Either way, you can analyze the results in the Pulse Results tab of your feature gate or experiment.
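For example, a minimal sketch using the JavaScript client SDK might look like this (the gate name, event name, and render functions here are hypothetical):
if (statsig.checkGate('new_checkout_flow')) {
  renderNewCheckout(); // test group sees the new feature
} else {
  renderOldCheckout(); // control group sees the existing experience
}
// Log the metric you want Pulse to compare between the groups.
statsig.logEvent('purchase_completed', totalCents);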
Is it possible to add a layer to a running experiment?
No. Once an experiment has started, you can no longer change its layer; doing so would compromise the integrity of the experiment. (We may allow this in the future.)
Can I target my experiment at a subset of users, e.g. users on iOS only?
Yes, you can! When setting up your experiment, you can select a Feature Gate that contains targeting rules.
Your Feature Gate can explicitly pass only users who meet specific criteria - in this example, users on iOS. Learn more about feature gates here.
Why should I define parameters for my experiments and get them in code, instead of just getting the group?
Even though some products on the market take the latter approach, parameterizing experiments is what companies like Facebook, Uber, and Airbnb do in their internal experimentation platforms. It allows for much faster iteration (no code change for new experiments) and more flexible experiment design.
Take a look at this example, where you are testing the color of a button. If you fetch the group the user is in and decide what the button looks like in code, it will look like this:
if (otherExpEngine.getExperiment('button_color_test').getGroup() === 'Control') {
  color = 'BLACK';
} else if (otherExpEngine.getExperiment('button_color_test').getGroup() === 'Blue') {
  color = 'BLUE';
}
In Statsig, you will add a parameter to your experiment named "button_color", and your code will look like this:
color = statsig.getExperiment('button_color_test').getString('button_color', 'BLACK');
In the first setup, if you want to test a new color, say "Green", you will need to change your experiment, make a code change, and even wait for a new release cycle if you are developing on mobile platforms.
In the second setup using Statsig, you can simply change your experiment to add a new group that returns "GREEN" and be done. No code change or waiting for a release cycle needed!
When should I create a new Project?
Projects have distinct boundaries - nothing is shared between them. Create a new project when you're managing a distinct product with its own userIDs and metrics. If your app has web, iOS, and Android endpoints, you should manage them in one project so you can measure metrics like DAU across them.
If you have a marketing website (which only sees anonymous users) that is distinct from your product (which only sees signed-in users), you can create separate projects for them. However, if you want to track success across them, you'll want to manage them in the same project. An example of this is measuring user signup on the marketing website, where the success metric is new users who are still active 30 days after account creation.
When I increase the rollout % of a rule on a feature gate, will users who were passing the rule continue to pass?
Yes. For example, if your rule was passing 10%, and you increase it to 20%, the original 10% of users will still pass, and an additional 10% will change from fail to pass.
If you find a bug with the rollout, disable the gate, fix the bug, and then re-enable the gate; Statsig will maintain the set of users exposed to the rollout. Reducing the rollout % and then increasing it will cause users to be shuffled, so we don't recommend doing this.
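To see why increasing a rollout keeps existing users passing, here is an illustrative sketch of hash-based bucketing. This is not Statsig's actual implementation; the function and salt names are hypothetical:
const crypto = require('crypto');

// Each user lands in a stable bucket derived from a per-gate salt.
function passesRollout(userID, gateSalt, rolloutPercent) {
  const hash = crypto.createHash('sha256').update(`${gateSalt}.${userID}`).digest();
  const bucket = hash.readUInt32BE(0) % 10000; // stable value in [0, 10000)
  // Raising rolloutPercent from 10 to 20 only widens the passing range,
  // so every user who passed at 10% still passes at 20%.
  return bucket < rolloutPercent * 100;
}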
What counts as a billable event?
Statsig records an event when your application calls the Statsig SDK to check whether a user should be exposed to a feature gate or experiment, to log an event for a custom metric, or to check a dynamic config.
Statsig dedupes billable exposure events for the same user and feature gate/experiment on client SDKs (60-minute window; a proxy for session duration) and server SDKs (1-minute window; a proxy for handling a user request).
Experiment checks that result in no allocation (e.g. the experiment hasn't started, or has finished) and checks on disabled Feature Gates (fully shipped or abandoned, with no rule evaluation) do not generate billable exposure events.
Layer exposures are slightly more nuanced as outlined here.
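As a rough illustration, each of the following server SDK calls can record a billable event, subject to the deduping described above (the gate, config, experiment, and event names are hypothetical, and the exact SDK surface varies by language):
const Statsig = require('statsig-node');

async function main() {
  await Statsig.initialize('server-secret-key');
  const user = { userID: 'user-123' };

  await Statsig.checkGate(user, 'my_gate');           // gate exposure
  await Statsig.getConfig(user, 'my_dynamic_config'); // dynamic config check
  await Statsig.getExperiment(user, 'my_experiment'); // experiment exposure
  Statsig.logEvent(user, 'add_to_cart', 1);           // custom event for a metric
}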
How do I manage my billable event volume?
See the question above for what counts as a billable event. Some tools available to keep an eye on your event volume include:
- Go to the Usage and Billing tab within your Project Settings to download a CSV file that details which gate/experiment checks or events are contributing to your event volume. The CSV is the richest way to explore usage data; open it in Excel and create a Pivot Table to quickly sort and find the top event volume drivers.
- The Usage and Billing tab also offers an in-product view of which gates/experiments or events consume the most events.
- Admins will be proactively alerted when Enterprise Contracts hit 25/50/75/100% usage of their contracted events.
- Look for gates that are 0% or 100% pass. These can be launched (always return Pass) or disabled (always return Fail) to stop generating billable events. More information.
- For experiments, filter to Status = In Progress and Created < [day-45] to find long-running experiments, and make sure these are intentional and not just forgotten.