In this tutorial, you will create a new Feature Gate in the Statsig console, set up as an “A/A test”.
Prerequisites
- You already have a Statsig account
- You have already integrated the Statsig Client SDK into an existing application
Step 1: Create a feature gate in the console
The easiest way to run an A/A test in Statsig is with a Feature Gate. You could also use an Experiment, but we chose a Feature Gate for this tutorial for simplicity. Log into the Statsig console at https://console.statsig.com/ and navigate to Feature Gates in the left-hand navigation panel. Click the Create button and enter a name and (optional) description for your feature gate; we will call ours “aatest_example”. Click Create.
Step 2: Check the feature gate in your code
Copy the code snippet in the upper right-hand corner of your feature gate page (under the < > symbol) and drop it into your application at the point where you want to perform the A/A check.
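If your application uses Statsig's JavaScript/TypeScript client SDK, the check looks roughly like the sketch below. The import path, SDK key, user object, and render functions here are illustrative placeholders; the snippet copied from your console is the source of truth for your SDK and version, since method names vary slightly across Statsig's client SDKs.

```typescript
import { StatsigClient } from '@statsig/js-client';

async function render() {
  // Placeholder client key and user; use your own values.
  const client = new StatsigClient('client-YOUR_SDK_KEY', { userID: 'some-user-id' });
  await client.initializeAsync();

  // checkGate logs an exposure for this user and returns the gate value.
  // For an A/A test, both branches should render the exact same experience.
  if (client.checkGate('aatest_example')) {
    renderExperience(); // "test" group: identical to control
  } else {
    renderExperience(); // "control" group
  }
}

function renderExperience() {
  // ... your existing UI code ...
}

render();
```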
Step 3: Review A/A test results
Within 24 hours of starting your A/A test, you’ll see the cumulative exposures in the Pulse Results tab of your feature gate.

- Exposures: make sure you’re seeing exposures flowing through from your product as expected. If you’re not seeing exposures, use the Diagnostics tab and the Exposure Stream to debug.
- Pulse results: roughly 5% of your metrics in Pulse should show a statistically significant change, because Statsig’s stats engine uses 95% confidence intervals. For example, if your gate tracks 40 metrics, you’d expect about 2 of them to show a stat-sig change by chance alone.
Simulated A/A Tests
We’ve made running A/A tests at scale easy by setting up simulated A/A tests that run every day in the background for every company on the platform. An A/A test is like an A/B test, except both groups get the same experience. A/A tests help build trust in your experimentation platform (and your metrics!). A/A tests can be Online or Offline.

An Online A/A test runs on real users. An engineer instruments your app with the Statsig SDK to check for experiment assignment, just as in Step 2 above. Assignment is logged, but there is no difference in the user’s experience. Since there is no real effect, you expect to see only statistical noise: with 95% confidence intervals, only ~1 in 20 metrics will show a stat-sig difference between control and test.

Offline A/A tests
Each offline A/A test runs on a single unit type and works by:
- Querying a representative sample of your data
- Randomly assigning subjects to Test or Control
- Computing relevant metrics for Test vs Control and running them through the stats engine
- Looking at the % of false positives: with a p-value cutoff of 0.05 (typical), you’d expect a ~5% false-positive rate. A simplified simulation of this loop is sketched below.
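To build intuition for that ~5% figure, here is a simplified, self-contained version of the offline loop. It is only a sketch: it uses synthetic data and a plain two-sample z-test in place of Statsig's actual stats engine.

```typescript
// Simplified offline A/A simulation: repeatedly split one sample into
// "test" and "control" at random, compare the metric with a two-sample
// z-test, and count how often the difference is stat-sig at 95% confidence.

function erf(x: number): number {
  // Abramowitz & Stegun 7.1.26 approximation
  const sign = x < 0 ? -1 : 1;
  const t = 1 / (1 + 0.3275911 * Math.abs(x));
  const y =
    1 -
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) *
      t *
      Math.exp(-x * x);
  return sign * y;
}

const normalCdf = (z: number) => 0.5 * (1 + erf(z / Math.SQRT2));

function twoSidedPValue(test: number[], control: number[]): number {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = (xs: number[], m: number) =>
    xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
  const mT = mean(test);
  const mC = mean(control);
  const se = Math.sqrt(
    variance(test, mT) / test.length + variance(control, mC) / control.length
  );
  const z = (mT - mC) / se;
  return 2 * (1 - normalCdf(Math.abs(z)));
}

// Synthetic stand-in for "a representative sample of your data":
// one metric value per unit (e.g. events per user).
const sample = Array.from({ length: 10_000 }, () => Math.random() * 20);

const nTests = 1_000;
let statSig = 0;
for (let i = 0; i < nTests; i++) {
  const test: number[] = [];
  const control: number[] = [];
  for (const value of sample) (Math.random() < 0.5 ? test : control).push(value);
  if (twoSidedPValue(test, control) < 0.05) statSig++;
}

// With no real difference between the groups, expect roughly 5% of runs
// to come out stat-sig.
console.log(`stat-sig rate: ${((100 * statSig) / nTests).toFixed(1)}%`);
```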
File Description
| Column Name | Description |
|---|---|
| metric_name | Name of the metric |
| metric_type | Type of the metric |
| unit_type | The unit type used for randomization (e.g., userID) |
| n_tests | The number of A/A tests run |
| pct_ss_95_pct_confidence | The percentage of tests with a stat-sig result for this metric at 95% confidence |
| avg_units_per_test | The average number of units (often users) sampled into each A/A test |
| avg_participating_units_per_test | The average number of units per test with a value for this metric |
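As a quick sanity check, you can scan this file for metrics whose stat-sig rate drifts well above the expected ~5%. The sketch below assumes a CSV export named simulated_aa_results.csv and a 7.5% alert threshold; both are illustrative assumptions, not part of Statsig's product.

```typescript
import { readFileSync } from 'node:fs';

// Assumes a CSV export with the columns described above; the real file
// name and format may differ.
const rows = readFileSync('simulated_aa_results.csv', 'utf8')
  .trim()
  .split('\n')
  .map((line) => line.split(','));

const header = rows[0];
const col = (name: string) => header.indexOf(name);

for (const row of rows.slice(1)) {
  const metric = row[col('metric_name')];
  const nTests = row[col('n_tests')];
  const pctStatSig = parseFloat(row[col('pct_ss_95_pct_confidence')]);

  // Under 95% confidence intervals a ~5% rate is expected; flag metrics
  // that sit well above that (7.5% is an arbitrary illustrative cutoff).
  if (pctStatSig > 7.5) {
    console.log(`${metric}: ${pctStatSig}% stat-sig across ${nTests} tests; worth a look`);
  }
}
```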
