📄️ Your First Feature
Now that you have created your Statsig account, let's get started on building and shipping your first feature using Statsig.
📄️ Logging Events
After creating your first gate, you'll want to start logging events so Statsig understands what you care about. These events are automatically derived into metrics and used to quantify the impact of your features on the overall health of your product.
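As a rough sketch of what a logged event looks like, the snippet below builds an event payload with a name, an optional value, and metadata; the `makeEvent` helper is illustrative, and the commented-out `statsig.logEvent` call assumes the server SDK's logging method.

```javascript
// Illustrative helper: shape an event the way the Statsig SDKs expect
// (event name, optional numeric/string value, optional metadata).
function makeEvent(name, value = null, metadata = {}) {
  return { eventName: name, value, metadata, time: Date.now() };
}

const event = makeEvent('add_to_cart', 29.99, { sku: 'SKU-123' });
// With the statsig-node SDK this would be sent roughly as:
// statsig.logEvent(user, event.eventName, event.value, event.metadata);
console.log(event.eventName);
```

Each event name becomes a metric you can chart and use in experiment scorecards.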
📄️ Make Your Code Dynamic
Now that you have created your Statsig account, and perhaps even your first feature, this guide will help you make your app a bit more flexible with Dynamic Config.
📄️ Your First A/B Test
In this guide, you will create and implement your first A/B/n test. While you can use Statsig's Feature Gates to roll out new features, Statsig's Experiments enable you to run all kinds of A/B tests, from simple bivariant (A vs. B) experiments to multi-variant experiments (A vs. B/C/D/n) and mutually exclusive experiments.
📄️ Your First Device-level Experiment
In cases where you're unable to establish the user's identity and cannot use a user ID as the unit of randomization for your experiment, you can run a device-level experiment instead.
📄️ Experiment on custom ID types
Sometimes you may want to randomize experiment bucketing based on an ID other than the user ID or the Statsig-generated Stable ID. For example, say your company builds task management tools for other companies to use.
📄️ Creating Holdouts
What is a holdout?
📄️ Using Environments
All of our SDKs allow you to set your app's environment tier during initialization. If you'd like feature gates, dynamic configs, and/or experiments to evaluate to different values in a development/staging environment vs. production, set the correct environment in your code when initializing, and configure the corresponding features in the Statsig console to evaluate differently for that environment tier. The sections below go into detail on how to do this.
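As a minimal sketch of the initialization pattern, the snippet below maps a `NODE_ENV` value to a Statsig environment tier and builds the options object; the `buildStatsigOptions` helper and the tier-mapping choices are assumptions, and the commented-out `statsig.initialize` call shows where the options would be passed to the statsig-node SDK.

```javascript
// Map common NODE_ENV values to a Statsig environment tier.
// The mapping itself is an illustrative convention, not a Statsig requirement.
function buildStatsigOptions(nodeEnv) {
  const tier =
    nodeEnv === 'production' ? 'production'
    : nodeEnv === 'staging' ? 'staging'
    : 'development';
  return { environment: { tier } };
}

const options = buildStatsigOptions(process.env.NODE_ENV || 'development');
// With the statsig-node SDK, initialization would look roughly like:
// await statsig.initialize(process.env.STATSIG_SECRET, options);
console.log(options.environment.tier);
```

Once the tier is set at initialization, any environment-specific rules you configure in the console take effect automatically.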
📄️ Setting up Reviews
You can enable reviews for all Statsig resources such as feature gates, dynamic configs, segments, and experiments.
📄️ Reusable Targeting (Segments)
User Segments allow you to predefine targeting groups for re-use in Feature Gates and Dynamic Configs. Think of it as a reusable macro for a set of users.
📄️ Private Attributes
Evaluating feature gates, dynamic configs, segments, and experiments without logging user data to Statsig
📄️ Synchronized Launches
As you get used to developing with feature flags, you will start to include them from the beginning of your feature development, changing the audience of your features as you go. As this takes hold across your team/organization/company, you will want to be able to tie features together and launch them simultaneously, as part of a broader release.
📄️ A/B Test on Shopify
If you're interested in A/B testing on your Shopify site, there are two steps you will need to take.
📄️ Use Statsig for analytics
If you are new to Statsig and are already using another feature flagging or experimentation platform, it is easy to import your events to Statsig and get analytics automatically.
📄️ Using Gates or Experiments
With Statsig, you can create control and test groups, and compare these groups as an A/B test using either a feature gate or an experiment.
📄️ Bootstrapping Your Experimentation Program
In this guide, we lay out a simple, successful program for getting started with experimentation at your company using Statsig. The best part? You can get started completely free, with Statsig’s free tier of up to 5M events per month and no limits on the number of team members.
📄️ Running an A/A Test
In this guide, we will walk you through how to leverage Statsig’s platform to run an A/A test on your product.
📄️ Serverless Environments
In this guide, we will walk you through how to leverage Statsig’s platform in serverless environments. The examples in this guide use the statsig-node SDK in a Google Cloud Function.
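A common concern in serverless runtimes is initializing the SDK once per warm instance rather than on every invocation. The sketch below shows that memoization pattern; the `ensureInitialized` helper is illustrative, and the commented-out `statsig.initialize`/`statsig.flush` calls assume the statsig-node SDK.

```javascript
// Reuse one SDK initialization across warm invocations of the same
// serverless instance, instead of re-initializing on every request.
let initPromise = null;

function ensureInitialized(initFn) {
  if (!initPromise) initPromise = initFn();
  return initPromise;
}

// In a Google Cloud Function handler, usage would look roughly like:
// await ensureInitialized(() => statsig.initialize(process.env.STATSIG_SECRET));
// ... check gates, log events ...
// await statsig.flush(); // send queued events before the instance is frozen

// Demonstrate that the init function only runs once:
let calls = 0;
ensureInitialized(() => { calls += 1; return Promise.resolve(); });
ensureInitialized(() => { calls += 1; return Promise.resolve(); });
console.log(calls);
```

Flushing queued events before the handler returns matters because serverless instances can be frozen between invocations, which would otherwise delay or drop buffered events.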
📄️ Config History
Config history for Feature Gates, Experiments, Dynamic Configs, and Segments can be accessed by clicking the "History" button at the top right of the page.
📄️ POC to Production
In this guide, we introduce you to transitioning from a proof of concept or limited test on Statsig to a broader production deployment.
📄️ Testing Your Setup
Statsig increases your engineering velocity. This page has pointers to features across the product that enable you to test while moving fast.
📄️ A/B Test Email Campaigns
A/B testing an email campaign and getting experiment results on downstream product metrics (in addition to top-level email interaction metrics) is a common use case for Statsig customers.
📄️ CMS Integrations
Using Statsig with a CMS