Debugging
Debugging Tools
When debugging why a certain user received a certain value, you have a number of troubleshooting tools at your disposal:
Diagnostics / Log Stream
Every config in the Statsig ecosystem (meaning Feature Gates, Dynamic Configs, Experiments, and Layers) has a Setup tab and a Diagnostics tab. The Diagnostics tab is useful for seeing higher-level pass/fail/bucketing population sizes over time, via the checks chart at the top.
For debugging specific checks, the log stream at the bottom is useful: it shows both production and non-production exposures in near real time.
Note: To see logs from non-production environments, toggle the "Show non-production logs" option in the upper right corner.
Evaluation Details
Clicking on a specific exposure shows more details on its evaluation. You can see info like the rule and userID in the exposure stream, and clicking on an individual row shows additional factors like Evaluation Reason, SDK, Server Details and more - all of which can help you debug your setup.
Evaluation Reason
Evaluation reasons are a way to understand why a certain value was returned for a given check. All SDKs provide the Data Source, which is where your Statsig Client/Server instance is getting its data. Newer SDKs also provide a Reason, which tells you whether an individual check was recognized, overridden, or otherwise resolved, given how you initialized. These reasons are intended for debugging and internal logging purposes only, and are sometimes updated in new SDK versions.
- Client SDKs
- Server SDKs
#1. Data Source
For client SDKs, the evaluation state can be:
Source Name | Description | Type | Debugging Suggestions |
---|---|---|---|
Network | Fetched at SDK initialization time from Statsig's servers. | Normal | |
Bootstrap | From bootstrapping the client SDK with a set of values (often from a Statsig Server SDK instance, see here). | Normal | |
Prefetch | Fetched from the prefetchUsers API (js-client only), see here. | Normal | |
NetworkNotModified | A request to the Statsig network was successful, but the cached values were already up to date for this user. | Normal | |
Sticky (old SDKs) | Persisted from a sticky evaluation previously. | Normal | |
LocalOverride (old SDKs) | From an override set locally on the SDK via an override API. | Normal | |
Cache | Loaded from the local storage cache for the current user, and network result was not available. | Normal | Not explicitly an error state, but you may be checking a config before initialize returns. |
InvalidBootstrap | The set of values was for a different user than the SDK was initialized with. These are discarded for analysis. | Error | See Fixing InvalidBootstrap |
Error | An unknown error has occurred, and was logged to Statsig servers. | Error | Reach out to us in Slack for support. |
Error:NoClient (js-client-only) | No client was found in your StatsigContext. | Error | You've likely made a call to a Statsig hook outside of a <StatsigProvider> , verify your setup and try again. |
Unrecognized (old SDKs) | The SDK was initialized, but this gate/experiment/config did not exist in the set of values. | Error | Confirm the experiment or gate is configured in the Statsig console and you're using the correct API key. |
#2. Reason (new SDKs only)
Newer versions of the SDK report both the initialization state above and the source of the individual value that was returned.
Reason Name | Description | Type | Debugging Suggestions |
---|---|---|---|
Recognized | The value was recognized in the set of configs the client was operating with. | Normal | |
Sticky | The value is from keepDeviceValue = true on the method call. | Normal | |
LocalOverride | The value is from a local override set on the SDK via an override API. | Normal | |
Unrecognized | The value was not included in the set of configs the client was operating with. | Error | Confirm the experiment or gate is configured in the Statsig console and you're using the correct API key. |
For example, Network:Recognized means the SDK had up-to-date values from a successful initialization network request, and the gate/config/experiment you were checking was defined in the payload.
If you are not sure why a config was not included (resulting in an "Unrecognized" source), it could be excluded due to Target Apps, or Client Bootstrapping.
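If you want to surface these details in your own logging, newer client SDKs expose them on the values they return. Below is a minimal sketch assuming the @statsig/js-client package, where the returned gate object carries a details field with the combined reason string; verify the exact shape against your SDK version's reference.

```js
// Minimal sketch: logging evaluation details from a client SDK check.
// Assumes @statsig/js-client; the details.reason field is an assumption
// here - verify against your SDK version's documentation.
import { StatsigClient } from '@statsig/js-client';

const client = new StatsigClient('client-sdk-key', { userID: 'user-a' });
await client.initializeAsync();

const gate = client.getFeatureGate('my_gate');
// Prints something like: my_gate true Network:Recognized
console.log(gate.name, gate.value, gate.details.reason);
```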
#1. Data Source
For server SDKs, the evaluation state can be:
Source Name | Description | Type | Debugging Suggestions |
---|---|---|---|
Network | Configurations fetched at SDK initialization time from Statsig's servers. | Normal | |
Bootstrap | From bootstrapping the server SDK with a set of values. | Normal | |
DataAdapter | Values come from the provided data adapter or data store, see here for more. | Normal | |
LocalOverride (old SDKs only) | From an override set locally on the SDK via an override API. | Normal | |
StatsigNetwork | Custom proxy/gRPC streaming triggered the fallback behavior, so the SDK fell back to the Statsig API. | Fallback | Review your proxy setup; your values are up to date, but they are not being delivered via the expected method. |
Uninitialized | The SDK was not yet successfully initialized. | Error | Revisit your Initialization Strategy, as you're checking configs before initialization is complete. |
Unrecognized (old SDKs only) | The SDK was initialized, but this gate/experiment/config did not exist in the set of values. | Error | Confirm the experiment or gate is configured in the Statsig console and you're using the correct API key. |
#2. Reason (new SDKs only)
Reason Name | Description | Type | Debugging Suggestions |
---|---|---|---|
LocalOverride | From an override set locally on the SDK via an override API. | Normal | |
None | Successful evaluation. | Normal | |
Unrecognized | This gate/experiment/config did not exist in the set of values. | Error | Confirm the experiment or gate is configured in the Statsig console and you're using the correct API key. |
Unsupported | The SDK does not support this condition type/operator. Usually, this means the SDK is out of date and missing new features. | Error | Update your SDK to the latest version. |
Error | An unknown error occurred during evaluation. | Error | Reach out to us in Slack for support. |
So Network means the SDK was initialized with values from the network and the evaluation was successful, while Network:Unrecognized means the SDK was initialized with values from the network, but the gate/config/experiment you were checking was not included in the payload.
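A common source of the Uninitialized state is checking values before initialization has completed. Here is a minimal sketch of the safe pattern with the statsig-node server SDK (the same pattern applies to other server SDKs):

```js
// Minimal sketch: awaiting initialization before checking gates, which
// avoids the "Uninitialized" evaluation reason. Uses statsig-node.
const Statsig = require('statsig-node');

async function main() {
  // Wait for the initial config definitions to download.
  await Statsig.initialize('server-secret-key');

  // Safe to evaluate now; a check made before this point would
  // report Uninitialized and return default values.
  const passes = await Statsig.checkGate({ userID: 'user-a' }, 'my_gate');
  console.log('my_gate:', passes);
}

main();
```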
Server Details
In addition to these reasons, the most recent versions of server SDKs also give you two timestamps to watch: the time at which the config definitions the SDK initialized with were generated, and the time at which the SDK last synced those definitions. When you change a gate/config/experiment, the project time will update and server SDKs will download the new definition. If you have not changed your project in two hours, and the evaluation time says the SDK was up to date as of two hours ago, then you're evaluating the most up-to-date definition of that gate/experiment.
In this example, the project was last updated yesterday, and the SDK was initialized with those values. The project has not updated since that time, and the SDK is still using that same set of definitions which it fetched from the network. You can also see the SDK type and version associated with a given check.
Mocking Statsig / Local Mode
To facilitate testing with Statsig, we provide a few tools that let you test your code without fetching values from the Statsig network:
- Local Mode: By setting the localMode parameter to true, the SDK will operate without making network calls and return only default values. This is ideal for dummy or test environments that should remain disconnected from the network.
- Override APIs: Use the overrideGate and overrideConfig APIs on the global Statsig interface. These allow you to set overrides for gates or configurations either for specific users, or for all users by omitting the user ID.
We recommend enabling localMode and applying overrides for gates, configurations, or experiments to specific values to thoroughly test the various code flows you are developing.
For specific SDK implementations, refer to StatsigOptions in the respective SDK documentation.
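For example, with the statsig-node server SDK, a test setup might look like the following sketch (the gate, config, and user names are illustrative):

```js
// Minimal sketch: testing with localMode plus overrides in statsig-node.
const Statsig = require('statsig-node');

async function testSetup() {
  // localMode keeps the SDK fully offline; checks return default
  // values unless an override is set.
  await Statsig.initialize('secret-key', { localMode: true });

  // Override a gate for one specific user...
  Statsig.overrideGate('my_gate', true, 'user-a');
  // ...and a config for all users by omitting the user ID.
  Statsig.overrideConfig('my_config', { color: 'blue' });

  console.log(await Statsig.checkGate({ userID: 'user-a' }, 'my_gate')); // true
  console.log(await Statsig.checkGate({ userID: 'user-b' }, 'my_gate')); // false (default)
}

testSetup();
```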
Client SDK Debugger
It can be useful to inspect the current values that a Client SDK is using internally. For this, we have a Client SDK Debugger. With this tool, you can see the current User object the SDK is using as well as the gate/config values associated with it.
Javascript/React: Via a Chrome Extension https://github.com/statsig-io/statsig-sdk-debugger-chrome-extension
NOTE: Accounts signing in to the Statsig console via Google SSO are not supported by this debugging tool.
iOS: Available with Statsig.openDebugView() in v1.26.0 and above.
Android: Available with Statsig.openDebugView() in v4.29.0 and above.
(Screenshots: Landing, Gates List, Gate Details, Experiment Details)
FAQs
For more SDK-specific questions, check out the FAQs on the respective SDK pages. If you have more questions, feel free to reach out directly in our Slack Community.
Invalid Bootstrap
This can occur when you are Bootstrapping a Statsig Client SDK with your own prefetched or generated values. The InvalidBootstrap reason signals that the user the Client SDK is operating against is not the same as the one used to generate the bootstrap values.
The following pseudo code highlights how this can occur:
// Server Side
userA = { userID: 'user-a' };
bootstrapValues = Statsig.getClientInitializeResponse(userA);
// Client Side
bootstrapValues = fetchStatsigValuesFromMyServers(); // <- Network request that executes the above logic
userB = { userID: 'user-b' }; // <- This is not the same User
Statsig.initialize("client-key", userB, { initializeValues: bootstrapValues });
User objects must also be an exact, one-to-one match. The SDK will treat a user with slightly different values as a completely different user. For example, the following two user objects would also trigger InvalidBootstrap even though they share the same userID.
userA = { userID: 'user-a' };
userAExt = { userID: 'user-a', customIDs: { employeeID: 'employee-a' }};
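To avoid this, initialize the client with the exact user object that generated the bootstrap values, for example by returning the user alongside the values. In pseudocode (the response shape here is illustrative):
// Server Side
user = { userID: 'user-a' };
response = { user: user, values: Statsig.getClientInitializeResponse(user) };
// Client Side
response = fetchStatsigValuesFromMyServers(); // <- Returns the user and the values together
Statsig.initialize("client-key", response.user, { initializeValues: response.values }); // same user => valid bootstrap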
Environments
SDKs get the environment configurations from initialization options. If no environment is provided, the SDK will default to the production environment.
If you are wondering why a certain user is not passing an environment-based condition, or what environment your SDK is initialized with, you can check the user properties in any of the log streams. The statsigEnvironment property will show you the environment the SDK is operating in.
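The environment tier itself is set through the initialization options. A minimal sketch with the JavaScript client SDK (the tier name is illustrative):

```js
// Minimal sketch: initializing against a non-production environment tier.
// If the environment option is omitted, the SDK defaults to production.
Statsig.initialize('client-sdk-key', { userID: 'user-a' }, {
  environment: { tier: 'staging' },
});
```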
Maximizing Event Throughput
This is currently only applicable to the Python SDK (v0.45.0+).
The SDK batches and flushes events in the background to our server. When the volume of incoming events exceeds the SDK's flushing capacity, some events may be dropped after a certain number of retries. To reduce the chances of event loss, you can adjust several settings in the Statsig options:
- Event Queue Size: Determines how many events are sent in a single batch.
- Increasing the event queue size allows more events to be flushed at once, but it will consume more memory. It's recommended not to exceed 1800 events per batch, as larger payloads may result in failed requests.
- Retry Queue Size: Specifies how many batches of events the SDK will hold and retry.
- By default, the SDK keeps 10 batches in the retry queue. Increasing this limit allows more batches to be retried, but also increases memory usage.

Tuning these options can help manage event volume more effectively and minimize the risk of event drops.
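Since this applies to the Python SDK, here is a hedged sketch of what tuning these options might look like. The option names event_queue_size and retry_queue_size are assumptions on my part; verify them against the StatsigOptions reference for your SDK version.

```python
# Hedged sketch: tuning event flushing in the Python SDK (v0.45.0+).
# The option names below are assumptions - confirm them in the
# StatsigOptions reference before relying on this.
from statsig import statsig, StatsigOptions

options = StatsigOptions(
    event_queue_size=1000,  # events per batch; keep at or below ~1800
    retry_queue_size=20,    # batches held for retry; default is 10
)
statsig.initialize("server-secret-key", options)
```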