
Debugging

Debugging Tools

When debugging why a certain user received a certain value, you have a number of troubleshooting tools at your disposal:

Diagnostics / Log Stream

Every config in the Statsig ecosystem (meaning Feature Gates, Dynamic Configs, Experiments, and Layers) has a Setup tab and a Diagnostics tab. The Diagnostics tab is useful for seeing high-level pass/fail/bucketing population sizes over time via the checks chart at the top.


For debugging specific checks, the log stream at the bottom is useful; it shows both production and non-production exposures in near real time.

Note: To see logs from non-production environments, use the "Show non-production logs" toggle in the upper right corner.


Evaluation Details

Clicking on a specific exposure shows more details on its evaluation. You can see info like the rule and userID in the exposure stream, and clicking on an individual row shows additional factors like Evaluation Reason, SDK, Server Details and more - all of which can help you debug your setup.


Evaluation Reason

Evaluation reasons are a way to understand why a certain value was returned for a given check. All SDKs provide the Data Source, which is where your Statsig Client/Server instance is getting its data. Newer SDKs also provide a Reason, which tells you whether an individual check was valid or overridden, given how you initialized. These reasons are intended for debugging and internal logging purposes only, and are sometimes updated in new SDK versions.

1. Data Source

For client SDKs, the evaluation state can be:

| Source Name | Description | Type | Debugging Suggestions |
| --- | --- | --- | --- |
| Network | Fetched at SDK initialization time from Statsig's servers. | Normal | |
| Bootstrap | From bootstrapping the client SDK with a set of values (often from a Statsig Server SDK instance, see here). | Normal | |
| Prefetch | Fetched from the prefetchUsers API (js-client only), see here. | Normal | |
| NetworkNotModified | A request to the Statsig network was successful, but the cached values were already up to date for this user. | Normal | |
| Sticky (old SDKs) | Persisted from a previous sticky evaluation. | Normal | |
| LocalOverride (old SDKs) | From an override set locally on the SDK via an override API. | Normal | |
| Cache | Loaded from the local storage cache for the current user; a network result was not available. | Normal | Not explicitly an error state, but you may be checking a config before initialize returns. |
| InvalidBootstrap | The set of values was for a different user than the SDK was initialized with. These are discarded for analysis. | Error | See Fixing InvalidBootstrap |
| Error | An unknown error occurred and was logged to Statsig servers. | Error | Reach out to us in Slack for support. |
| Error:NoClient (js-client only) | No client was found in your StatsigContext. | Error | You've likely made a call to a Statsig hook outside of a <StatsigProvider>; verify your setup and try again. |
| Unrecognized (old SDKs) | The SDK was initialized, but this gate/experiment/config did not exist in the set of values. | Error | Confirm the experiment or gate is configured in the Statsig console and you're using the correct API key. |

2. Reason (new SDKs only)

Newer versions of the SDK report both the initialization state above and the source of the individual value that was returned.

| Reason Name | Description | Type | Debugging Suggestions |
| --- | --- | --- | --- |
| Recognized | The value was recognized in the set of configs the client was operating with. | Normal | |
| Sticky | The value is from keepDeviceValue = true on the method call. | Normal | |
| LocalOverride | The value is from a local override set on the SDK. | Normal | |
| Unrecognized | The value was not included in the set of configs the client was operating with. | Error | Confirm the experiment or gate is configured in the Statsig console and you're using the correct API key. |

For example: Network:Recognized means the SDK had up-to-date values from a successful initialization network request, and the gate/config/experiment you were checking was defined in the payload.

If you are not sure why a config was not included (resulting in an "Unrecognized" source), it may have been excluded due to Target Apps or Client Bootstrapping.
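
To inspect these details programmatically, newer client SDKs expose them on each check result. Here is a minimal sketch assuming the @statsig/js-client package, run inside an async context; the gate name and client key are placeholders:

// Sketch: reading evaluation details from a check result (@statsig/js-client assumed)
import { StatsigClient } from '@statsig/js-client';

const client = new StatsigClient('client-YOUR_KEY', { userID: 'user-a' });
await client.initializeAsync();

const gate = client.getFeatureGate('my_gate'); // placeholder gate name
// details.reason combines the data source and the value-level reason,
// e.g. "Network:Recognized" or "Cache:Unrecognized"
console.log(gate.details.reason);

Logging this reason alongside your own diagnostics can quickly distinguish "the SDK never received this config" from "the SDK is serving stale cached values."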

Mocking Statsig / Local Mode

To facilitate testing with Statsig, we provide a few tools that let you test your code without fetching values from the Statsig network:

  • Local Mode: By setting the localMode parameter to true, the SDK will operate without making network calls, returning only default values. This is ideal for dummy or test environments that should remain disconnected from the network.

  • Override APIs: Utilize the overrideGate and overrideConfig APIs on the global Statsig interface. These allow you to set overrides for gates or configurations either for specific users or for all users by omitting the user ID.

We recommend enabling localMode and applying overrides for gates, configurations, or experiments to specific values to thoroughly test the various code flows you are developing.

For specific SDK implementations, refer to StatsigOptions in the respective SDK documentation.
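
As an illustration, here is a sketch of what this can look like with the Node server SDK (statsig-node); the gate/config names, values, and user IDs are placeholders:

// Sketch: localMode plus overrides for testing (statsig-node assumed)
const statsig = require('statsig-node');

await statsig.initialize('secret-YOUR_KEY', { localMode: true }); // no network calls; defaults only

statsig.overrideGate('new_checkout_flow', true, 'user-a'); // override for a specific user
statsig.overrideConfig('pricing_config', { discount: 0.2 }); // omit the user ID to apply to all users

const passes = await statsig.checkGate({ userID: 'user-a' }, 'new_checkout_flow'); // true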

Client SDK Debugger

It can be useful to inspect the current values that a Client SDK is using internally. For this, we have a Client SDK Debugger. With this tool, you can see the current User object the SDK is using as well as the gate/config values associated with it.

JavaScript/React: via a Chrome extension: https://github.com/statsig-io/statsig-sdk-debugger-chrome-extension

NOTE: Accounts signing in to the Statsig console via Google SSO are not supported by this debugging tool.

iOS: Available with Statsig.openDebugView(). Available in v1.26.0 and above.

Android: Available with Statsig.openDebugView(). Available in v4.29.0 and above.

[Screenshots: Landing, Gates List, Gate Details, Experiment Details]

FAQs

For more SDK-specific questions, check out the FAQs on the respective SDK pages. If you have more questions, feel free to reach out directly in our Slack Community.

Invalid Bootstrap

This can occur when you are Bootstrapping a Statsig Client SDK with your own prefetched or generated values. The InvalidBootstrap reason signals that the user the Client SDK is operating against is not the same as the one used to generate the bootstrap values.

The following pseudo code highlights how this can occur:

// Server Side

userA = { userID: 'user-a' };
bootstrapValues = Statsig.getClientInitializeResponse(userA);

// Client Side

bootstrapValues = fetchStatsigValuesFromMyServers(); // <- Network request that executes the above logic

userB = { userID: 'user-b' }; // <- This is not the same User
Statsig.initialize("client-key", userB, { initializeValues: bootstrapValues });

Users must also be an exact, one-to-one match. The SDK treats a user with even slightly different values as a completely different user. For example, the following two user objects would also trigger InvalidBootstrap even though they share the same userID.

userA = { userID: 'user-a' };
userAExt = { userID: 'user-a', customIDs: { employeeID: 'employee-a' }};
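
To avoid this, generate the bootstrap values from the exact user object the client will initialize with. Continuing the pseudo code above, one way to structure this:

// Client Side: send the full user object to your server, so the server
// can call Statsig.getClientInitializeResponse(user) with the same user

user = { userID: 'user-b', customIDs: { employeeID: 'employee-b' } };
bootstrapValues = fetchStatsigValuesFromMyServers(user); // <- server generates values for this exact user

Statsig.initialize("client-key", user, { initializeValues: bootstrapValues }); // same user, no InvalidBootstrap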

Environments

SDKs get their environment configuration from initialization options. If no environment is provided, the SDK defaults to the production environment. If you are wondering why a certain user is not passing an environment-based condition, or what your SDK is initialized with, you can check the user properties in any of the log streams. The statsigEnvironment property will show you the environment the SDK is operating in.
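
In pseudo code matching the earlier snippets, a non-production environment is set via the environment tier in the initialization options (the tier value here is a placeholder):

// Pseudo code: set a non-production environment at initialization

Statsig.initialize("client-key", { userID: 'user-a' }, {
  environment: { tier: 'staging' }, // shows up as statsigEnvironment in the log stream
});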

Maximizing Event Throughput

Note: This is currently only applicable to Python SDK v0.45.0+.

The SDK batches and flushes events in the background to our server. When the volume of incoming events exceeds the SDK's flushing capacity, some events may be dropped after a certain number of retries. To reduce the chances of event loss, you can adjust several settings in the Statsig options:

  • Event Queue Size: Determines how many events are sent in a single batch. Increasing the event queue size allows more events to be flushed at once, but it will consume more memory. It's recommended not to exceed 1800 events per batch, as larger payloads may result in failed requests.

  • Retry Queue Size: Specifies how many batches of events the SDK will hold and retry. By default, the SDK keeps 10 batches in the retry queue. Increasing this limit allows more batches to be retried, but also increases memory usage.

Tuning these options can help manage event volume more effectively and minimize the risk of event drops, as in the sketch below.
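
A rough sketch of this tuning in the Python SDK follows. The exact StatsigOptions parameter names here are assumptions derived from the option names above - confirm them against the StatsigOptions documentation for your SDK version:

from statsig import statsig, StatsigOptions

# Assumed parameter names; verify against StatsigOptions in your SDK version (v0.45.0+).
options = StatsigOptions(
    event_queue_size=1000,  # events per batch; staying under 1800 avoids oversized payloads
    retry_queue_size=20,    # batches held for retry (default is 10); higher uses more memory
)
statsig.initialize("secret-YOUR_KEY", options)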