Debugging Tools

When a user sees an unexpected value, start with the tooling built into the Statsig console and SDKs. The sections below outline where to look and what each signal means.

Diagnostics & Log Stream

Every gate, config, experiment, and layer has both a Setup and Diagnostics tab. The diagnostics view highlights pass/fail rates and bucketing counts so you can spot anomalies over time.
Diagnostics tab showing pass and fail counts
Scroll to the log stream to inspect individual evaluations. Entries arrive within seconds for both production and non-production environments.
Log stream showing recent exposures
Enable Show non-production logs in the diagnostics view to surface checks coming from test keys and development builds.

Logging Levels and Expected Information

Statsig SDKs emit runtime logs across four verbosity levels:
  • Debug – deep tracing meant for onboarding or issue triage.
    • Missing gate/config warnings when a definition is unavailable.
    • Step-by-step messages that follow SDK initialization and evaluations.
  • Info – healthy lifecycle events for day-to-day operation.
    • Initialization summaries with source and SDK version details.
    • Notifications when the configuration store is populated.
  • Warning – recoverable issues that might affect functionality.
    • Non-critical errors automatically handled by the SDK.
    • gRPC reconnection attempts or similar transient network events.
  • Error – critical failures that block expected behavior.
    • Initialization timeouts or outright failures.
    • Fallback notices that indicate gRPC is unavailable or misconfigured.
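The four levels form a strict ordering: a logger configured at one level emits that level and everything more severe. A minimal sketch of that filtering (illustrative only, not the SDK's actual logger implementation):

```javascript
// Minimal sketch of the four verbosity levels as a numeric ordering.
// A logger configured at a threshold emits that level and everything above it.
const LEVELS = { debug: 0, info: 1, warning: 2, error: 3 };

function makeLogger(minLevel) {
  return (level, message) =>
    LEVELS[level] >= LEVELS[minLevel] ? `[${level}] ${message}` : null;
}

const log = makeLogger('warning');
log('info', 'config store populated');    // filtered out (returns null)
log('error', 'initialization timed out'); // emitted as '[error] initialization timed out'
```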

Evaluation Details

Click any exposure in the log stream to drill into the precise rule, user attributes, evaluation reason, SDK metadata, and server timestamps. That detail is often enough to pinpoint why a user received a specific value.
Evaluation details modal with rule match information

Evaluation Reasons

Evaluation reasons answer two questions: where the SDK sourced its definitions and why a particular value was returned. Use them to distinguish between healthy results, overrides, and error states.
Evaluation reason popover showing source and reason

Data Source

| Source | Description | Type | Debugging Suggestions |
| --- | --- | --- | --- |
| Network | Values fetched during initialization from Statsig servers. | Normal | |
| Bootstrap | Supplied during bootstrap (often from a server SDK). | Normal | |
| Prefetch | Loaded via the prefetchUsers API (JS only). | Normal | |
| NetworkNotModified | Network request succeeded but cached values were already current. | Normal | |
| Sticky (legacy) | Persisted from a sticky evaluation. | Normal | |
| LocalOverride (legacy) | Set locally via override APIs. | Normal | |
| Cache | Served from local cache because network values were unavailable. | Warning | Ensure initialize() has completed before checks run. |
| InvalidBootstrap / BootstrapPartialUserMatch | Bootstrap values were generated for a different user profile. | Error | See Fixing InvalidBootstrap. |
| BootstrapStableIDMismatch | StableID differed between bootstrap and runtime user. | Error | See BootstrapStableIDMismatch. |
| Error | A generic evaluation failure that was logged to Statsig. | Error | Ask for help in Slack. |
| Error:NoClient (JS) | No Statsig client was found in context. | Error | Wrap checks in `<StatsigProvider>` or equivalent. |
| Unrecognized (legacy) | The definition was missing from the initialize payload. | Error | Confirm the config exists and the correct API key is used. |
| NoValues | Initialization ran but failed to retrieve values. | Error | Verify client key and network connectivity. |
| Loading | Initialization is still in progress. | Error | Await initializeAsync() or guard checks until ready. |
| Uninitialized | Initialization never started. | Error | Call initializeAsync()/initializeSync() explicitly. |
| UAParserNotLoaded | UA parsing was disabled while targeting relies on it. | Error | Remove UA-based targeting or re-enable parsing. |
| CountryLookupNotLoaded | Country lookups were disabled while targeting relies on them. | Error | Avoid IP-based targeting or re-enable lookups. |
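Several of the error sources above (Cache, Loading, Uninitialized) come down to the same mistake: checking a gate before the client is ready. A hedged sketch of the guard pattern, with `GuardedClient` as a toy stand-in for a real SDK client:

```javascript
// Toy stand-in for an SDK client: gate checks are refused until an async
// initialization completes, mirroring the Loading/Uninitialized reasons above.
class GuardedClient {
  constructor() {
    this.ready = false;
    this.initPromise = null;
  }
  initializeAsync() {
    // Idempotent: repeated calls share one in-flight initialization.
    this.initPromise ??= new Promise((resolve) =>
      setTimeout(() => {
        this.ready = true;
        resolve();
      }, 0),
    );
    return this.initPromise;
  }
  checkGate(name) {
    if (!this.ready) {
      throw new Error(`Uninitialized: await initializeAsync() before checking ${name}`);
    }
    return name === 'my_gate'; // stand-in for a real evaluation
  }
}

// Usage: always await readiness before the first check.
// const client = new GuardedClient();
// await client.initializeAsync();
// client.checkGate('my_gate'); // true
```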

Reason (new SDKs only)

| Reason | Description | Type | Debugging Suggestions |
| --- | --- | --- | --- |
| Recognized | The definition was present and matched the current values. | Normal | |
| Sticky | Result persisted because keepDeviceValue was set. | Normal | |
| LocalOverride | Value came from a developer override. | Normal | |
| Unrecognized | Definition missing from the payload. | Error | Confirm targeting and Target Apps. |
| Filtered | Definition filtered from /initialize because the default value was false. | Error | Check client bootstrapping or targeting. |

Evaluation Times

Evaluation timestamps reveal whether an SDK is serving fresh definitions. An up-to-date LCUT (Last Config Updated Time) indicates the SDK has the latest changes. If LCUT lags far behind, users may be seeing stale values—either because a browser tab stayed open or because a server integration is unable to sync.
| Time Field | Description | SDKs |
| --- | --- | --- |
| LCUT | Time of the most recent config change reflected in the SDK. | JavaScript (incl. React & RN), iOS, Dart |
| receivedAt | Timestamp when the client obtained the current values. | JavaScript (incl. React & RN), iOS, Dart |
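If you surface these fields in your own monitoring, a simple lag threshold is often enough to flag stale clients. A sketch, assuming you have the LCUT as epoch milliseconds (the helper name and threshold are illustrative):

```javascript
// Illustrative staleness check: flag a client whose last config update (LCUT)
// lags the current time by more than a chosen threshold.
function isStale(lcutMs, nowMs, maxLagMs = 60 * 60 * 1000) {
  return nowMs - lcutMs > maxLagMs;
}

const now = Date.now();
isStale(now - 5 * 60 * 1000, now);       // false: config updated five minutes ago
isStale(now - 24 * 60 * 60 * 1000, now); // true: a day-old browser tab is likely stale
```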

Mocking Statsig / Local Mode

Use the following tools to validate code paths without hitting the network:
  • Local mode: Set localMode to true so the SDK skips network calls and returns default values—ideal for tests and offline environments.
  • Override APIs: Call overrideGate and overrideConfig to force specific values for an individual user or globally.
Combine both techniques to exercise each branch of your application safely before shipping.
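The two techniques compose naturally: local mode supplies defaults, and overrides take precedence, per-user before global. A runnable stand-in showing those semantics (real SDK method signatures differ; `LocalModeStub` is not the actual client):

```javascript
// Stand-in for a client in local mode: no network, gates default to false,
// and override APIs force values per-user or globally.
class LocalModeStub {
  constructor() {
    this.gateOverrides = new Map();
  }
  overrideGate(gate, value, userID = '*') {
    this.gateOverrides.set(`${gate}:${userID}`, value);
  }
  checkGate(user, gate) {
    const perUser = `${gate}:${user.userID}`;
    if (this.gateOverrides.has(perUser)) return this.gateOverrides.get(perUser);
    const global = `${gate}:*`;
    if (this.gateOverrides.has(global)) return this.gateOverrides.get(global);
    return false; // local mode: default value, no network fetch
  }
}

const stub = new LocalModeStub();
stub.overrideGate('new_checkout', true);             // global override
stub.overrideGate('new_checkout', false, 'qa-user'); // per-user override wins
stub.checkGate({ userID: 'anyone' }, 'new_checkout');  // true
stub.checkGate({ userID: 'qa-user' }, 'new_checkout'); // false
```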

Client SDK Debugger

Inspect the exact values a client SDK is using by opening the Client SDK Debugger. It exposes the active user object and every gate/config tied to that client.
  • JavaScript / React: Use the Chrome extension.
  • iOS: Call Statsig.openDebugView() in v1.26.0 or later.
  • Android: Call Statsig.openDebugView() in v4.29.0 or later.
Accounts that sign in to the Statsig console via Google SSO are currently unsupported by the Chrome extension.
Screenshots: debugger landing view, gates list, gate details, and experiment details.

FAQs

For SDK-specific edge cases, check each SDK’s FAQ or contact us in the Statsig Slack community.

Invalid Bootstrap

InvalidBootstrap occurs when a client SDK is bootstrapped with values generated for a different user profile. Always ensure the bootstrap user exactly matches the runtime user.
// Server side
const userA = { userID: 'user-a' };
const bootstrapValues = Statsig.getClientInitializeResponse(userA);

// Client side
const bootstrapValues = await fetchStatsigValuesFromMyServers();
const userB = { userID: 'user-b' }; // <-- Different from userA
await Statsig.initialize('client-key', userB, { initializeValues: bootstrapValues });
Even subtle differences count as a mismatch—adding customIDs or other attributes results in a distinct user object.
const userA = { userID: 'user-a' };
const userAExt = { userID: 'user-a', customIDs: { employeeID: 'employee-a' } };
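A defensive pre-flight check can catch such mismatches before the client initializes. `usersMatch` and `normalize` below are illustrative helpers, not SDK APIs:

```javascript
// Compare two user objects structurally, ignoring property order, so the
// bootstrap user and runtime user can be verified identical before initialize.
function normalize(value) {
  if (value === null || typeof value !== 'object') return value;
  return Object.keys(value)
    .sort()
    .reduce((acc, key) => {
      acc[key] = normalize(value[key]);
      return acc;
    }, {});
}

function usersMatch(a, b) {
  return JSON.stringify(normalize(a)) === JSON.stringify(normalize(b));
}

usersMatch({ userID: 'user-a' }, { userID: 'user-a' }); // true
usersMatch(
  { userID: 'user-a' },
  { userID: 'user-a', customIDs: { employeeID: 'employee-a' } },
); // false: the extra customIDs make it a different user
```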

BootstrapStableIDMismatch

BootstrapStableIDMismatch is similar but focuses on stableID. Client SDKs generate a stable ID automatically when one is not provided, so mixing empty user objects between server and client code can cause drift.
// Server side
const userA = {};
const bootstrapValues = Statsig.getClientInitializeResponse(userA);

// Client side
const bootstrapValues = await fetchStatsigValuesFromMyServers();
const userB = { stableID: '12345' }; // <-- Server user lacked a stableID
await Statsig.initialize('client-key', userB, { initializeValues: bootstrapValues });
Even if both sides start with {}, the client-generated stable ID may not match the server’s, leading to the same warning.
const userC = {}; // Client SDK auto-generates a stableID
await Statsig.initialize('client-key', userC, { initializeValues: bootstrapValues });
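One way to avoid the drift is to generate a single stableID, send it to your server, and thread it through both user objects. `withStableID` is an illustrative helper; Statsig SDKs commonly accept a stableID through customIDs, but verify the mechanism for your specific SDK:

```javascript
// Illustrative helper: attach one known stableID to a user object so the
// server-side bootstrap user and the client runtime user cannot drift apart.
function withStableID(user, stableID) {
  return { ...user, customIDs: { ...(user.customIDs ?? {}), stableID } };
}

const stableID = '12345'; // e.g. read on the client, sent to the server
const serverUser = withStableID({}, stableID);                   // used for bootstrap values
const clientUser = withStableID({ userID: 'user-a' }, stableID); // used at initialize
serverUser.customIDs.stableID === clientUser.customIDs.stableID; // true: no mismatch
```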

Environments

SDKs inherit their environment from initialization options. If none is provided, the SDK defaults to production. To verify which environment a user evaluated under, open the diagnostics log stream and inspect the statsigEnvironment property attached to the exposure.
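Setting the tier explicitly at initialization avoids falling back to the production default. Shown as a plain options object; confirm the exact option name against your SDK's reference:

```javascript
// Pass an explicit environment tier in the initialization options; without
// one, evaluations are attributed to production.
const options = {
  environment: { tier: 'staging' }, // e.g. 'development' | 'staging' | 'production'
};

// e.g. await Statsig.initialize('client-key', user, options);
options.environment.tier; // 'staging'
```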

Maximizing Event Throughput

Python SDK v0.45.0+ introduces tunables that help handle high event volume without drops.
The SDK batches events and retries failures in the background. When throughput spikes, adjust these options to reduce the chance of dropped events:
  • eventQueueSize – Number of events flushed per batch. Larger batches increase throughput but use more memory; keep the value under ~3000 to avoid oversized payloads.
  • retryQueueSize – Number of batches kept for retrying. The default is 10; raise it to retain more data at the cost of memory.
Tune both settings to match your traffic profile and keep pipelines healthy.
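An illustrative model of how the two tunables interact, written in plain JavaScript to match the rest of this page's examples (the real Python SDK batches and retries on background threads):

```javascript
// Toy model of event batching: events accumulate until eventQueueSize is
// reached, then the batch is flushed; failed batches are retained up to
// retryQueueSize, after which the oldest pending batch is dropped (data loss).
class EventPipeline {
  constructor({ eventQueueSize = 500, retryQueueSize = 10 } = {}) {
    this.eventQueueSize = eventQueueSize;
    this.retryQueueSize = retryQueueSize;
    this.pending = [];
    this.retryQueue = [];
  }
  logEvent(event) {
    this.pending.push(event);
    if (this.pending.length >= this.eventQueueSize) this.flush();
  }
  flush(sendOk = true) {
    if (this.pending.length === 0) return;
    const batch = this.pending.splice(0);
    if (!sendOk) {
      this.retryQueue.push(batch);
      if (this.retryQueue.length > this.retryQueueSize) {
        this.retryQueue.shift(); // oldest batch is dropped: data loss
      }
    }
  }
}

const pipe = new EventPipeline({ eventQueueSize: 2, retryQueueSize: 1 });
pipe.logEvent('a');
pipe.logEvent('b'); // batch size reached: auto-flush
```

Raising `eventQueueSize` trades memory for fewer, larger flushes; raising `retryQueueSize` trades memory for more retained data when sends fail.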