Debugging
When you’re interacting with complex data sources, it’s common to run into strange artifacts or results that don’t make sense. Statsig proactively monitors for these and lets you know if something looks off. This guide helps with the next steps: identifying what went wrong and reconciling differences between Statsig’s analysis and your own.

Common Issues
Mismatched IDs
In many cases, you may have multiple versions of the same ID across tables. For example, the user `abc123` in one source might be `USER_abc123` in another; or, in some cases, certain loggers will hash ID values for privacy reasons.
This usually results in Statsig triggering an alert that it was unable to join data between sources. We recommend:
- Going to the relevant sources and running the sample queries to check for obvious ID mismatches
- Failing that, finding the job that isn’t able to join sources and copying its SQL. By working through the SQL query, you should be able to pinpoint where the mismatch is occurring
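To make the failure mode concrete, here is a minimal Python sketch (not Statsig code; the IDs and the `USER_` prefix are hypothetical) showing how a prefix and casing difference makes a naive join match nothing, and how normalizing both sides recovers the join:

```python
# Hypothetical IDs: the SDK logs bare lowercase IDs, while the
# warehouse logger prepends "USER_" and uppercases them.
exposures = {"abc123", "def456"}            # IDs from one source
metrics = {"USER_ABC123", "USER_DEF456"}    # same users, different format

# A naive join on the raw values matches zero rows, which is
# what triggers the "unable to join" alert.
naive_join = exposures & metrics
print(len(naive_join))  # 0

def normalize(user_id: str) -> str:
    """Strip the known prefix and lowercase so both sources agree."""
    return user_id.removeprefix("USER_").lower()

# After normalizing both sides, every user joins.
joined = {normalize(u) for u in exposures} & {normalize(u) for u in metrics}
print(len(joined))  # 2
```

In practice you would apply the same normalization (prefix stripping, case folding, or a shared hash) inside the source SQL rather than in application code, so that Statsig sees consistent IDs in both tables.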
Conflicting Filters
Statsig allows a high degree of customization in explore queries and on explore pages. This can lead to scenarios where you add two conflicting filters that, together, will never pass. For example:
- You have a metric with a cohort window from 7 to 13 days, but set up your experiment analysis to run on users’ first 6 days. These two filters return no users.
- You create a count metric filtered to `event='purchase'`, and then create a local metric that filters to `event='checkout'`. This set of filters will also return a null set.
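Both examples reduce to the same logical problem: the conjunction of the filters is unsatisfiable. A small Python sketch (hypothetical events and windows, not Statsig internals) makes this visible:

```python
# Hypothetical event log for the conflicting-filter example.
events = [
    {"user": "u1", "event": "purchase"},
    {"user": "u2", "event": "checkout"},
]

# event='purchase' AND event='checkout' can never both hold for
# a single event row, so the combined filter matches nothing.
matches = [e for e in events if e["event"] == "purchase" and e["event"] == "checkout"]
print(matches)  # []

# The cohort-window example is the same shape: a metric window of
# days 7-13 never overlaps an analysis window of days 0-5.
metric_window = set(range(7, 14))
analysis_window = set(range(0, 6))
print(metric_window & analysis_window)  # set()
```

When results come back empty, checking whether any pair of filters is mutually exclusive like this is often faster than debugging the query itself.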