---
title: Debugging
sidebar_label: Debugging
slug: /sdk/debugging
---

import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';

## Debugging Tools

When debugging why a certain user received a certain value, you have a number of troubleshooting tools at your disposal:

### Diagnostics / Log Stream

Every config in the Statsig ecosystem (meaning Feature Gates, Dynamic Configs, Experiments, and Layers) has a Setup tab and a Diagnostics tab.
The Diagnostics tab is useful for seeing high-level pass/fail/bucketing population sizes over time, via the checks chart at the top.

![Screen Shot 2023-04-27 at 11 17 08 AM](https://user-images.githubusercontent.com/74584483/234955649-7ddef621-397a-44e5-b864-fa0abc29310f.png)

For debugging specific checks, the log stream at the bottom is useful: it shows both production and non-production exposures in near real time.
<small>To see logs from non-production environments, toggle "Show non-production logs" in the upper right corner.</small>

![Screen Shot 2023-04-27 at 11 20 14 AM](https://user-images.githubusercontent.com/74584483/234956317-e65f7fd3-d87d-4616-b905-ee4df097863e.png)

### Evaluation Details

Clicking on a specific exposure shows more details about its evaluation. While the rule is visible in the exposure stream, additional context, known as the evaluation Reason, can be found in the details view.

![Screen Shot 2023-04-27 at 11 21 50 AM](https://user-images.githubusercontent.com/74584483/234956676-440a9ec1-f54d-4b81-b095-6ccf1327ece4.png)

Evaluation reasons are a way to understand why a certain value was returned for a given check. The reasons differ slightly between client and server SDKs,
and newer SDKs use a slightly different format. These reasons are used for debugging and internal logging purposes only and may be updated as needed.

<Tabs>
<TabItem value="client" label="Client SDKs">
    For client SDKs, the reason for the value you are seeing can be:
    - `Network`: fetched at SDK initialization time from the network
    - `Bootstrap`: derived from bootstrapping the client SDK with a set of values
    - `InvalidBootstrap`: the set of values was for a different user than the SDK was initialized with. These are discarded for analysis (see [Fixing InvalidBootstrap](#invalid-bootstrap))
    - `Cache`: loaded from the local storage cache for the current user, and a network result was not available
    - `Prefetch`: fetched from the `prefetchUsers` API (js-client only)
    - `Sticky`: persisted from a previous sticky evaluation
    - `LocalOverride`: from an override set locally on the SDK via an override API
    - `Unrecognized`: the SDK was initialized, but this config did not exist in the set of values
    - `Error`: an unknown error occurred and was logged to Statsig servers
    - `Error:NoClient`: no client was found in StatsigContext, likely due to a call to a Statsig hook outside of the StatsigProvider (js-client only)
    - `NetworkNotModified`: the network response came back, but the cached values were already up to date for this user
    - `NoValues`: the SDK was initialized, but there were no values in the cache for the user (and nothing fetched from the network). Commonly caused by calling `initializeSync` rather than waiting for `initializeAsync`
    - `Loading`: initialize was called, but the SDK is still waiting for the network request to resolve with updated values. You should `await` initialization before issuing checks

    Newer versions of the SDK contain two tags: one for the initialization state of the SDK, and one qualifying the source of that value.

    So in addition to the reasons above, you may have:
    - `Recognized`: the value was recognized in the set of configs the client was operating with
    - `Unrecognized`: the value was not included in the set of configs the client was operating with
    - `Sticky`: the value is from `keepDeviceValue = true` on the method call
    - `LocalOverride`: the value is from a local override set on the sdk

    For example, `Network:Recognized` means the SDK had up-to-date values from a successful initialization network request, and the gate/config/experiment you were checking was defined in the payload.

    If you are not sure why the config was not included in the payload, it could be excluded due to [Target Apps](/sdk-keys/target-apps), or [Client Bootstrapping](/client/concepts/bootstrapping).
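
    If you need to inspect the reason programmatically, newer JavaScript clients expose it on the result object. A minimal sketch, assuming the `@statsig/js-client` package and a hypothetical gate named `my_gate` (accessor names may vary slightly by version):

    ```js
    // Sketch only: reading the evaluation reason with @statsig/js-client.
    import { StatsigClient } from '@statsig/js-client';

    const client = new StatsigClient('client-sdk-key', { userID: 'user-a' });
    await client.initializeAsync();

    // 'my_gate' is a placeholder gate name.
    const gate = client.getFeatureGate('my_gate');
    console.log(gate.details.reason); // e.g. "Network:Recognized"
    ```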
</TabItem>
<TabItem value="server" label="Server SDKS">
    For server SDKs, these reasons for the value you are seeing can be:
    - `Network`: fetched at SDK initialization time from the network
    - `Bootstrap`: derived from bootstrapping the server SDK with a set of values
    - `DataAdapter`: derived from the provided data adapter or data store
    - `LocalOverride`: from an override set locally on the SDK via an override API
    - `Unrecognized`: the SDK was initialized, but this config did not exist in the set of values
    - `Uninitialized`: the SDK was not yet successfully initialized

    In newer server SDKs, the reasons will be a combination of initialization source and evaluation reason:
    **Initialization Source**
    - `Network`: fetched at SDK initialization time from the network
    - `Bootstrap`: derived from bootstrapping the server SDK with a set of values
    - `DataAdapter`: derived from the provided data adapter or data store
    - `Uninitialized`: the SDK was not yet successfully initialized
    - `StatsigNetwork`: a custom proxy/gRPC streaming setup triggered the fallback behavior, so the SDK fell back to the Statsig API.
        * If no network config/overrides were used, the SDK defaults to the Statsig APIs, but the reason will be `Network`.

    **Evaluation Reason**
    - `LocalOverride`: from an override set locally on the SDK via an override API
    - `Unrecognized`: this config did not exist in the set of values
    - `Unsupported`: the SDK does not support this condition type/operator. Usually this means the SDK is out of date and missing new features
    - `error`: an error happened during evaluation
    - no reason: successful evaluation
    
    So `Network` means the SDK was initialized with values from the network and the evaluation was successful. If you see `Network:Unrecognized`, the SDK was initialized with values from the network,
    but the gate/config/experiment you were checking was not included in the payload.
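
    A minimal sketch of reading these details with a newer server SDK; this assumes the `@statsig/statsig-node-core` package, and the exact accessor names are an assumption that may differ by SDK and version:

    ```js
    // Sketch only: accessor names are assumptions and may vary per server SDK.
    import { Statsig, StatsigUser } from '@statsig/statsig-node-core';

    const statsig = new Statsig('server-secret-key');
    await statsig.initialize();

    // 'my_gate' is a placeholder gate name.
    const user = new StatsigUser({ userID: 'user-a' });
    const gate = statsig.getFeatureGate(user, 'my_gate');
    console.log(gate.details.reason); // e.g. "Network:Recognized"
    ```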
</TabItem>
</Tabs>

### Server Details

In addition to these reasons, the most recent versions of server SDKs also surface two timestamps to watch: the time at which the config definitions used to initialize the SDK were generated,
and the time as of which the SDK is evaluating those definitions. When you change a gate/config/experiment, the project time updates
and server SDKs download the new definitions. If you have not changed your project in two hours, and the evaluation time says
the SDK is up to date as of two hours ago, then you are evaluating the most up-to-date definition of that gate/experiment.

In this example, the project was last updated yesterday, and the SDK was initialized with those values. The project has not been updated since,
and the SDK is still using the same set of definitions it fetched from the network. You can also see the SDK type and version associated with a given check.

![Screen Shot 2023-04-27 at 11 15 50 AM](https://user-images.githubusercontent.com/74584483/234955366-0e39adbf-c5a2-4a52-b9bb-ae0df9f15819.png)

## Mocking Statsig / Local Mode

To facilitate testing with Statsig, we provide a few tools to help you test your code without fetching values from the Statsig network:

- Local Mode: By setting the `localMode` parameter to true, the SDK operates without making network calls and returns only default values. This is ideal for dummy or test environments that should remain disconnected from the network.

- Override APIs: Use the `overrideGate` and `overrideConfig` APIs on the global Statsig interface. These allow you to set overrides for gates or configs, either for specific users or for all users by omitting the user ID.

We recommend enabling `localMode` and applying overrides for gates, configs, or experiments to specific values to thoroughly test the various code flows you are developing.

For SDK-specific implementation details, refer to `StatsigOptions` in the respective SDK documentation.
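
For instance, here is a minimal sketch of a test setup using the Node server SDK (`statsig-node`); the gate name `new_checkout_flow` is a placeholder, and the option/override APIs may differ slightly in other SDKs:

```js
// Sketch: offline test setup with statsig-node; APIs may vary by SDK.
const Statsig = require('statsig-node');

// localMode keeps the SDK fully offline; checks return defaults unless overridden.
await Statsig.initialize('server-secret-key', { localMode: true });

// Force the gate on for a specific user ('new_checkout_flow' is a placeholder).
Statsig.overrideGate('new_checkout_flow', true, 'user-a');

const passes = await Statsig.checkGate({ userID: 'user-a' }, 'new_checkout_flow');
console.log(passes); // true
```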

## Client SDK Debugger

It can be useful to inspect the current values that a Client SDK is using internally. For this, we have a Client SDK Debugger.
With this tool, you can see the current User object the SDK is using as well as the gate/config values associated with it.

Javascript/React: Available via a Chrome extension: https://github.com/statsig-io/statsig-sdk-debugger-chrome-extension

:::note
Accounts signing in to the Statsig console via Google SSO are not supported by this debugging tool.
:::

iOS: Available with `Statsig.openDebugView()`. Available in [v1.26.0](https://github.com/statsig-io/ios-sdk/releases/tag/v1.26.0) and [above](https://github.com/statsig-io/ios-sdk/releases).

Android: Available with `Statsig.openDebugView()`. Available in [v4.29.0](https://github.com/statsig-io/android-sdk/releases/tag/4.29.0) and [above](https://github.com/statsig-io/android-sdk/releases/).

|Landing|Gates List|Gate Details|Experiment Details|
|--|--|--|--|
|![client-debugger-landing](https://github.com/statsig-io/statsig-sdk-debugger-chrome-extension/assets/95646168/fa6d7237-eb47-4f09-896c-696cfd5c956c)|![client-debugger-gates-list](https://github.com/statsig-io/statsig-sdk-debugger-chrome-extension/assets/95646168/161d8f35-a9b8-4ff9-b549-e04d04acac8a)|![client-debugger-gate-info](https://github.com/statsig-io/statsig-sdk-debugger-chrome-extension/assets/95646168/ab15e586-5259-4475-8f5c-018b2ab6e8db)|![client-debugger-experiment-details](https://github.com/statsig-io/statsig-sdk-debugger-chrome-extension/assets/95646168/920a6e8a-eb84-4d37-bf77-bb909a575d58)|

## FAQs

For more SDK-specific questions, check out the FAQs on the respective SDK pages. If you have more questions, feel free to reach out directly in our Slack Community.

### Invalid Bootstrap

This can occur when you are [Bootstrapping](https://docs.statsig.com/search?q=Client+SDK+Bootstrapping) a Statsig client SDK with your own prefetched or generated values.
The `InvalidBootstrap` reason signals that the user the client SDK is operating against is not the same as the one
used to generate the bootstrap values.

The following pseudo code highlights how this can occur:

```js
// Server Side

userA = { userID: 'user-a' };
bootstrapValues = Statsig.getClientInitializeResponse(userA);

// Client Side

bootstrapValues = fetchStatsigValuesFromMyServers(); // <- Network request that executes the above logic

userB = { userID: 'user-b' }; // <- This is not the same User
Statsig.initialize("client-key", userB, { initializeValues: bootstrapValues });
```


Users must also match exactly (1:1). The SDK treats a user with even slightly different values as a completely different user.
For example, the following two user objects would also trigger `InvalidBootstrap`, even though they share the same userID.

```js
userA = { userID: 'user-a' };
userAExt = { userID: 'user-a', customIDs: { employeeID: 'employee-a' }};
```
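
To fix this, generate the bootstrap values with the exact same user object the client will pass to `initialize`. A minimal sketch in the same pseudo-code style as above:

```js
// Server Side: generate values for the exact user the client will use

user = { userID: 'user-a', customIDs: { employeeID: 'employee-a' } };
bootstrapValues = Statsig.getClientInitializeResponse(user);

// Client Side: initialize with the same user object, field for field

sameUser = { userID: 'user-a', customIDs: { employeeID: 'employee-a' } };
Statsig.initialize("client-key", sameUser, { initializeValues: bootstrapValues });
```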

### Environments

SDKs get their environment configuration from initialization options. If no environment is provided, the SDK defaults to the production environment.
If you are wondering why a certain user is not passing an environment-based condition, or what environment your SDK is initialized with, you can check the user properties in any of the log streams.
The `statsigEnvironment` property shows the environment the SDK is operating in.
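
For example, a minimal sketch of setting the environment tier at initialization (shown here with a JavaScript client; the option shape is similar across SDKs):

```js
// Sketch: pass the environment tier in StatsigOptions; omit it to default to production.
Statsig.initialize(
  'client-key',
  { userID: 'user-a' },
  { environment: { tier: 'staging' } }
);
```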

### Maximizing Event Throughput

:::note
This is currently only applicable to Python SDK v0.45.0+
:::


The SDK batches and flushes events to our servers in the background. When the volume of incoming events exceeds the SDK's flushing capacity, some events may be dropped after a certain number of retries. To reduce the chances of event loss, you can adjust several settings in the Statsig options:

- Event Queue Size: determines how many events are sent in a single batch.
  - Increasing the event queue size allows more events to be flushed at once, but consumes more memory. It's recommended not to exceed 1800 events per batch, as larger payloads may result in failed requests.
- Retry Queue Size: specifies how many batches of events the SDK will hold and retry.
  - By default, the SDK keeps 10 batches in the retry queue. Increasing this limit allows more batches to be retried, but also increases memory usage.

Tuning these options can help manage event volume more effectively and minimize the risk of event drops.