diff --git a/README.mdx b/README.mdx
index 8cd55f0fe..79f2ad1eb 100644
--- a/README.mdx
+++ b/README.mdx
@@ -1,5 +1,12 @@
+---
+title: Readme
+description:
-A project in Statsig serves as a workspace that contains everything you and your team will create. This includes configs (e.g., feature gates, experiments, dynamic configs, layers), metrics, integrations, and more.
+A project in Statsig serves as a workspace that contains everything you and your team will create. This includes configs (e.g., feature flags, experiments, dynamic configs, layers), metrics, integrations, and more.
When creating a new project, you can set its type to *Open* which would allow anyone in the organization to join freely, or to *Closed* which would allow people to join only by invitation or request.
@@ -140,7 +141,7 @@ Each **team** also has team-level **review settings** that can require reviews f
We believe there is a **"Crawl, Walk, Run"** phase when it comes to configuring your API keys:
diff --git a/access-management/introduction.mdx b/access-management/introduction.mdx
index 013fafe76..0cb4c20f9 100644
--- a/access-management/introduction.mdx
+++ b/access-management/introduction.mdx
@@ -1,6 +1,7 @@
---
title: Workspace Management Overview
sidebarTitle: Overview
+description: Statsig provides a few different solutions for access management as you scale out adoption in your team/org/company.
---
Statsig provides a few different solutions for access management as you scale out adoption in your team/org/company.
diff --git a/access-management/org-admin/experiment_policy.mdx b/access-management/org-admin/experiment_policy.mdx
index ecc44ef42..10d06b8ad 100644
--- a/access-management/org-admin/experiment_policy.mdx
+++ b/access-management/org-admin/experiment_policy.mdx
@@ -1,5 +1,6 @@
---
title: Experiment Policy
+description:
-#### Organization Information
+### Organization Information
Your organization's **Info** sidebar includes the organization's name, SSO configuration, and other settings on access management and security settings.
As the organization's **Admin**, you can enable or disable SSO for all projects in the organization from this one place.
diff --git a/access-management/projects.mdx b/access-management/projects.mdx
index 3a7e249e7..0236f4340 100644
--- a/access-management/projects.mdx
+++ b/access-management/projects.mdx
@@ -1,5 +1,6 @@
---
title: Project Access Management
+description:
-4. Select the success event to optimize for as shown below. You can further specify an optional [event value](/guides/logging-events).
+4. Select the success event to optimize for as shown in the following example. You can further specify an optional [event value](/guides/logging-events).
@@ -31,7 +32,7 @@ There are a few parameters you can specify:
Click "Create" to finalize the setup.
-6. Similar to Feature Gates and Experiments, you can find a code snippet for the exposure check event to add to your code. Don't forget to click "Start" when you're ready to launch your Autotune test.
+6. Similar to Feature Flags and Experiments, you can find a code snippet for the exposure check event to add to your code. Don't forget to click "Start" when you're ready to launch your Autotune test.
diff --git a/autotune/using-bandits.mdx b/autotune/using-bandits.mdx
index e1ac8b766..c77a543e1 100644
--- a/autotune/using-bandits.mdx
+++ b/autotune/using-bandits.mdx
@@ -5,6 +5,7 @@ keywords:
- owner:vm
last_update:
date: 2025-09-18
+description: Both contextual and non-contextual bandits are managed on Statsig's console, or through Statsig's [console API](../console-api/autotunes.mdx) for programmatic creation.
---
Both contextual and non-contextual bandits are managed on Statsig's console, or through Statsig's [console API](../console-api/autotunes.mdx) for programmatic creation. Both bandit types use a common, streamlined API, making it easy to explore either use case without significant changes from using experiments.
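To illustrate that a bandit is read through the standard experiment call, here is a hypothetical sketch; the client stand-in and the parameter names are assumptions for illustration, not the real Statsig SDK surface:

```js
// Hypothetical sketch: a bandit is read exactly like an experiment.
// `fakeClient` is a stand-in, not the real Statsig SDK client.
const fakeClient = {
  getExperiment(name) {
    // In the real SDK this call would evaluate the bandit's current policy.
    const variant = { text: 'Try it now' }; // assumed variant JSON
    return { get: (param, fallback) => variant[param] ?? fallback };
  },
};

const banditConfig = fakeClient.getExperiment('my_bandit');
console.log(banditConfig.get('text', 'default copy')); // Try it now
console.log(banditConfig.get('color', 'blue'));        // blue (fallback)
```

The fallback argument matters here: if the bandit later adds or removes parameters, callers keep working without a code change.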
@@ -17,7 +18,7 @@ There's no additional steps beyond a regular experiment check to use a bandit, t
Check bandits using your standard experiment call. For example, in react this is as simple as configuring a bandit's variant json like:
-```
+```json
{
"text": "
@@ -43,7 +44,7 @@ Here’s a suggested workflow:
2. **Identify Configuration Variables**:
Consider which variables you want to decouple from your app and control via Statsig rather than hardcoding them in your app. These could include:
- - A boolean parameter to control access to a new feature. Even if you're used to using Feature Gates for boolean feature management, start with a boolean parameter instead.
+ - A boolean parameter to control access to a new feature. Even if you're used to using Feature Flags for boolean feature management, start with a boolean parameter instead.
- A string parameter for text resources that you may want to swap or experiment with.
- A number parameter for inputs such as the number of onboarding steps, a list length to truncate, and more.
@@ -51,7 +52,7 @@ Here’s a suggested workflow:
Begin with a static value for each parameter (what you would have hardcoded in the app). Use this static value initially.
4. **Remap When Ready**:
- Once your app is shipped and the feature is ready, remap the parameter to a Feature Gate, Experiment, Dynamic Config, or Layer. This allows you to test and target different variants.
+ Once your app is shipped and the feature is ready, remap the parameter to a Feature Flag, Experiment, Dynamic Config, or Layer. This allows you to test and target different variants.
5. **Update in Real-Time**:
After experimenting, you can update the static value or gate the feature for specific app versions. This can be done in real time across mobile apps that are already released.
diff --git a/client/concepts/persistent_assignment.mdx b/client/concepts/persistent_assignment.mdx
index 2eb062dc1..179b3a376 100644
--- a/client/concepts/persistent_assignment.mdx
+++ b/client/concepts/persistent_assignment.mdx
@@ -1,5 +1,6 @@
---
title: Client Persistent Assignment
+description: Persistent assignment allows you to ensure that a user's variant stays consistent while an experiment is running, regardless of changes to allocation or targeting.
---
Persistent assignment allows you to ensure that a user's variant stays consistent while an experiment is running, regardless of changes to allocation or targeting.
@@ -68,7 +69,7 @@ console.log(experiment.getGroupName()); // 'Control'
const newUser = { userID: "456" };
const newExperiment = client.getExperiment('active_experiment', { userPersistedValues });
console.log(newExperiment.getGroupName()); // Still 'Control'
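The guarantee described above (a stored assignment wins over the live allocation) can be sketched with a hypothetical in-memory store; this illustrates the concept only and is not Statsig's actual SDK interface:

```js
// Hypothetical sketch of the idea behind persistent assignment:
// once a user has been assigned a group, later lookups return the
// stored group even if the live allocation has since changed.
class PersistedAssignments {
  constructor() {
    this.store = new Map(); // stands in for durable storage
  }
  getOrAssign(experiment, userID, assignFn) {
    const key = `${experiment}:${userID}`;
    if (!this.store.has(key)) this.store.set(key, assignFn(userID));
    return this.store.get(key); // persisted value always wins
  }
}

const persisted = new PersistedAssignments();
// First evaluation assigns 'Control' and persists it.
console.log(persisted.getOrAssign('active_experiment', '123', () => 'Control')); // Control
// Even if the live allocation would now return 'Test', the persisted value wins.
console.log(persisted.getOrAssign('active_experiment', '123', () => 'Test'));    // Control
```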
diff --git a/data-warehouse-ingestion/faq.mdx b/data-warehouse-ingestion/faq.mdx
index ae15e414f..7a2fcbf51 100644
--- a/data-warehouse-ingestion/faq.mdx
+++ b/data-warehouse-ingestion/faq.mdx
@@ -4,6 +4,7 @@ keywords:
- owner:tim
last_update:
date: 2025-09-18
+description: Statsig currently accesses data warehouses from both the Statsig console service and Statsig data pipelines.
---
## What IP addresses will Statsig access data warehouses from?
diff --git a/data-warehouse-ingestion/introduction.mdx b/data-warehouse-ingestion/introduction.mdx
index 2f4b5664d..325991b76 100644
--- a/data-warehouse-ingestion/introduction.mdx
+++ b/data-warehouse-ingestion/introduction.mdx
@@ -1,5 +1,6 @@
---
title: Data Warehouse Ingestion
+description:
---
@@ -90,7 +91,7 @@ Enterprise customers can trigger ingestion for `metrics` or `events` using the s
To trigger ingestion, send a post request to the `https://api.statsig.com/v1/mark_data_ready_dwh` endpoint using your statsig API key. An example would be:
-```
+```bash
curl \
--header "statsig-api-key:
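Assuming the endpoint and `statsig-api-key` header named above, the same trigger can be sketched in JavaScript; the request body fields and the key placeholder are illustrative assumptions, not a documented payload:

```js
// Sketch of the ingestion-trigger request. The endpoint URL and header
// name come from the docs above; the body shown is an assumption.
function buildMarkDataReadyRequest(apiKey, payload) {
  return {
    url: 'https://api.statsig.com/v1/mark_data_ready_dwh',
    method: 'POST',
    headers: {
      'statsig-api-key': apiKey, // your Statsig API key
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  };
}

const req = buildMarkDataReadyRequest('server-secret-...', { type: 'metrics' });
console.log(req.url); // https://api.statsig.com/v1/mark_data_ready_dwh
// Send with: fetch(req.url, { method: req.method, headers: req.headers, body: req.body })
```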
@@ -66,7 +67,7 @@ To set up key-pair authentication, first follow the [snowflake documentation](ht
The private key can then be provided here
-
+
### Custom User Privileges
@@ -123,5 +124,5 @@ COMMIT;
After running the script, input the `
+
diff --git a/data-warehouse-ingestion/synapse.mdx b/data-warehouse-ingestion/synapse.mdx
index 5279e5d22..440c4276e 100644
--- a/data-warehouse-ingestion/synapse.mdx
+++ b/data-warehouse-ingestion/synapse.mdx
@@ -4,6 +4,7 @@ keywords:
- owner:tim
last_update:
date: 2025-09-18
+description: To set up connection with Azure Synapse, Statsig needs the following - Workspace SQL Endpoint - Database Name - Admin User Name
---
## Overview
diff --git a/docs/server-core/node/migration-guide/_api_changes.mdx b/docs/server-core/node/migration-guide/_api_changes.mdx
index d6aac135f..6d50b7ff1 100644
--- a/docs/server-core/node/migration-guide/_api_changes.mdx
+++ b/docs/server-core/node/migration-guide/_api_changes.mdx
@@ -1,3 +1,8 @@
+---
+title: API Changes
+description: Feature-by-feature comparison of the Node Core SDK and the legacy Node SDK.
+---
+
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
diff --git a/docs/server-core/node/migration-guide/_statsig_options.mdx b/docs/server-core/node/migration-guide/_statsig_options.mdx
index eb3459125..184cf98fd 100644
--- a/docs/server-core/node/migration-guide/_statsig_options.mdx
+++ b/docs/server-core/node/migration-guide/_statsig_options.mdx
@@ -1,3 +1,8 @@
+---
+title: Statsig Options
+description: Mapping of old StatsigOptions to their new equivalents.
+---
+
| Old Option | New / Notes |
| ------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------- |
| `api` | Deprecated |
diff --git a/docs/server-core/node/migration-guide/_user_creation.mdx b/docs/server-core/node/migration-guide/_user_creation.mdx
index 5c70c5c30..c802dcf6f 100644
--- a/docs/server-core/node/migration-guide/_user_creation.mdx
+++ b/docs/server-core/node/migration-guide/_user_creation.mdx
@@ -1,3 +1,8 @@
+---
+title: User Creation
+description: User creation is still the same in the new Node Core SDK.
+---
+
- - You can target users in a defined [segment](/segments) as shown below
+ - You can target users in a defined [segment](/segments) as shown in the following example
- - You can target users who are eligible for a specific feature gate as shown below; this ensures that the dynamic config is activated only for users who're exposed to the target feature gate
+ - You can target users who are eligible for a specific feature flag as shown in the following example; this ensures that the dynamic config is activated only for users who are exposed to the target feature flag
-
+
- To complete the dynamic config, click on the **Edit** link to open the JSON configuration editor. In the editor, type the configuration parameters and values that your application should receive and click **Confirm**
diff --git a/dynamic-config/enforce-schema.mdx b/dynamic-config/enforce-schema.mdx
index 7e7e7e062..09e975d3a 100644
--- a/dynamic-config/enforce-schema.mdx
+++ b/dynamic-config/enforce-schema.mdx
@@ -10,7 +10,7 @@ Schemas are only enforced when editing dynamic configs through the console or AP
- If your targeting is straightforward, creating it through Inline Targeting works well. (Click "Criteria: Everyone" to get started.)
-- For more advanced targeting (e.g., progressive rollouts) or if you want to maintain targeting criteria when you launch your experiment, it’s better to reference an existing **Feature Gate**.
+- For more advanced targeting (e.g., progressive rollouts) or if you want to maintain targeting criteria when you launch your experiment, it’s better to reference an existing **Feature Flag**.
By default, no targeting criteria are set, so your experiment will include all allocated users within the defined **Layer** or exposed user base.
diff --git a/experiments/ending/make-decision.mdx b/experiments/ending/make-decision.mdx
index 93d689d8e..e650cf7a2 100644
--- a/experiments/ending/make-decision.mdx
+++ b/experiments/ending/make-decision.mdx
@@ -14,15 +14,15 @@ for _all_ your users going forward.
If the experiment happens to use parameters from a layer, the layer's parameters will now take on the shipped group's parameter values
as their defaults. These are the values that _all_ your users will see going forward.
-For example, suppose you have a **Demo Layer** that's configured with a parameter, **a_param**. It's default value is set to _layer_default_ as shown below.
+For example, suppose you have a **Demo Layer** that's configured with a parameter, **a_param**. Its default value is set to _layer_default_ as shown in the following example.
-Say you decide to create an experiment, **Demo Experiment** in **Demo Layer** as shown below.
+Say you decide to create an experiment, **Demo Experiment**, in **Demo Layer** as shown in the following example.
-You set up **Demo Experiment** with two groups: **Control** and **Test**, intending to experiment with new values for the layer-level parameter, **a_param** as shown below.
+You set up **Demo Experiment** with two groups: **Control** and **Test**, intending to experiment with new values for the layer-level parameter, **a_param** as shown in the following example.
diff --git a/experiments/ending/stop-assignments.mdx b/experiments/ending/stop-assignments.mdx
index 4ab1f76ba..390b6374b 100644
--- a/experiments/ending/stop-assignments.mdx
+++ b/experiments/ending/stop-assignments.mdx
@@ -26,7 +26,7 @@ The **Stop Assignment** option must first be enabled in Project Settings to show
## How it Works
-You can stop assignment for an experiment by clicking the Make Decision dropdown as shown below.
+You can stop assignment for an experiment by clicking the Make Decision dropdown as shown in the following example.
diff --git a/experiments/exploring-results/aggregated-impact.mdx b/experiments/exploring-results/aggregated-impact.mdx
index c88d13580..af64aee4a 100644
--- a/experiments/exploring-results/aggregated-impact.mdx
+++ b/experiments/exploring-results/aggregated-impact.mdx
@@ -5,6 +5,7 @@ keywords:
- owner:vm
last_update:
date: 2025-09-18
+description:
diff --git a/experiments/statistical-methods/p-value.mdx b/experiments/statistical-methods/p-value.mdx
index 0c68a09bf..20f6db622 100644
--- a/experiments/statistical-methods/p-value.mdx
+++ b/experiments/statistical-methods/p-value.mdx
@@ -6,6 +6,7 @@ keywords:
- owner:vm
last_update:
date: 2025-09-18
+description:
@@ -63,7 +64,7 @@ There are a few key layers of settings governing templates, namely at the-
Within Experiment and Gate Policies (**Settings** -> **Project Configuration** -> **Feature Management**/ **Experimentation** - **Organization Tab**), you can enforce that a template is used for any new gate/ dynamic config/ experiment creation. Only organization admins can configure this setting. NOTE that you must create at least 1 experiment/ gate template for users to choose if you toggle on this setting, otherwise they will be blocked in creating new configs.
-
+
### Team-level Templates Settings
diff --git a/experiments/types/switchback-tests.mdx b/experiments/types/switchback-tests.mdx
index f83fdab3a..1f92a0bd7 100644
--- a/experiments/types/switchback-tests.mdx
+++ b/experiments/types/switchback-tests.mdx
@@ -13,7 +13,7 @@ Another common use case for switchbacks occurs when applying different variants
Switchback tests are often carried out across multiple "buckets", typically regions or other defined groups that are flipped between test and control treatments over the course of the experiment.
-### Example
+## Example
Let's say you are a rideshare platform and want to test pricing. You initially consider splitting your riders into two groups, one with the higher price and one with a lower price.
@@ -133,7 +133,7 @@ Burn-in/ burn-out periods enable you to define periods at both the beginning and
## Reading Results
-Both Diagnostics and Pulse metric lifts results for Switchback tests will look and feel like Statsig's traditional A/B tests, with a few modifications-
+Both Diagnostics and Pulse metric lift results for Switchback tests will look and feel like Statsig's traditional experiments, with a few modifications:
- **No hourly Pulse-** At the beginning of a traditional A/B/n experiment on Statsig, you can start to see hourly Pulse results flow through within ~10-15 minutes of experiment start. Given in a Switchback you will only see either *all* Test or *all* Control exposures right at experiment start, we have disabled Hourly Pulse until you have a meaningful amount of data. However, in lieu of Hourly Pulse you can still leverage the more real-time **Diagnostics** tab to verify checks are coming in and bucketing as expected.
- **No time-series**- The Daily and Days Since First Exposure time-series are not available for Switchback tests. This is due to the bootstrapping methodology used to obtain the statistics, which relies on pooling all the available days together in order to have enough statistical power.
diff --git a/faq.mdx b/faq.mdx
index 0da86be0a..1d2c54846 100644
--- a/faq.mdx
+++ b/faq.mdx
@@ -37,7 +37,7 @@ if (otherEngine.getExperiment('button_color_test').getGroup() === 'Control') {
// Statsig parameter approach — variants can be changed from the console
const color = statsig.getExperiment('button_color_test').getString('button_color', 'BLACK');
---
diff --git a/feature-flags/conditions.mdx b/feature-flags/conditions.mdx
index f5454fb33..9cae012fe 100644
--- a/feature-flags/conditions.mdx
+++ b/feature-flags/conditions.mdx
@@ -1,27 +1,27 @@
---
-title: Feature Gate rule criteria
-description: Statsig feature gates contain a list of rules that are evaluated in order from top to bottom. This page describes in more detail how these rules are evaluated and lists all currently supported conditions.
+title: Feature Flag rule criteria
+description: Statsig feature flags contain a list of rules that are evaluated in order from top to bottom. This page describes in more detail how these rules are evaluated and lists all currently supported conditions.
---
-Statsig feature gates contain a list of rules that are evaluated in order from top to bottom. The page describes in more detail how these rules are evaluated and lists all currently supported conditions.
+Statsig feature flags contain a list of rules that are evaluated in order from top to bottom. This page describes in more detail how these rules are evaluated and lists all currently supported conditions.
## Rule Evaluation
The rules that you create are evaluated in the order they're listed. For each rule, the **criteria** or **conditions** determine which users _qualify_ for the Pass/Fail treatments. The Pass percentage further determines the percentage of _qualifying_ users that will be exposed to the new feature. The remaining _qualifying_ users will see the feature disabled.
-Suppose you set up your rules as shown below, the following flow chart illustrates how Statsig evaluates these rules.
+Suppose you set up your rules as shown in the following example. The flow chart illustrates how Statsig evaluates these rules.
-
+
Note that as soon as a user qualifies based on the condition in a given rule, Statsig doesn't evaluate subsequent rules for this user. Statsig then picks the qualifying user to be in either the Pass or Fail group of that rule.
-Also note that in the example, the third rule for **Remaining Folks** captures all users who don't qualify for previous two rules. If we were to remove this third rule, then only a subset of your users (users in pools 1 and 2) would qualify for this feature gate and for further analysis, not your total user base.
+Also note that in the example, the third rule for **Remaining Folks** captures all users who don't qualify for the previous two rules. If we were to remove this third rule, then only a subset of your users (users in pools 1 and 2) would qualify for this feature flag and for further analysis, not your total user base.
### Client vs Server SDKs
All of the following conditions work on both client and server SDKs. Client SDKs handle these conditions a bit more automatically for you - if you do not provide a userID, client SDKs rely on an auto-generated "stable identifier" which is persisted to local storage.
@@ -31,7 +31,7 @@ In addition, if you do not automatically set an IP or User Agent (UA), the clien
Evaluations at a given percentage are *stable* with respect to the unitID. For example, if the gate/config/experiment/layer has a unit type of "userID", and userID = 4 passes a condition at a 50% rollout, they will always pass at that 50% rollout. The same applies for `customIDs`, if the unit type of the entity is that `customID`. Want to reset that stability? See "Resalting" below.
### Resalting
-Gate evaluations are stable for a given gate, percentage rollout, and user ID. This is made possible by the salt associated with a feature gate. If you want to reset a gate, triggering a reshuffle of users, you can "resalt" a gate from the dropdown menu in the top right of the feature gate details page.
+Gate evaluations are stable for a given gate, percentage rollout, and user ID. This is made possible by the salt associated with a feature flag. If you want to reset a gate, triggering a reshuffle of users, you can "resalt" a gate from the dropdown menu in the top right of the feature flag details page.
@@ -39,8 +39,8 @@ Gate evaluations are stable for a given gate, percentage rollout, and user ID. T
### Partial Rollouts
While 0% or 100% rollouts for gates are simply "on for users matching this rule"/"off for users matching this rule", each rule allows you to specify a percentage of qualifying users who should pass (see the new feature).
-If you want to get [Pulse Results](/pulse/read-pulse) (metric movements caused by a feature), simply specifying a number between 0% and 100% will create a random allocation of users in Pass/Fail or "test"/"control" groups for a simple A/B test.
-You can use this to validate that a new feature does not regress existing metrics as you roll it out to everyone. Statsig suggests a 2% -> 10% -> 50% -> 100% roll out strategy. Each progressive roll out will generate its own Pulse Results as shown below.
+If you want to get [Pulse Results](/pulse/read-pulse) (metric movements caused by a feature), simply specifying a number between 0% and 100% will create a random allocation of users in Pass/Fail or "test"/"control" groups for a simple experiment.
+You can use this to validate that a new feature does not regress existing metrics as you roll it out to everyone. Statsig suggests a 2% -> 10% -> 50% -> 100% rollout strategy. Each progressive rollout will generate its own Pulse Results as shown in the following example.
@@ -80,7 +80,7 @@ Usage: Percentage rollout on the remainder of users that reach this condition. T
Supported Operators: `None. Percentage based only.`
-Example usage: 50/50 rollout to A/B test a new feature. Or 0% to hide the feature for all people not matching a set of rules. Or 100% to show the feature to the remaining users who did not meet a condition above.
+Example usage: 50/50 rollout to run an experiment on a new feature. Or 0% to hide the feature for all people not matching a set of rules. Or 100% to show the feature to the remaining users who did not meet a condition above.
@@ -310,11 +310,11 @@ Example: Only show a feature to 20 somethings, as marked by the privateAttribute
-- Click in the window and edit the value of the Email property to include the users that you want to target. For example, type jdoe@example.com as shown below. When email domain matches "@example.com", the feature gate check succeeds and the window shows a PASS. Otherwise, it fails and the window shows a FAIL.
+- Click in the window and edit the value of the Email property to include the users that you want to target. For example, type jdoe@example.com as shown in the following example. When email domain matches "@example.com", the feature flag check succeeds and the window shows a PASS. Otherwise, it fails and the window shows a FAIL.
@@ -30,12 +30,12 @@ To validate your feature gate using the built-in Test Gate tool:
## Option 2: Use the Statsig Test App
-To validate your feature gate using the Test App:
+To validate your feature flag using the Test App:
- Log into the Statsig console at https://console.statsig.com
-- On the left-hand navigation panel, select **Feature Gates**
-- Select the feature gate that you want to validate
-- At the bottom of the page, click on **Check Gate in Test App** at the top right of the Test Gate window as shown below by the red arrow; this will open a new browser window with a prototype Javascript client that initializes and calls the Statsig `checkGate` API.
+- On the left-hand navigation panel, select **Feature Flags**
+- Select the feature flag that you want to validate
+- At the bottom of the page, click on **Check Gate in Test App** at the top right of the Test Gate window, as indicated by the red arrow in the following example; this will open a new browser window with a prototype Javascript client that initializes and calls the Statsig `checkGate` API.
@@ -43,19 +43,19 @@ To validate your feature gate using the Test App:
## Option 3: Use the Diagnostics tab
-To validate your feature gate using a live log stream:
+To validate your feature flag using a live log stream:
- Log into the Statsig console at https://console.statsig.com
-- On the left-hand navigation panel, select **Feature Gates**
-- Select the feature gate that you want to validate
+- On the left-hand navigation panel, select **Feature Flags**
+- Select the feature flag that you want to validate
- Click on the **Diagnostics** tab (next to the Setup tab)
-- Scroll down to the **Exposure Stream** panel, where you will see a live stream of gate check events as they happen as shown below
+- Scroll down to the **Exposure Stream** panel, where you will see a live stream of gate check events as they happen as shown in the following example
-- In the **Event Count by Group panel** as shown below, you can also validate that your application is recording events as expected for users who are exposed to the new feature (or not). Specifically, if you've started to record a new event type to test the impact of a new feature, you can also validate that these events are starting to show as more users are exposed to the new feature.
+- In the **Event Count by Group panel** as shown in the following example, you can also validate that your application is recording events as expected for users who are exposed to the new feature (or not). Specifically, if you've started to record a new event type to test the impact of a new feature, you can also validate that these events are starting to show as more users are exposed to the new feature.
diff --git a/guides/abn-tests.mdx b/guides/abn-tests.mdx
index e320b0f6d..3372771a2 100644
--- a/guides/abn-tests.mdx
+++ b/guides/abn-tests.mdx
@@ -1,5 +1,6 @@
---
title: "Run your first A/B test"
+description: In this guide, you will create and implement your first experiment in Statsig from end to end.
---
In this guide, you will create and implement your first experiment in Statsig from end to end. There are many types of experiments you can set up in Statsig, but this guide will walk through the most common one: an A/B test.
diff --git a/guides/cdn-edge-testing.mdx b/guides/cdn-edge-testing.mdx
index 2914225c7..186e23789 100644
--- a/guides/cdn-edge-testing.mdx
+++ b/guides/cdn-edge-testing.mdx
@@ -5,10 +5,11 @@ keywords:
- owner:brock
last_update:
date: 2025-09-18
+description: Most users with heavy web traffic use a CDN to serve resources from cache, in order to minimize hit to their web servers. This has historically made testing challenging.
---
## Background
-Most customers with heavy web traffic use a CDN to serve resources from cache, in order to minimize hit to their web servers. This has historically made testing challenging because you don’t have the luxury of calling the SDK for all requests — but now with the emergence of Edge compute (offered by *most providers*), customers can now run code at their CDN edge, allowing them to assign users to tests and determine which resources to serve in a convenient and performant way.
+Most users with heavy web traffic use a CDN to serve resources from cache, in order to minimize hit to their web servers. This has historically made testing challenging because you don’t have the luxury of calling the SDK for all requests — but now with the emergence of Edge compute (offered by *most providers*), users can now run code at their CDN edge, allowing them to assign users to tests and determine which resources to serve in a convenient and performant way.
This pattern is optimal for testing with cached content without sacrificing cache-hit ratio. For scenarios where running the Statsig SDK at the edge is not possible, the sdk must be implemented on the origin server, or you should consider using a client SDK.
@@ -90,7 +90,7 @@ Then, lets update the return value:
color: 'white',
fontSize: 16,
}
Once again, don't forget to click "Save" to apply these new rules to your config. Your Dynamic Config should now look something like this:
@@ -112,7 +112,7 @@ After adding the SDK to the webpage via the [jsdelivr cdn](https://www.jsdelivr.
```js
const client = new window.Statsig.StatsigClient("
@@ -65,13 +65,13 @@ Once your Statsig account is ready, follow the steps below to create and test-dr
```js
const client = new window.Statsig.StatsigClient('YOUR_SDK_KEY', {});
await client.initializeAsync();
Then call:
```js
client.checkGate('mobile_registration');
You should see false because the current session is not mobile and doesn’t use the employee email domain.
true for the mobile profile.
@@ -103,7 +103,7 @@ Once your Statsig account is ready, follow the steps below to create and test-dr
```js
await client.updateUserAsync({ email: 'teammate@statsig.com' });
client.checkGate('mobile_registration');
The gate passes again thanks to the email rule.
@@ -115,7 +115,7 @@ Once your Statsig account is ready, follow the steps below to create and test-dr
-- Now when you make any configuration changes, say to a feature gate or experiment, you'll be asked to **Submit for Review**; you can add reviewers when you submit the change for review
+- Now when you make any configuration changes, say to a feature flag or experiment, you'll be asked to **Submit for Review**; you can add reviewers when you submit the change for review
-- Reviewers will now see a notification on the Statsig console as shown below. When they click on **View Proposed Changes**, they will see a diff of the *current version* in production and *new version*. Reviewers can now **Approve** or **Reject** the submitted changes.
+- Reviewers will now see a notification on the Statsig console as shown in the following example. When they click on **View Proposed Changes**, they will see a diff of the *current version* in production and *new version*. Reviewers can now **Approve** or **Reject** the submitted changes.
@@ -44,9 +45,9 @@ You can now use these predefined **Teams** when you submit any changes for revie
### Enforcing Team Reviews
-You can restrict who can make changes to your Project by (a) turning on **Reviews Required** for your Project and (b) adding designated **Teams** or **Reviewers** when you create the Feature Gate or Experiment.
+You can restrict who can make changes to your Project by (a) turning on **Reviews Required** for your Project and (b) adding designated **Teams** or **Reviewers** when you create the Feature Flag or Experiment.
-For (a), see section **Turning on Change Reviews for a Project** to turn on project-wide reviews. For (b), as an owner of a Feature Gate or Experiment, you can add designated **Teams** or **Reviewers** at any time as shown below. This ensures that only these designated groups or members can review and approve any subsequent changes. When another member now tries to edit these designated review groups/reviewers, this will require approval from currently designated reviewers.
+For (a), see section **Turning on Change Reviews for a Project** to turn on project-wide reviews. For (b), as an owner of a Feature Flag or Experiment, you can add designated **Teams** or **Reviewers** at any time as shown in the following example. This ensures that only these designated groups or members can review and approve any subsequent changes. When another member now tries to edit these designated review groups/reviewers, this will require approval from currently designated reviewers.
@@ -80,10 +81,10 @@ To enable the Pre-commit Webhook experience:
-Now, when a change is made in the Statsig Console, Statsig hits the customer’s configured webhook with the proposed changes. The change in Statsig will be pending until the customer approves the review via Console API (after their internal checks are complete). Statsig exposes an option for Project Admins (only) to bypass this process and commit the changes directly.
+Now, when a change is made in the Statsig Console, Statsig hits the user’s configured webhook with the proposed changes. The change in Statsig will be pending until the user approves the review via Console API (after their internal checks are complete). Statsig exposes an option for Project Admins (only) to bypass this process and commit the changes directly.
Every payload will have these fields at a minimum:
-```
+```text
review_id (will need to pass this to the change_validation API to accept/reject a change)
submitter (email address)
committer (email address)
diff --git a/guides/shopify-ab-test.mdx b/guides/shopify-ab-test.mdx
index 53a81798d..3a896bd7b 100644
--- a/guides/shopify-ab-test.mdx
+++ b/guides/shopify-ab-test.mdx
@@ -1,10 +1,11 @@
---
-sidebarTitle: A/B Testing on Shopify
-title: A/B Testing on Shopify
+sidebarTitle: Experimenting on Shopify
+title: Experimenting on Shopify
keywords:
- owner:brock
last_update:
date: 2025-09-18
+description: Shopify provides solutions for commerce businesses to build and manage all aspects of their online storefront, including product catalogue, inventory,
---
## Use cases & considerations
@@ -13,7 +14,7 @@ Shopify provides solutions for commerce businesses to build and manage all aspec
For experimenting with the more static aspects of the store experience (static landing pages, visual aspects), we recommend using [Statsig Sidecar](/guides/sidecar-experiments/introduction) to both build your test treatments and to assign users to experiments when they land on your site — all without writing any code.
-Customers looking to experiment on the more dynamic aspects of their online store (ie; your product catalogue, search capabilities) should use [Shopify Headless Commerce](https://www.shopify.com/plus/solutions/headless-commerce) and integrate our [SDKs](/sdks/getting-started) to unlock full control for experimenting within business logic.
+Users looking to experiment on the more dynamic aspects of their online store (i.e., your product catalogue, search capabilities) should use [Shopify Headless Commerce](https://www.shopify.com/plus/solutions/headless-commerce) and integrate our [SDKs](/sdks/getting-started) to unlock full control for experimenting within business logic.
## Using Traditional Shopify + Sidecar for No-code testing
@@ -21,7 +22,7 @@ The traditional Shopify service is a fully-managed platform for businesses that
While Statsig does not have an integration in the Shopify App Store today, you can easily integrate Sidecar for running simple UX experiments on the storefront. The steps below will guide you through the process of setting up Sidecar within the traditional Shopify stack.
-#### Install Sidecar chrome extension
+### Install Sidecar Chrome extension
[Follow this guide](/guides/sidecar-experiments/setup) on installing the Sidecar Chrome extension.
This simple, lightweight Chrome extension will allow non-technical users to build experiments and their treatments. You can easily indicate where the test should run based on URL, and then configure treatments such as content changes, style changes, image swaps, as well as injecting arbitrary JavaScript for more sophisticated use-cases where the visual editor tools cannot accommodate.
@@ -37,7 +38,7 @@ This simple, lightweight Chrome extension will allow non-technical users to buil
- [Locate your Sidecar script tag](/guides/sidecar-experiments/publishing-experiments#step-2-add-script-code) and copy the script tag to your clipboard
- Navigate to the `theme.liquid` file in your Shopify theme editor
-- Paste the Sidecar script tag toward the top of the page `` as shown below.
+- Paste the Sidecar script tag toward the top of the page `` as shown in the following example.
@@ -47,7 +48,7 @@ This simple, lightweight Chrome extension will allow non-technical users to buil
### Configure event tracking
-Shopify's [Custom Pixel framework](https://help.shopify.com/en/manual/promoting-marketing/pixels/custom-pixels) is ideal for tracking customer events to Statsig.
+Shopify's [Custom Pixel framework](https://help.shopify.com/en/manual/promoting-marketing/pixels/custom-pixels) is ideal for tracking user events to Statsig.
The custom pixel framework offers a [wide set of events](https://shopify.dev/docs/api/web-pixels-api/standard-events) you can subscribe to, and, notably, the ability to perform tracking during the checkout experience.
Note that code deployed outside the scope of a custom pixel will not fire during checkout experience as documented [here](https://help.shopify.com/en/manual/promoting-marketing/pixels/overview#pixels-sandbox-environment).
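As a sketch of how this fits together (the payload shape follows Shopify's standard `checkout_completed` pixel event; the mapping helper itself is hypothetical), a custom pixel can translate pixel events into the body expected by Statsig's `log_event` HTTP API:

```javascript
// Hypothetical helper: map a Shopify `checkout_completed` pixel event to the
// JSON body Statsig's log_event endpoint expects. Field paths follow
// Shopify's standard web pixel event shape.
function toStatsigEvent(pixelEvent, user) {
  const total = pixelEvent.data.checkout.totalPrice;
  return {
    events: [{
      user: user,
      eventName: 'checkout_completed',
      value: total.amount,
      metadata: { currency: total.currencyCode },
    }],
  };
}

// Inside the custom pixel sandbox, wiring it up would look roughly like:
//   analytics.subscribe('checkout_completed', (event) => {
//     fetch('https://events.statsigapi.net/v1/log_event', {
//       method: 'POST',
//       headers: { 'Content-Type': 'application/json', 'statsig-api-key': 'client-STATSIG_CLIENT_KEY' },
//       body: JSON.stringify(toStatsigEvent(event, { userID: 'some-user-id' })),
//     });
//   });
```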
@@ -82,7 +83,7 @@ const statsigEvent = async (eventKey, eventValue = null, metadata = {}, userObje
});
await fetch('https://events.statsigapi.net/v1/log_event', {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'statsig-api-key': 'client-STATSIG_CLIENT_KEY' },
body: JSON.stringify({
"events": [{"user": userObject, "eventName": eventKey, "metadata": metadata}]
})
@@ -117,7 +118,7 @@ Using [Shopify Headless](https://shopify.dev/docs/storefronts/headless) gives yo
Whether you're using Shopify's [Hydrogen app](https://shopify.dev/docs/storefronts/headless/hydrogen/fundamentals) and its frameworks or a [custom headless stack](https://shopify.dev/docs/storefronts/headless/bring-your-own-stack), you can integrate Statsig's SDK as needed in order to assign users to experiments. Integrating Statsig in this architecture will follow a similar pattern to our recommendation for [integrating with headless CMS platforms](/guides/cms-integrations).
-#### Integrating data sources for experiment metrics
+### Integrating data sources for experiment metrics
Along with measuring simple clickstream and point-of-sale behavior as [outlined here](/guides/shopify-ab-test/#configure-event-tracking), commerce businesses performing deeper experimentation often want to integrate offline data systems and measure experiments using existing metrics that the broader business uses.
-Commonly, the Data Warehouse is the source of truth for user purchase data and other categories of offline data. This affords customers the ability to define more [bespoke metrics](/statsig-warehouse-native/configuration/metrics#metric-types) using filtering, aggregations and incorporating other datasets in the warehouse for segmenting experiment results.
+Commonly, the Data Warehouse is the source of truth for user purchase data and other categories of offline data. This affords users the ability to define more [bespoke metrics](/statsig-warehouse-native/configuration/metrics#metric-types) using filtering, aggregations and incorporating other datasets in the warehouse for segmenting experiment results.
diff --git a/guides/sidecar-experiments/advanced-configurations-v3.mdx b/guides/sidecar-experiments/advanced-configurations-v3.mdx
index 8423a888b..00135d13d 100644
--- a/guides/sidecar-experiments/advanced-configurations-v3.mdx
+++ b/guides/sidecar-experiments/advanced-configurations-v3.mdx
@@ -5,6 +5,7 @@ keywords:
- owner:brock
last_update:
date: 2025-09-18
+description: This approach allows you to set User Identity and Attributes for Sidecar, enabling you to perform more advanced targeting and results segmentation.
---
## Advanced Targeting & Segmentation
@@ -20,7 +21,7 @@ window.statsigUser = {
isLoggedIn: false
}
}
```
## Accessing the Statsig js client
For accessing the underlying Statsig js client instance, you can call `StatsigSidecar.getStatsigInstance()`.
@@ -32,7 +33,7 @@ window.statsigOptions = {
// example of disabling logging
loggingEnabled: 'disabled'
}
```
## Managing Consent
Before the Sidecar script tag loads, configure these runtime options to disable browser storage and tracking:
@@ -41,12 +42,12 @@ window.statsigOptions = {
loggingEnabled: "disabled",
disableStorage: true
}
```
Later on, after the user gives consent, re-enable storage and tracking:
```js
__STATSIG__.instance().updateRuntimeOptions({loggingEnabled: "browser-only", disableStorage: false});
```
## Persisting stableID across subdomains
Statsig uses `localStorage` as the preferred mechanism for storing the user's stableID. `localStorage` keys do not persist across origin boundaries, including subdomains. For example, a user visiting `https://example.com`, `https://show.example.com` and `https://account.example.com` would be issued three distinct stableIDs.
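One common workaround (a sketch, not an official SDK feature) is to mint the stableID yourself, persist it in a cookie scoped to the apex domain so every subdomain can read it, and pass it to Statsig through the user's `customIDs`:

```javascript
// Sketch (assumed helper, not part of the SDK): resolve one stableID shared
// across subdomains. `store` abstracts the apex-domain cookie and
// `generateId` makes a fresh ID when none exists yet.
function getOrCreateStableID(store, generateId) {
  if (!store.stableID) {
    store.stableID = generateId();
  }
  return store.stableID;
}

// In the browser you would back `store` with a cookie such as:
//   document.cookie = `stableID=${id}; domain=.example.com; path=/`;
// and pass the result when constructing the Statsig user:
//   const user = { customIDs: { stableID: getOrCreateStableID(cookieStore, () => crypto.randomUUID()) } };
```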
diff --git a/guides/sidecar-experiments/advanced-configurations.mdx b/guides/sidecar-experiments/advanced-configurations.mdx
index 8e6850bb6..26b63004f 100644
--- a/guides/sidecar-experiments/advanced-configurations.mdx
+++ b/guides/sidecar-experiments/advanced-configurations.mdx
@@ -28,7 +28,7 @@ window.statsigUser = {
isLoggedIn: false
}
}
```
## Accessing the Statsig js client
For accessing the underlying Statsig js client instance, you can call `StatsigSidecar.getStatsigInstance()`.
@@ -40,7 +40,7 @@ window.statsigOptions = {
// example of disabling logging
loggingEnabled: 'disabled'
}
```
## Managing Consent
Before the Sidecar script tag loads, configure these runtime options to disable browser storage and tracking:
@@ -49,12 +49,12 @@ window.statsigOptions = {
loggingEnabled: "disabled",
disableStorage: true
}
```
Later on, after the user gives consent, re-enable storage and tracking:
```js
__STATSIG__.instance().updateRuntimeOptions({loggingEnabled: "browser-only", disableStorage: false});
```
## Persisting stableID across subdomains
Statsig uses `localStorage` as the preferred mechanism for storing the user's stableID. `localStorage` keys do not persist across origin boundaries, including subdomains. For example, a user visiting `https://example.com`, `https://show.example.com` and `https://account.example.com` would be issued three distinct stableIDs.
diff --git a/guides/sidecar-experiments/measuring-experiments.mdx b/guides/sidecar-experiments/measuring-experiments.mdx
index 37eccf86c..ad5485d99 100644
--- a/guides/sidecar-experiments/measuring-experiments.mdx
+++ b/guides/sidecar-experiments/measuring-experiments.mdx
@@ -21,7 +21,7 @@ StatsigSidecar.logEvent('Order', null, {
units: 3,
unitAvgCost: 18.22
});
```
## Post-Experiment Callback for outbound integrations
You can bind a callback that gets invoked after Sidecar has run experiments (also gets called when there are no experiments),
@@ -47,6 +47,6 @@ window.postExperimentCallback = function(statsigClient, experimentIds) {
}
```
-#### Disabling All Logging
+### Disabling All Logging
To disable all logging to Statsig (both autocapture events and logging who has seen your experiments), append the following query string parameter to the Sidecar script URL: `&autostart=0`. This may be useful for GDPR compliance; you can later re-enable events with `client.updateRuntimeOptions({disableLogging: false})`.
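A minimal consent toggle might look like the following (the `disableLogging` option name comes from the snippet above; the consent plumbing itself is illustrative):

```javascript
// Illustrative consent toggle: compute the runtime options to apply once the
// user accepts or declines tracking. `disableLogging` matches the option
// passed to client.updateRuntimeOptions above.
function loggingOptionsFor(consented) {
  return { disableLogging: !consented };
}

// After the user accepts:
//   client.updateRuntimeOptions(loggingOptionsFor(true));
```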
diff --git a/guides/sidecar-experiments/setup.mdx b/guides/sidecar-experiments/setup.mdx
index d2578a3fe..5e1baef81 100644
--- a/guides/sidecar-experiments/setup.mdx
+++ b/guides/sidecar-experiments/setup.mdx
@@ -53,10 +53,10 @@ Hit "OK" to commit the API Keys.
## Install Sidecar on your website
-Add a single script tag within the `` portion of your website, replacing with your own [Client SDK Key](https://docs.statsig.com/sdk-keys/api-keys/) as shown below.
+Add a single script tag within the `` portion of your website, replacing with your own [Client SDK Key](https://docs.statsig.com/sdk-keys/api-keys/) as shown in the following example.
-```
-
+```text
+
```
@@ -180,7 +181,7 @@ Statsig.initialize(mySdkKey, myUser, {
api: "https://my-statsig-proxy.com/v1",
},
});
```
diff --git a/infrastructure/reliability-faq.mdx b/infrastructure/reliability-faq.mdx
index 1ce20eac9..ca3248626 100644
--- a/infrastructure/reliability-faq.mdx
+++ b/infrastructure/reliability-faq.mdx
@@ -6,6 +6,7 @@ keywords:
- owner:eric
last_update:
date: 2025-09-18
+description: Integrating your product with Statsig means depending on Statsig, and we take reliability seriously. Here are some questions many people have when try
---
Integrating your product with Statsig means depending on Statsig, and we take reliability seriously. Here are some questions many people have when trying to evaluate the risks; please feel free to reach out on Slack if you have questions that are not listed here.
diff --git a/infrastructure/sdk-monitoring.mdx b/infrastructure/sdk-monitoring.mdx
index 82faa8529..5978dbef8 100644
--- a/infrastructure/sdk-monitoring.mdx
+++ b/infrastructure/sdk-monitoring.mdx
@@ -6,6 +6,7 @@ keywords:
- owner:brock
last_update:
date: 2025-09-18
+description:
diff --git a/integrations/data-connectors/google-analytics.mdx b/integrations/data-connectors/google-analytics.mdx
index 4b64fe2e5..a78f303b5 100644
--- a/integrations/data-connectors/google-analytics.mdx
+++ b/integrations/data-connectors/google-analytics.mdx
@@ -4,6 +4,7 @@ keywords:
- owner:brock
last_update:
date: 2025-09-18
+description: Enabling the Google Analytics 4 integration allows Statsig to send logged events and exposures to GA4. This enhances your existing Google Analytics tr
---
Enabling the Google Analytics 4 integration allows Statsig to send logged events and exposures to GA4. This enhances your existing Google Analytics tracking with additional data collected by Statsig's logging SDKs.
diff --git a/integrations/data-connectors/heap.mdx b/integrations/data-connectors/heap.mdx
index cabd56047..1790acc46 100644
--- a/integrations/data-connectors/heap.mdx
+++ b/integrations/data-connectors/heap.mdx
@@ -4,6 +4,7 @@ keywords:
- owner:brock
last_update:
date: 2025-09-18
+description: Enabling the Heap integration allows you to export Statsig events to your configur
---
import EventFormats from "/snippets/integration_event_formats.mdx";
diff --git a/integrations/data-connectors/hightouch.mdx b/integrations/data-connectors/hightouch.mdx
index 97c2d3f92..4da814612 100644
--- a/integrations/data-connectors/hightouch.mdx
+++ b/integrations/data-connectors/hightouch.mdx
@@ -4,6 +4,7 @@ keywords:
- owner:brock
last_update:
date: 2025-09-18
+description: Enabling the Hightouch integration
---
import StatsigEnvironmentFormat from "/snippets/integration_statsig_env_format.mdx";
diff --git a/integrations/data-connectors/mixpanel.mdx b/integrations/data-connectors/mixpanel.mdx
index 0d25d2e3d..a9e484eb1 100644
--- a/integrations/data-connectors/mixpanel.mdx
+++ b/integrations/data-connectors/mixpanel.mdx
@@ -4,6 +4,7 @@ keywords:
- owner:brock
last_update:
date: 2025-09-18
+description: The Mixpanel integration has two functions. Incoming: Statsig can sync your Mixpanel user cohorts with a Statsig ID list se
---
## Overview
@@ -43,7 +44,7 @@ Statsig can ingest user information via a [Mixpanel Cohort Syncing](https://deve
4. In the dialog that appears, paste the URL below, substituting SERVER_SECRET_KEY with a "Server Secret Key" found in [Project Settings](https://console.statsig.com/api_keys), then click Continue.
-```
+```text
https://api.statsig.com/v1/webhooks/mixpanel?statsig-api-key=SERVER_SECRET_KEY
```
diff --git a/integrations/data-connectors/mparticle.mdx b/integrations/data-connectors/mparticle.mdx
index 4ab696649..ec947a781 100644
--- a/integrations/data-connectors/mparticle.mdx
+++ b/integrations/data-connectors/mparticle.mdx
@@ -1,5 +1,6 @@
---
title: mParticle
+description: Enabling the [mParticle](https://www.mparticle.com/) integration for Statsig allows Statsig to receive events from mParticle. You can find all events
---
## Overview
diff --git a/integrations/data-connectors/revenuecat.mdx b/integrations/data-connectors/revenuecat.mdx
index 4ffacf14c..4eb2f22fc 100644
--- a/integrations/data-connectors/revenuecat.mdx
+++ b/integrations/data-connectors/revenuecat.mdx
@@ -1,5 +1,6 @@
---
title: RevenueCat
+description: Enabling the RevenueCat integration allows Statsig to pull billing, subscription, and revenue metrics into your Statsig projects. This provides easy m
---
## Overview
diff --git a/integrations/data-connectors/rudderstack.mdx b/integrations/data-connectors/rudderstack.mdx
index 7f2c1dd8c..b068e92ea 100644
--- a/integrations/data-connectors/rudderstack.mdx
+++ b/integrations/data-connectors/rudderstack.mdx
@@ -4,6 +4,7 @@ keywords:
- owner:brock
last_update:
date: 2025-09-18
+description: Enabling the RudderStack integration for Statsig will allow Stats
---
import StatsigEnvironmentFormat from "/snippets/integration_statsig_env_format.mdx";
@@ -12,7 +13,7 @@ import StatsigEnvironmentFormat from "/snippets/integration_statsig_env_format.m
Enabling the RudderStack integration for Statsig will allow Statsig to pull in your RudderStack events. This allows you to run your experiment analysis on Statsig with all of your existing events from RudderStack without requiring any additional logging.
-When Statsig receives events from RudderStack, these will be visible and aggregated in the [Metrics](/metrics) tab in the Statsig console. These events will automatically be included in your [Pulse](/pulse/read-pulse) results for A/B tests with Statsig's [feature flags](/feature-flags/overview) as well as all your [Experiment](/experiments-plus/monitor) results.
+When Statsig receives events from RudderStack, these will be visible and aggregated in the [Metrics](/metrics) tab in the Statsig console. These events will automatically be included in your [Pulse](/pulse/read-pulse) results for Experiments with Statsig's [feature flags](/feature-flags/overview) as well as all your [Experiment](/experiments-plus/monitor) results.
## Configuring Incoming Events
@@ -21,8 +22,8 @@ To ingest your events from RudderStack,
1. On [app.rudderstack.com](https://app.rudderstack.com/), navigate to "Connections" and click **Add Destination** .
2. Search for “Statsig” in the Destinations Catalog, and select the “Statsig” destination.
3. Give your connection a name and choose which Source should send data to the “Statsig” destination.
4. From the [Statsig dashboard](https://console.statsig.com/api_keys), copy the Statsig "Server Secret Key".
5. Enter the Statsig "Server Secret Key" in the "Statsig" destination settings in RudderStack.
6. On the Statsig [Integration page](https://console.statsig.com/integrations) enable the RudderStack integration.
7. As your RudderStack events flow into Statsig, you'll see a live **Log Stream** in the [Metrics](/metrics) tab in the Statsig console. You can click one of these events to see the details that are logged as part of the event.
@@ -30,11 +31,11 @@ To ingest your events from RudderStack,
-#### User IDs and Custom IDs
+### User IDs and Custom IDs
Statsig automatically detects the `event` and `userID` fields that you log through your RudderStack events. If you're running an experiment with the user as your unit type, this userID should match the user identifier that you log with the Statsig SDK.
-If you're using a [custom ID](/guides/experiment-on-custom-id-types) as the unit type for your experiment, you can provide this identifier using the key `statsigCustomIDs` as part of the RudderStack `properties` field as shown below.
+If you're using a [custom ID](/guides/experiment-on-custom-id-types) as the unit type for your experiment, you can provide this identifier using the key `statsigCustomIDs` as part of the RudderStack `properties` field as shown in the following example.
```bash title="JSON Body"
{
diff --git a/integrations/data-connectors/segment.mdx b/integrations/data-connectors/segment.mdx
index 6f3c1fcf8..7612f9a57 100644
--- a/integrations/data-connectors/segment.mdx
+++ b/integrations/data-connectors/segment.mdx
@@ -4,15 +4,18 @@ keywords:
- owner:brock
last_update:
date: 2025-09-18
+description: Enabling the Segment integration for Statsig will allow Statsig t
---
-
import StatsigEnvironmentFormat from "/snippets/integration_statsig_env_format.mdx";
Enabling the Segment integration for Statsig will allow Statsig to pull in your Segment events. This allows you to run your experiment analysis on Statsig with all of your existing events from Segment without requiring any additional logging.
-When Statsig receives events from Segment, these will be visible and aggregated in the [Metrics](/metrics) tab in the Statsig console. These events will automatically be included in your [Pulse](/pulse/read-pulse) results for A/B tests with Statsig's feature gates as well as all your [Experiment](/experiments-plus/monitor) results.
+When Statsig receives events from Segment, these will be visible and aggregated in the [Metrics](/metrics) tab in the Statsig console. These events will automatically be included in your [Pulse](/pulse/read-pulse) results for Experiments with Statsig's feature flags as well as all your [Experiment](/experiments-plus/monitor) results.
### Supported Segment Event Types
+
+Statsig supports the following Segment event types:
+
- [Track](https://segment.com/docs/connections/spec/track/)
- [Page](https://segment.com/docs/connections/spec/page/)
- [Group](https://segment.com/docs/connections/spec/group/)
@@ -25,8 +28,8 @@ Identify calls are only supported for syncing Segment Engage Audiences with Stat
### Benefits of using the Segment integration
Using the Segment integration has several benefits over other methods of event ingestion:
- * Customers who are ingesting customer data with Segment will be able to quickly populate Statsig with metrics and can typically get up and running within a day
- * Customers will only have to use Statsig's assignment SDKs (gate/experiment allocation), simplifying your code and engineer workflows
+ * Users who are ingesting user data with Segment will be able to quickly populate Statsig with metrics and can typically get up and running within a day
+ * Users will only have to use Statsig's assignment SDKs (gate/experiment allocation), simplifying your code and engineer workflows
 * Additional logging can be done via the [event logging SDKs](/guides/logging-events#logging-events-via-sdks) but will require additional code orchestration and a collection window
* With [event filtering](/integrations/event_filtering) you can control which events are ingested and make billing more predictable
* If you have [Segment Replay](https://segment.com/docs/guides/what-is-replay/), you can send Statsig your historical events for analysis
@@ -82,7 +85,7 @@ If you are unable to connect to Segment via OAuth, you can still manually connec
 - Put your Server Secret Key in the "API Key" field in the Statsig Destination
@@ -129,7 +132,7 @@ Statsig will join incoming user identifiers to whichever [unit of randomization]
Statsig automatically detects the `event` and `userId` fields that are logged through your Segment events (see [`track`](https://segment.com/docs/connections/spec/track/) for an example). If you're running an experiment with the userId as your unit type, this `userID` should match the user identifier that you log with the Statsig SDK.
-If you're using a [custom ID](/guides/experiment-on-custom-id-types) as the unit type for your experiment, you can provide this identifier using the key `statsigCustomIDs` as part of the Segment `properties` field as shown below.
+If you're using a [custom ID](/guides/experiment-on-custom-id-types) as the unit type for your experiment, you can provide this identifier using the key `statsigCustomIDs` as part of the Segment `properties` field as shown in the following example.
```bash title="JSON Body"
{
@@ -138,7 +141,7 @@ If you're using a [custom ID](/guides/experiment-on-custom-id-types) as the unit
"statsigCustomIDs": [ "companyID", "
#### Experimenting on anonymous traffic
-For example, if you're running experiments on anonymous users, you can use Segment's `anonymousId` as the unit of randomization. First, you will want to [add a new customer identifier to Statsig](/guides/experiment-on-custom-id-types#step-1---add-companyid-as-a-new-id-type-in-your-project-settings). In the above example, we call our new custom ID `segmentAnonymousId`. Then, when [initializing](/client/javascript-sdk) the Statsig SDK, if you have access to the Segment `anonymousId` you will want to pass it to Statsig as a custom ID. For example, your Statsig initialization may look like this:
+For example, if you're running experiments on anonymous users, you can use Segment's `anonymousId` as the unit of randomization. First, you will want to [add a new user identifier to Statsig](/guides/experiment-on-custom-id-types#step-1---add-companyid-as-a-new-id-type-in-your-project-settings). In the above example, we call our new custom ID `segmentAnonymousId`. Then, when [initializing](/client/javascript-sdk) the Statsig SDK, if you have access to the Segment `anonymousId` you will want to pass it to Statsig as a custom ID. For example, your Statsig initialization may look like this:
```jsx
import { StatsigClient } from '@statsig/js-client';
@@ -173,7 +176,7 @@ const client = new StatsigClient(sdkKey,
{ environment: { tier: "production" } }
);
```
You can access Segment's `anonymousId` using `analytics.user().anonymousId()` as [outlined in the Segment docs here](https://segment.com/docs/connections/sources/catalog/libraries/website/javascript/identity/).
diff --git a/integrations/data-exports/data_warehouse_exports.mdx b/integrations/data-exports/data_warehouse_exports.mdx
index e0a5b9698..0672f5c3a 100644
--- a/integrations/data-exports/data_warehouse_exports.mdx
+++ b/integrations/data-exports/data_warehouse_exports.mdx
@@ -1,5 +1,6 @@
---
title: Data Warehouse Exports
+description: You can export your data from Statsig to your Data Warehouse with a Data Connection. This lets you send exposures and events directly to your warehous
---
## Introduction
diff --git a/integrations/data-exports/experiment_result_exports.mdx b/integrations/data-exports/experiment_result_exports.mdx
index 2e11809c7..a05c326d1 100644
--- a/integrations/data-exports/experiment_result_exports.mdx
+++ b/integrations/data-exports/experiment_result_exports.mdx
@@ -1,5 +1,6 @@
---
title: Experiment Result Exports
+description: Your data is your data. Statsig makes it easy to export both the reports and the raw data your feature rollouts and experiments generate.
---
## Overview
Your data is your data. Statsig makes it easy to export both the reports and the raw data your feature rollouts and experiments generate.
diff --git a/integrations/data-imports/azure_upload.mdx b/integrations/data-imports/azure_upload.mdx
index d2a126b6f..269130f1a 100644
--- a/integrations/data-imports/azure_upload.mdx
+++ b/integrations/data-imports/azure_upload.mdx
@@ -5,10 +5,11 @@ keywords:
- owner:tim
last_update:
date: 2025-09-18
+description:
diff --git a/metrics/custom-dau.mdx b/metrics/custom-dau.mdx
index 0ebc6a555..747a49793 100644
--- a/metrics/custom-dau.mdx
+++ b/metrics/custom-dau.mdx
@@ -8,7 +8,7 @@ description: "Step-by-step guide to create custom Daily Active User (DAU) metric
This guide will walk you through the steps to create a custom DAU metric. Follow
the instructions carefully to ensure successful creation of your metric.
-### **Step 1: Navigate to the Metrics Catalog**
+## **Step 1: Navigate to the Metrics Catalog**
To begin, go to the "Metrics Catalog" in the left navigation bar and click the "Create" button.
diff --git a/metrics/ingest.mdx b/metrics/ingest.mdx
index f1f24eed4..6ddffaeb6 100644
--- a/metrics/ingest.mdx
+++ b/metrics/ingest.mdx
@@ -7,7 +7,7 @@ Statsig can ingest your precomputed product and business metrics using our data
Statsig does not automatically process these metrics until you mark them as ready, as it's possible you might land data out of order. Once you are finished loading data for a period, you mark the data as ready by hitting the `mark_data_ready` API:
-```
+```bash
curl --location --request POST 'https://api.statsig.com/v1/mark_data_ready' \
--header 'statsig-api-key: {your statsig server secret}' \
--header 'Content-Type: application/json' \
diff --git a/metrics/metric-dimensions.mdx b/metrics/metric-dimensions.mdx
index 0173861b2..e97c5bf3b 100644
--- a/metrics/metric-dimensions.mdx
+++ b/metrics/metric-dimensions.mdx
@@ -13,7 +13,7 @@ Statsig enables you to define up to four custom dimensions for an event (one via
-Providing custom dimensions with logged events allows you to break down the impact on the total **add-to-cart** events by category in Pulse as shown below. This enables you to zero in on the category that's most impacted by your experiment.
+Providing custom dimensions with logged events allows you to break down the impact on the total **add-to-cart** events by category in Pulse as shown in the following example. This enables you to zero in on the category that's most impacted by your experiment.
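As a sketch (the event name, value, and `category` dimension are illustrative), a logged event carrying a custom dimension might be assembled like this before being sent with the Statsig client's `logEvent`:

```javascript
// Illustrative: build an add-to-cart event whose metadata carries a
// `category` dimension, so Pulse can break results down by category.
function buildAddToCartEvent(price, category) {
  return {
    eventName: 'add_to_cart',
    value: price,
    metadata: { category: category },
  };
}

// With the Statsig JS client this would be logged roughly as:
//   const e = buildAddToCartEvent(29.99, 'electronics');
//   client.logEvent(e.eventName, e.value, e.metadata);
```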
diff --git a/product-analytics/alerts-overview.mdx b/product-analytics/alerts-overview.mdx
index b37c10cec..486888bcb 100644
--- a/product-analytics/alerts-overview.mdx
+++ b/product-analytics/alerts-overview.mdx
@@ -1,6 +1,7 @@
---
title: Alerts Overview
sidebarTitle: Alerts Overview
+description: Statsig offers two types of alerts on the platform today: 1. **[Topline Metric Alerts](/product-analytics/alerts/topline_alerts)** - Monitor a metric’
---
# Alerts
diff --git a/product-analytics/drilldown.mdx b/product-analytics/drilldown.mdx
index 0d755c402..715223109 100644
--- a/product-analytics/drilldown.mdx
+++ b/product-analytics/drilldown.mdx
@@ -1,16 +1,16 @@
---
title: 'Metric Drilldown Charts'
sidebarTitle: 'Metric Drilldown'
-description: 'A versatile tool for understanding customer behavior and trends within your product'
+description: 'A versatile tool for understanding user behavior and trends within your product'
---
-The Metric Drilldown chart in Metrics Explorer is a versatile tool for understanding customer behavior and trends within your product. Designed for clarity and depth, it allows you to analyze key metrics and user behavior over time. Importantly, it also allows you to delve several layers deeper into your metrics by filtering to interesting properties or cohorts, as well as the ability to group-by these same properties to compare behaviors between groups.
+The Metric Drilldown chart in Metrics Explorer is a versatile tool for understanding user behavior and trends within your product. Designed for clarity and depth, it allows you to analyze key metrics and user behavior over time. Importantly, it also allows you to delve several layers deeper into your metrics by filtering to interesting properties or cohorts, as well as the ability to group-by these same properties to compare behaviors between groups.
## Use Cases
- **Trend Analysis Over Time**: Gain insights into how specific metrics evolve over time. Visualizing product data in Metrics Explorer allows you to track and compare key performance indicators and user behavior, and helps understand long-term trends and short-term fluctuations in how users engage with your product and your product's performance.
- **Identify interesting cohorts**: Define and explore interesting cohorts by zooming in on users who performed certain events at frequencies you define.
-- **Understand how Targeted Feature Launches, A/B tests, and Experiments affect usage:** Split any metric out by Experiment Group or Feature Gate Group to compare how those metrics perform for different groups. Leverage automatically generated annotations on charts for important decisions such as Feature or Experiment launches to help correlate those decisions with changing trends.
+- **Understand how Targeted Feature Launches, A/B tests, and Experiments affect usage:** Split any metric out by Experiment Group or Feature Flag Group to compare how those metrics perform for different groups. Leverage automatically generated annotations on charts for important decisions such as Feature or Experiment launches to help correlate those decisions with changing trends.
- **Segmentation and Comparison**: Dissect metrics to understand how different user segments or product features perform. This is crucial for identifying which areas are providing value for your users and those which may need more attention or improvement. It is also useful in understanding how different segments interact with your product, and for identifying unique trends or needs within these groups.
- **Filtering**: Focus on specific segments or cohorts that are of particular interest. This filtering capability allows for a more targeted analysis, helping you to understand the behaviors and needs of specific user groups.
- **Statistical Understanding:** Understand how the average, median, or other percentile value (e.g. p99, p95) of a metric changes over time.
@@ -18,9 +18,9 @@ The Metric Drilldown chart in Metrics Explorer is a versatile tool for understan
- **Flexible Visualization Options**: Choose from a range of visualization formats, like line charts, bar charts, horizontal bar charts, and stacked bar charts, to best represent your data. The right visualization can make complex data more understandable and actionable.
- **Event Samples for Debugging**: Quickly access and analyze a metric's underlying sample events, and the granular user-level information attached to the event. This feature is particularly useful for troubleshooting and understanding the root causes of trends or anomalies in your data.
- **Detailed Data Control**: Adjust the granularity of your data analysis, from high-level overviews to detailed breakdowns. Use features like rolling averages to smooth data for more accurate trend analysis and decision-making.
-- **Debug Experiments**: Breakdown your experiment's first exposures to understand how certain properties or groups (feature gates, experiments, holdouts) affect your experiment.
-- **View Sample Ratio Mismatch (SRM)**: See the SRM of your experiments over time and drill down into event and user metadata fields to understand how certain properties (country, browser, etc.) or groups (feature gates, experiments, holdouts) can affect your experiment SRM.
-- **Debug Feature Gates**: Breakdown your feature gate's first exposures per rule to understand how certain properties or groups (feature gates, experiments, holdouts) affect your feature gate.
+- **Debug Experiments**: Break down your experiment's first exposures to understand how certain properties or groups (feature flags, experiments, holdouts) affect your experiment.
+- **View Sample Ratio Mismatch (SRM)**: See the SRM of your experiments over time and drill down into event and user metadata fields to understand how certain properties (country, browser, etc.) or groups (feature flags, experiments, holdouts) can affect your experiment SRM.
+- **Debug Feature Flags**: Break down your feature flag's first exposures per rule to understand how certain properties or groups (feature flags, experiments, holdouts) affect your feature flag.
# Using the Metric Drilldown Chart
@@ -72,7 +72,7 @@ When selecting an event, the total number of times the event occurred (Count) on
### Exposures
Selecting an experiment or gate exposure plots its first exposures over your selected date-range. First exposures are the first time a unique id (set on the experiment or gate) was exposed to each of your experiment groups or each of your gate's rules.
-### Understanding First Exposures in Feature Gates
+### Understanding First Exposures in Feature Flags
When a gate is checked for a user, an exposure is created for the rule whose conditions they've met. If the user is exposed to multiple different rules at different times, the first exposure from each rule is kept. We recommend grouping by rule to see each rule's exposures separately.
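The first-exposure bookkeeping described in this section can be sketched as "keep only the earliest exposure per (user, rule) pair" (illustrative Python, not Statsig's implementation):

```python
def first_exposures(exposures):
    # exposures: list of (timestamp, user_id, rule) tuples.
    # Keep only the earliest exposure for each (user_id, rule) pair,
    # mirroring how first exposures are counted separately per rule.
    seen = {}
    for ts, user, rule in sorted(exposures):
        seen.setdefault((user, rule), (ts, user, rule))
    return sorted(seen.values())

events = [
    (3, "u1", "rule_a"),
    (1, "u1", "rule_a"),   # earliest for (u1, rule_a), so this one is kept
    (2, "u1", "rule_b"),   # a different rule, kept separately
]
kept = first_exposures(events)
```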
@@ -117,7 +117,7 @@ In addition to plotting metrics, you may want to drill into your metrics to iden
Leveraging a Group-By makes it easy to disaggregate plotted metrics and events by a selected property or group. Doing so allows you to compare how an action or user behavior may correlate with a specific property. Adding a Group-By will split the plotted metric(s) into several plots. By default we only show the top ten groups by value on the chart, but you can select more groups. You can select up to 50 groups when the chart is set to daily granularity.
-A metric can be grouped-by event properties, user profile properties, experiment group, or feature gate group.
+A metric can be grouped-by event properties, user profile properties, experiment group, or feature flag group.
Group-By limits can be added by first adding a group-by, then moving to the summary table below the charts, and clicking the "Top X series" dropdown button. From there you can select how many groups you want to see at once. You can use this to further drill down on your top X categories (up to 50). This feature is available for line charts, stacked-line charts, bar charts, and stacked-bar charts.
@@ -127,9 +127,9 @@ Group-By limits can be added by first adding a group-by, then moving to the summ
When you have a Group-By applied, you can view the results as raw numbers, or as a percentage.
-**Feature Gate and Experiment Groups**
+**Feature Flag and Experiment Groups**
-At Statsig we believe in the power of experimentation. To that end, you can also select one of your Feature Gate or Experiments in order to split out a metric by the different groups in the selected test.
+At Statsig we believe in the power of experimentation. To that end, you can also select one of your Feature Flags or Experiments in order to split out a metric by the different groups in the selected test.
**Adding a Group-By**
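The "top X series" behavior described above (top ten groups by default, up to 50) can be sketched as a simple aggregation (illustrative Python, not the product's query engine):

```python
from collections import Counter

def top_groups(rows, n=10):
    # rows: (group, value) pairs. Return the top-n groups by total value,
    # mirroring the default top-ten cap and the configurable "Top X series"
    # limit described above.
    totals = Counter()
    for group, value in rows:
        totals[group] += value
    return [g for g, _ in totals.most_common(n)]

rows = [("US", 5), ("DE", 3), ("US", 2), ("FR", 1)]
```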
diff --git a/release-pipeline/actions.mdx b/release-pipeline/actions.mdx
index 1f3b71c08..a0f3b0847 100644
--- a/release-pipeline/actions.mdx
+++ b/release-pipeline/actions.mdx
@@ -11,7 +11,7 @@ Once a Release Pipeline is triggered, you can control its progression using the
-### Approve
+## Approve
**What it does:**
Kick off a phase that requires a manual approval before rollout begins. This is useful when human verification is required before changes move forward.
diff --git a/sdks/api-keys.mdx b/sdks/api-keys.mdx
index c8542bbb4..d6a90e156 100644
--- a/sdks/api-keys.mdx
+++ b/sdks/api-keys.mdx
@@ -1,5 +1,6 @@
---
title: API Keys
+description: "There are three main types of API keys: the Client API Key is intended for getting configuration and logging events on the client side."
---
## Overview
diff --git a/sdks/array-operators.mdx b/sdks/array-operators.mdx
index f0b6f3ad9..491981312 100644
--- a/sdks/array-operators.mdx
+++ b/sdks/array-operators.mdx
@@ -1,5 +1,6 @@
---
title: Array Operators
+description: Array operators allow for checking if an array custom field does or does not contain specific values.
---
Array operators allow for checking if an array custom field does or does not contain specific values
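The contains / does-not-contain semantics can be sketched as follows (an illustrative model of the check, not the SDKs' evaluator code):

```python
def array_contains_any(field, values):
    # "contains any of" semantics for an array custom field:
    # pass if at least one target value appears in the array.
    return any(v in field for v in values)

def array_contains_none(field, values):
    # "does not contain any of" semantics: the negation of the above.
    return not array_contains_any(field, values)
```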
diff --git a/sdks/client-vs-server.mdx b/sdks/client-vs-server.mdx
index fb4954075..b039ef206 100644
--- a/sdks/client-vs-server.mdx
+++ b/sdks/client-vs-server.mdx
@@ -2,6 +2,7 @@
title: Client vs Server SDKs
slug: /sdks/client-vs-server
+description: Statsig offers client and server SDKs to enable experimentation and feature management across different parts of your application.
---
Statsig offers client and server SDKs to enable experimentation and feature management across different parts of your application. This document outlines when you should choose each.
diff --git a/sdks/debugging.mdx b/sdks/debugging.mdx
index c2727945d..2f8991342 100644
--- a/sdks/debugging.mdx
+++ b/sdks/debugging.mdx
@@ -187,14 +187,14 @@ const bootstrapValues = Statsig.getClientInitializeResponse(userA);
const bootstrapValues = await fetchStatsigValuesFromMyServers();
const userB = { userID: 'user-b' }; // <-- Different from userA
await Statsig.initialize('client-key', userB, { initializeValues: bootstrapValues });
-```
+```
Even subtle differences count as a mismatch—adding `customIDs` or other attributes results in a distinct user object.
```js
const userA = { userID: 'user-a' };
const userAExt = { userID: 'user-a', customIDs: { employeeID: 'employee-a' } };
-```
+```
### BootstrapStableIDMismatch
@@ -209,7 +209,7 @@ const bootstrapValues = Statsig.getClientInitializeResponse(userA);
const bootstrapValues = await fetchStatsigValuesFromMyServers();
const userB = { stableID: '12345' }; // <-- Server user lacked a stableID
await Statsig.initialize('client-key', userB, { initializeValues: bootstrapValues });
-```
+```
Even if both sides start with `{}`, the client-generated stable ID may not match the server’s, leading to the same warning.
diff --git a/sdks/getting-started.mdx b/sdks/getting-started.mdx
index b85c20eda..deb2b9da2 100644
--- a/sdks/getting-started.mdx
+++ b/sdks/getting-started.mdx
@@ -1,5 +1,6 @@
---
title: "SDK Overview"
+description: Statsig's SDKs are the in-code tool you'll use to show experiment variants and flag your features.
---
import ListOfSDKs from '/snippets/sdks/list-of-sdks.mdx'
diff --git a/sdks/how-evaluation-works.mdx b/sdks/how-evaluation-works.mdx
index 55ff77870..143282aeb 100644
--- a/sdks/how-evaluation-works.mdx
+++ b/sdks/how-evaluation-works.mdx
@@ -1,22 +1,23 @@
---
title: "How Evaluation Works"
+description: The essential function of the Statsig SDKs is reliable, consistent, and incredibly performant allocation of users to the correct bucket in your experiment or feature flag.
---
## Evaluation's importance
-The essential function of the Statsig SDKs is reliable, consistent, incredibly performant allocation of users to the correct bucket in your experiment or feature gate. Understanding how we accomplish this can help you answer questions like:
+The essential function of the Statsig SDKs is reliable, consistent, and incredibly performant allocation of users to the correct bucket in your experiment or feature flag. Understanding how we accomplish this can help you answer questions like:
- Why do I have to pass every user attribute, every time?
- Why do I have to wait for initialization to complete?
- When do you decide each user's bucket?
## How Evaluation Works
-Evaluation in Statsig is deterministic. Given the same user object and the same state of the experiment or feature gate, Statsig always returns the same result, even when evaluated on different platforms (client or server). Here's how it works:
+Evaluation in Statsig is deterministic. Given the same user object and the same state of the experiment or feature flag, Statsig always returns the same result, even when evaluated on different platforms (client or server). Here's how it works:
-1. **Salt Creation**: Each experiment or feature gate rule generates a unique salt.
+1. **Salt Creation**: Each experiment or feature flag rule generates a unique salt.
2. **Hashing**: The user identifier (e.g., userId, organizationId) is passed through a SHA256 hashing function, combined with the salt, which produces a large integer.
3. **Bucket Assignment**: The large integer is then subjected to a modulus operation with 10000 (or 1000 for layers), assigning the user to a bucket.
4. **Bucket Determination**: The result defines the specific bucket out of 10000 (or 1000 for layers) where the user is placed.
-This process ensures a randomized but deterministic bucketing of users across different experiments or feature gates. The unique salt per-experiment or feature gate rule ensures that the same user can be assigned to different buckets in different experiments. This also means that if you rollout a feature gate rule to 50% - then back to 0% - then back to 50%, the same 50% of users will be re-exposed, **so long as you reuse the same rule** - and not create a new one. See [here](/faq/#when-i-change-the-rollout-percentage-of-a-rule-on-a-feature-gate-will-users-who-passed-continue-to-pass).
+This process ensures a randomized but deterministic bucketing of users across different experiments or feature flags. The unique salt per experiment or feature flag rule ensures that the same user can be assigned to different buckets in different experiments. This also means that if you roll out a feature flag rule to 50% - then back to 0% - then back to 50%, the same 50% of users will be re-exposed, **so long as you reuse the same rule** - and not create a new one. See [here](/faq/#when-i-change-the-rollout-percentage-of-a-rule-on-a-feature-gate-will-users-who-passed-continue-to-pass).
For more details, check our open-source SDKs [here](https://github.com/statsig-io/node-js-server-sdk/blob/main/src/Evaluator.ts).
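The four numbered steps above can be sketched in a few lines (illustrative Python; the exact byte layout Statsig hashes is an implementation detail of the open-source SDKs linked above):

```python
import hashlib

def bucket(salt, unit_id, buckets=10000):
    # Deterministic bucketing: SHA256 over the salt plus the unit id
    # yields a large integer, and a modulus (10000 here, 1000 for
    # layers) assigns the bucket. Same inputs always give the same bucket.
    digest = hashlib.sha256(f"{salt}.{unit_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % buckets

b1 = bucket("exp_salt", "user-42")
b2 = bucket("exp_salt", "user-42")  # identical inputs, identical bucket
```

Because each experiment or rule has its own salt, the same user generally lands in unrelated buckets across different experiments.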
@@ -31,7 +32,7 @@ All of the above logic holds true for both SDKs. In both, the user's assignment
* **Performant Evaluation:** no evaluations require a network request, and we focus on evaluation performance, meaning that checks take \<1ms after evaluation.
* **The SDKs don't "remember" user attributes, or previous evaluations:** we rely on you to pass all of the necessary user attributes consistently - and we promise if you do, we'll provide the same value.
-A common assumption is that Statsig tracks of a list of all ids and what group they were assigned to for experiments/gates. While our data pipelines track users exposed to each variant to compute experiment results, we do not cache previous evaluations and maintain distributed evaluation state across client and server SDKs. That won't scale - we've even talked to customers doing this in the past, and were paying more for Redis to maintain that state than they ended up paying for Statsig.
+A common assumption is that Statsig keeps a list of all IDs and the group they were assigned to for experiments/gates. While our data pipelines track users exposed to each variant to compute experiment results, we do not cache previous evaluations or maintain distributed evaluation state across client and server SDKs. That wouldn't scale - we've even talked to users who did this in the past and were paying more for Redis to maintain that state than they ended up paying for Statsig.
* **Server SDKs can handle multiple users:** because they hold the ruleset in memory, Server SDKs can evaluate any user. Without a network request. This means you'll have to pass a user object into the getExperiment method on Server SDKs, whereas on client SDKs you pass it into initialize().
* **We ensure each user receives the same bucket:** our ID-based hashing assignment guarantees consistency. If you make a change in console that could affect user bucketing on an experiment, we'll provide warning.
diff --git a/sdks/identify-users.mdx b/sdks/identify-users.mdx
index fd0ece770..ffcdc4356 100644
--- a/sdks/identify-users.mdx
+++ b/sdks/identify-users.mdx
@@ -1,5 +1,6 @@
---
title: "Identify Users"
+description: "When you run an experiment, roll out a feature, or log events, Statsig needs to know who the user is."
---
## Why identify users?
@@ -22,7 +23,7 @@ Start by defining a basic user object:
"userID": "u_123", // required for most setups
"email": "user@example.com" // optional
}
-```
+```
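A hedged sketch of validating a user object like the one above before handing it to an SDK (`normalize_user` is a hypothetical helper, not a Statsig API; the requirement modeled is that a `userID` or `customIDs` is needed for stable bucketing):

```python
def normalize_user(user):
    # Minimal validation sketch: consistent assignment needs a stable
    # identifier, so require a userID or customIDs; email stays optional.
    if not user.get("userID") and not user.get("customIDs"):
        raise ValueError("user needs a userID or customIDs for stable bucketing")
    return {"userID": user.get("userID"),
            **{k: v for k, v in user.items() if k != "userID"}}

u = normalize_user({"userID": "u_123", "email": "user@example.com"})
```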
@@ -53,7 +54,7 @@ export function App() {
);
}
-```
+```
@@ -71,7 +72,7 @@ return Passing ({gate.details.reason})
}
);
-```
+```
### Dynamic Config Hooks
@@ -48,7 +53,7 @@ return (Another Value: {getDynamicConfig('my_dynamic_config').get('a_bool', false)}
);
-```
+```
### Experiment Hooks
@@ -67,7 +72,7 @@ return (Value: {experiment.get('a_value', 'fallback_value')}
);
-```
+```
### Layer Hooks
@@ -85,7 +90,7 @@ return (Value: {layer.get('a_value', 'fallback_value')}
);
-```
+```
### Parameter Store Hooks
@@ -104,7 +109,7 @@ function MyComponent() { return