# ask-ai
@brash-pillow-71627 if I redirect users through a script inside a feature flag to the same URL where that feature flag is, will users be in a redirect loop?
A message from kapa.ai
@brash-pillow-71627 can you give me more information about sticky bucketing and how to implement it in react?
A message from kapa.ai
Sure, Sticky Bucketing is a feature in GrowthBook that ensures users continue to see the same variation when you make changes to a running experiment. This is useful in scenarios such as managing an experiment rollout, fixing a bug in your experiment, or ensuring a consistent experience as users log in and move between devices [source](https://docs.growthbook.io/app/sticky-bucketing).

To implement Sticky Bucketing in React, you need to use a compatible SDK version (React `0.22.0` or later) and pass a Sticky Bucketing Service into your SDK implementation. You can use one of the built-in Sticky Bucketing Services or implement your own. The built-in services include `LocalStorageStickyBucketService` and `BrowserCookieStickyBucketService` [source](https://docs.growthbook.io/lib/react#sticky-bucketing). Here's an example of how to use the `BrowserCookieStickyBucketService`:

```javascript
import { GrowthBook, BrowserCookieStickyBucketService } from "@growthbook/growthbook";
import Cookies from "js-cookie";

const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123",
  stickyBucketService: new BrowserCookieStickyBucketService({
    jsCookie: Cookies,
  }),
  // ...
});
```

[source](https://docs.growthbook.io/lib/react#sticky-bucketing)

In this example, `BrowserCookieStickyBucketService` provides simple bucket persistence using browser cookies. It requires the `js-cookie` library (which can be polyfilled). Cookie attributes can also be configured: the default cookie expiry is 180 days, and you can override it by passing `expires: {days}` into the constructor's `cookieAttributes`. Finally, remember to update your SDK Connections in the GrowthBook app so they are configured correctly [source](https://docs.growthbook.io/app/sticky-bucketing#setting-up-sticky-bucketing).
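If neither built-in service fits, the docs say you can implement your own. As a rough sketch only (not the SDK's actual code): a custom service mainly needs to load and save an assignments document per user identity. The class below is a hypothetical in-memory version; the real `StickyBucketService` interface in `@growthbook/growthbook` is asynchronous, but this sketch is kept synchronous for brevity.

```javascript
// Hypothetical in-memory sticky bucket service, sketching the shape of a
// custom service: one assignments document per user identity.
class InMemoryStickyBucketService {
  constructor() {
    this.store = new Map();
  }

  // Key each document by attribute name + value (e.g. "id||u1"),
  // so every user identity gets its own assignment record.
  key(attributeName, attributeValue) {
    return `${attributeName}||${attributeValue}`;
  }

  // Load the saved assignment doc for this identity, or null if none exists.
  getAssignments(attributeName, attributeValue) {
    return this.store.get(this.key(attributeName, attributeValue)) || null;
  }

  // Persist the whole doc: { attributeName, attributeValue, assignments }.
  saveAssignments(doc) {
    this.store.set(this.key(doc.attributeName, doc.attributeValue), doc);
  }
}
```

A server-side version could swap the `Map` for Redis or a database table while keeping the same two methods.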
@brash-pillow-71627 do you have an example for next.js for this? stickyBucketService: new BrowserCookieStickyBucketService({ jsCookie: Cookies, }),
A message from kapa.ai
Yes, here's an example of how to use the `BrowserCookieStickyBucketService` in a Next.js application:

```javascript
import { GrowthBook, BrowserCookieStickyBucketService } from "@growthbook/growthbook";
import Cookies from "js-cookie";

export const growthbookClient = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123",
  stickyBucketService: new BrowserCookieStickyBucketService({
    jsCookie: Cookies,
  }),
  // ...
});
```

[source](https://linen.growthbook.io/t/18659192/hello-u01t6hchd0a-and-u05ryuxpvnu-i-attended-the-office-hour)

In this example, `BrowserCookieStickyBucketService` provides simple bucket persistence using browser cookies. It requires the `js-cookie` library (which can be polyfilled). Cookie attributes can also be configured: the default cookie expiry is 180 days, and you can override it by passing `expires: {days}` into the constructor's `cookieAttributes`. Finally, remember to update your SDK Connections in the GrowthBook app so they are configured correctly [source](https://docs.growthbook.io/app/sticky-bucketing#setting-up-sticky-bucketing).
@brash-pillow-71627 I'm storing the ID in a cookie and I'm using sticky bucketing. If I manually change the ID and delete the cookie, eventually I should be able to land on a different variation, right? Or may it be difficult because of sticky bucketing?
A message from kapa.ai
Yes, if you manually change the ID and delete the cookie, you should be able to land on a different variation. Sticky bucketing in GrowthBook is designed to ensure that a user sees the same variation of an experiment even when their session, login status, or experiment parameters change. It does this by persisting the user's experiment variant in a data store, such as a cookie [source](https://docs.growthbook.io/lib/react#sticky-bucketing). However, if you manually change the ID that is used for bucketing and delete the cookie where the variant was stored, the next time the user is bucketed into the experiment, they will be treated as a new user because there's no existing record of them in the data store. As a result, they could potentially be bucketed into a different variation. Remember that the actual variation the user lands on will still depend on the experiment's configuration, such as the distribution of traffic between variations. Also, keep in mind that manually changing IDs and deleting cookies could lead to inconsistent user experiences and could potentially skew your experiment results.
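The re-bucketing behavior described above can be illustrated with a toy model. To be clear, this is not GrowthBook's real hashing algorithm: `hashToFloat`, `stickyStore`, and `getVariation` are all invented for the demo. The idea is that the sticky store (the cookie) short-circuits assignment, while a fresh ID hashes from scratch.

```javascript
// Illustrative stand-in for deterministic assignment: FNV-1a hash of
// (userId + experimentKey), mapped to a float in [0, 1).
function hashToFloat(seed) {
  let h = 0x811c9dc5;
  for (let i = 0; i < seed.length; i++) {
    h ^= seed.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h / 0x100000000;
}

// Stands in for the cookie where sticky assignments are persisted.
const stickyStore = new Map();

function getVariation(userId, experimentKey) {
  const key = `${userId}||${experimentKey}`;
  // Sticky bucketing: if an assignment was saved, reuse it unconditionally.
  if (stickyStore.has(key)) return stickyStore.get(key);
  // Otherwise hash deterministically into one of two buckets and persist it.
  const variation = hashToFloat(userId + experimentKey) < 0.5 ? 0 : 1;
  stickyStore.set(key, variation);
  return variation;
}
```

With the same ID, the saved bucket is always reused; clear the store (delete the cookie) and switch IDs, and the hash is recomputed from scratch, so the user may land in either bucket.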
@brash-pillow-71627 if I have 3 variations, how should I set up the feature flag script?
A message from kapa.ai
To set up a feature flag with three variations in GrowthBook, you would first create a new feature with the 3 variations (or 2 variations and 1 control). Then, you would set up the experiment in the GrowthBook UI, specifying the percentage of users you want to see each variation. Here's an example of how you might use a feature flag with three variations in your code:
```javascript
const result = growthbook.feature("my-feature");

if (result.value === "1") {
  // Variation 1
} else if (result.value === "2") {
  // Variation 2
} else {
  // Control
}
```
In this example, `"my-feature"` is the name of your feature flag. The `growthbook.feature()` function returns an object with a `value` property that indicates which variation the user should see. You can then use this value to determine which code to execute for each variation. Remember to replace `"my-feature"`, `"1"`, `"2"`, and the code inside each condition with your actual feature flag name, variation values, and variation code [source](https://docs.growthbook.io/faq#how-do-i-run-an-ab-test-in-growthbook).
@brash-pillow-71627 How do I remove a linked feature flag of an A/B test that I haven't launched yet?
A message from kapa.ai
To remove a linked feature flag from an A/B test that you haven't launched yet, you would typically do this from the experiment setup page in the GrowthBook UI. However, the exact steps can vary depending on the specific UI and version of GrowthBook you are using. Unfortunately, the provided knowledge sources do not include specific instructions on how to remove a linked feature flag from an A/B test in GrowthBook. For detailed instructions, I would recommend referring to the GrowthBook documentation or reaching out to the GrowthBook support team.
@brash-pillow-71627 in the Add Feature Flag to Experiment section it asks me for the Variation Values, and all 3 fields say OFF. Should I add ON to the variations, or the strings that I'm going to use in the script to turn the feature flag off and on?
A message from kapa.ai
The Variation Values in the "Add Feature Flag to Experiment" section correspond to the possible values that your feature flag can take. These values are what you will use in your code to determine the behavior of your feature under different variations. If your feature flag is a Boolean type (ON/OFF), then you would typically use "ON" and "OFF" as your variation values. However, if your feature flag is a String, Number, or JSON type, you can use any values that make sense for your feature. Here's an example of how you might use these values in your code:
```javascript
const result = growthbook.feature("my-feature");

if (result.value === "ON") {
  // Code for ON variation
} else if (result.value === "OFF") {
  // Code for OFF variation
}
```
In this example, `"my-feature"` is the name of your feature flag, and `"ON"` and `"OFF"` are the variation values you specified in the GrowthBook UI. The `growthbook.feature()` function returns an object with a `value` property that indicates which variation the user should see. You can then use this value to determine which code to execute for each variation. Remember to replace `"my-feature"`, `"ON"`, `"OFF"`, and the code inside each condition with your actual feature flag name, variation values, and variation code [source](https://docs.growthbook.io/faq#how-do-i-run-an-ab-test-in-growthbook).
@brash-pillow-71627 how do you set up a trigger for an A/B test?
A message from kapa.ai
To set up a trigger for an A/B test in GrowthBook, you typically use feature flags and the GrowthBook SDKs. Here's a general approach to setting up a trigger:

1. Create a feature in GrowthBook with an A/B Experiment rule.
2. Use the GrowthBook SDK in your code to check the feature flag and serve different variations. Here's a basic example:
```javascript
if (growthbook.feature("new-signup-form").on) {
  // Variation
} else {
  // Control
}
```
This code checks if the "new-signup-form" feature is on, and serves the variation if it is, or the control if it's not [source](https://docs.growthbook.io/faq#how-do-i-run-an-ab-test-in-growthbook).

3. Set up event tracking to measure the impact of your experiment. You'll need to connect GrowthBook to your data source.
4. In your tracking setup, you should include a trigger for when a user views an experiment. This is typically done in the `trackingCallback` of your GrowthBook configuration. Here's an example of what this might look like:
```javascript
trackingCallback: (experiment, result) => {
  // Track that the user was part of the experiment
  analytics.track("Experiment Viewed", {
    experimentId: experiment.key,
    variationId: result.variationId
  });
}
```
This callback will fire whenever a user is exposed to an experiment, allowing you to track which users saw which variations [source](https://linen.growthbook.io/t/23080247/hello-team-i-am-new-to-growthbook-and-want-to-run-an-a-b-tes).

5. If you're using Google Tag Manager (GTM), you can set up a custom event trigger for the "Experiment Viewed" event. In GTM, you would:
   - Create a new trigger
   - Set it as a Custom Event
   - Use "experiment_viewed" as the Event name

This trigger can then be used to fire tags for tracking your experiment [source](https://docs.growthbook.io/guide/google-tag-manager-and-growthbook#step-4-add-a-trigger). Remember, the exact implementation may vary depending on your specific setup and the tools you're using for tracking and analysis. Always refer to the most recent GrowthBook documentation for the most up-to-date information.
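For step 5, the bridge between the `trackingCallback` and GTM can be sketched as follows. The `experiment_viewed` event name follows the GTM guide cited above, but the simulated `dataLayer` array (standing in for `window.dataLayer`) and the payload field names are illustrative assumptions, not a fixed schema:

```javascript
// Simulated GTM dataLayer; in a real browser page this is window.dataLayer.
const dataLayer = [];

// A trackingCallback that forwards experiment exposures to GTM as an
// "experiment_viewed" custom event, which a GTM Custom Event trigger
// (Event name: "experiment_viewed") can then match on.
const trackingCallback = (experiment, result) => {
  dataLayer.push({
    event: "experiment_viewed",
    experiment_id: experiment.key,
    variation_id: result.variationId,
  });
};

// Simulate the SDK exposing a user to an experiment:
trackingCallback({ key: "new-signup-form" }, { variationId: 1 });
```

In a real setup you would pass this `trackingCallback` into the `GrowthBook` constructor instead of calling it by hand.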
@brash-pillow-71627 how does GB know when to trigger a feature flag for an A/B test or not?
A message from kapa.ai
GrowthBook determines when to trigger a feature flag for an A/B test based on several factors and mechanisms:

1. Feature Flag Rules: When you set up a feature flag in GrowthBook, you define rules that determine when and for whom the flag should be activated. For A/B tests, you typically create an A/B Experiment rule [source](https://docs.growthbook.io/faq#how-do-i-run-an-ab-test-in-growthbook).
2. Targeting Attributes: You can specify targeting attributes to control which users are eligible for the experiment. For example, you might target based on user ID, employee status, or other custom attributes [source](https://docs.growthbook.io/guide/nextjs-app-router#7-gradually-roll-out-to-your-users).
3. Deterministic Hashing: GrowthBook uses deterministic hashing to consistently assign users to variations. It hashes together the selected targeting attribute (e.g., user ID) and the feature key, converting it to a float between 0 and 1. If this value is less than or equal to the rollout percentage, the user is included in the experiment [source](https://docs.growthbook.io/guide/nextjs-app-router#7-gradually-roll-out-to-your-users).
4. SDK Integration: The GrowthBook SDK, integrated into your application, checks the feature flag status whenever you call it in your code. For example:
```javascript
if (growthbook.feature("new-signup-form").on) {
  // Variation
} else {
  // Control
}
```
This code checks if the "new-signup-form" feature is on for the current user [source](https://docs.growthbook.io/faq#how-do-i-run-an-ab-test-in-growthbook).

5. Tracking Callback: When a user is assigned to an experiment, GrowthBook triggers a tracking callback. This allows you to record the exposure event in your analytics system [source](https://docs.growthbook.io/lib/swift#experimentation-ab-testing).
6. Sticky Bucketing: GrowthBook uses sticky bucketing to ensure that users consistently see the same variation, even across sessions or devices, unless you explicitly change the experiment parameters [source](https://docs.growthbook.io/lib/react#sticky-bucketing).

By combining these mechanisms, GrowthBook can accurately and consistently determine when to trigger a feature flag for an A/B test, ensuring that users are assigned to variations in a controlled and measurable way.
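The deterministic-hashing step (point 3) can be sketched in a few lines. This is an illustrative stand-in, not GrowthBook's actual algorithm: `hashToUnit`, `assign`, the FNV-1a hash, and the weight layout are all my own choices for the demo. The same idea also covers splitting traffic across more than two variations.

```javascript
// Hash a string deterministically to a float in [0, 1).
function hashToUnit(s) {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h / 0x100000000;
}

// Include the user if the hash falls within the rollout coverage, then map
// the hash onto variation ranges. weights: e.g. [0.34, 0.33, 0.33] for three
// variations. Returns the variation index, or -1 if the user is excluded.
function assign(userId, featureKey, coverage, weights) {
  const n = hashToUnit(userId + featureKey);
  if (n > coverage) return -1; // outside the rollout percentage
  let cutoff = 0;
  for (let i = 0; i < weights.length; i++) {
    cutoff += weights[i] * coverage;
    if (n <= cutoff) return i;
  }
  return weights.length - 1;
}
```

Because the hash depends only on the attribute value and the feature key, the same user always gets the same result until you change one of those inputs.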
@brash-pillow-71627 activation metrics are triggers?
A message from kapa.ai
No, activation metrics are not the same as triggers in GrowthBook. Let me explain the difference.

Activation metrics are used to filter users for analysis after they have been assigned to an experiment variation. They don't trigger the experiment itself, but rather help determine which users should be included in the final analysis of the experiment results. Here's a brief explanation of activation metrics:

1. Activation metrics are specified when setting up an experiment in GrowthBook.
2. They are used to filter the list of exposed users to only those who have triggered a specific event or met certain criteria.
3. This is useful when you need to assign users to variations before knowing if they'll actually be exposed to the experimental changes.

For example, if you're testing a modal design:

- Users might be assigned to variations when they load the page.
- The activation metric could be set to "opened_modal".
- Only users who actually open the modal (triggering the "opened_modal" event) would be included in the final analysis.

As explained by a GrowthBook team member: "The activation metric is a separate event used to filter users for analysis. It's used when you need to assign your users into variations before they see the actual experiment. For example, they may not have scrolled to see the module, or it's in a modal." [Source](https://linen.growthbook.io/t/22707273/hi-everyone-and-good-week-does-setting-a-activation-metric-e)

Triggers, on the other hand, are typically used to determine when a user should be included in an experiment or when a feature flag should be evaluated. They are part of the targeting and assignment process, not the analysis process.

In summary, activation metrics are used for filtering results in analysis, while triggers are used for determining when to include users in an experiment or evaluate a feature flag.
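The modal example above boils down to an intersection between exposed users and activated users. Here's a small sketch; the user IDs, event records, and the `applyActivationMetric` helper are all invented for illustration:

```javascript
// Exposure events: every user assigned to a variation when the page loaded.
const exposures = [
  { userId: "u1", variationId: 0 },
  { userId: "u2", variationId: 1 },
  { userId: "u3", variationId: 0 },
];

// Activation events: users who actually opened the modal.
const activationEvents = [
  { userId: "u1", event: "opened_modal" },
  { userId: "u3", event: "opened_modal" },
];

// Keep only exposed users who also triggered the activation metric;
// only these users enter the final analysis.
function applyActivationMetric(exposures, activationEvents, eventName) {
  const activated = new Set(
    activationEvents.filter((e) => e.event === eventName).map((e) => e.userId)
  );
  return exposures.filter((x) => activated.has(x.userId));
}

const analyzed = applyActivationMetric(exposures, activationEvents, "opened_modal");
// analyzed keeps u1 and u3; u2 never opened the modal, so u2 is excluded
```

Note that the exposure events are unchanged; the activation metric only narrows which of them count at analysis time, which is exactly why it is not a trigger.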