# ask-questions
p
Hi all, we are using GrowthBook to run experiments using feature flags and have 3 different environments set up (3 separate SDK connections, all JavaScript). We now wish to enable the sticky bucketing feature, for which we have implemented steps 1, 2, and 3 from this doc: https://docs.growthbook.io/app/sticky-bucketing#setting-up-sticky-bucketing. We'd like to verify everything works on one of our test environments, where the SDK version and configuration have already been updated. If we enable the step 4 toggle -- "Enable Sticky Bucketing for your organization" -- will it affect the other environments as well?
h
Essentially yes, all environments and experiments will be eligible. A final gatekeeping measure would be to not provide a `StickyBucketService` to the SDK in production until you're done testing. Without the service at the SDK level, it will just do normal, non-sticky bucketing.
🙌 1
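The per-environment gating described above could be sketched as follows. This is illustrative only: the in-memory service is a hypothetical stand-in for the SDK's built-in services (e.g. `LocalStorageStickyBucketService`), and the `getAssignments`/`saveAssignments` shape follows the documented `StickyBucketService` interface; the API host and client key are placeholders.

```javascript
// Sketch: gate sticky bucketing by environment so the org-level toggle
// can stay on while production keeps normal bucketing.

// Hypothetical in-memory stand-in for a real StickyBucketService
class InMemoryStickyBucketService {
  constructor() {
    this.docs = new Map();
  }
  _key(attributeName, attributeValue) {
    return `${attributeName}||${attributeValue}`;
  }
  getAssignments(attributeName, attributeValue) {
    return this.docs.get(this._key(attributeName, attributeValue)) || null;
  }
  saveAssignments(doc) {
    this.docs.set(this._key(doc.attributeName, doc.attributeValue), doc);
  }
}

// Only attach the service outside production; without it, the SDK
// falls back to normal, non-sticky bucketing.
function buildGrowthBookOptions(env) {
  const options = {
    apiHost: "https://cdn.growthbook.io", // placeholder
    clientKey: "sdk-abc123",              // placeholder
  };
  if (env !== "production") {
    options.stickyBucketService = new InMemoryStickyBucketService();
  }
  return options;
}
```

The resulting options object would be passed to the `GrowthBook` constructor as usual; production simply never gains a `stickyBucketService` entry.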
p
Great! Thanks 🙌
Can you also suggest a way to test the sticky bucketing feature, to verify whether it's working correctly after the SDK implementation?
h
In a dev environment with a dev-specific experiment, you could try changing the running experiment in some way using the "Make Changes" button (targeting rules, variation weights, or something else). Assuming you've chosen a change strategy powered by Sticky Bucketing (SB icon), a non-sticky-bucketed user would likely be affected, whereas a sticky-bucketed user would not be.
Or, at a lower level, you could try editing the sticky bucketing document for a specific user (localStorage, cookie, etc.) and see whether they get bucketed into their new sticky variation or not.
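Seeding a sticky bucket document by hand could look roughly like this. Note the key prefix (`gbStickyBuckets__`) and the document shape are assumptions based on the JS SDK's localStorage defaults -- inspect a real entry in your own localStorage first and copy its exact format.

```javascript
// Sketch: manually seed a sticky bucket document, as you might do in
// browser devtools, then verify enrollment follows it.

// Stand-in for window.localStorage so the sketch runs outside a browser
const storage = {
  _data: {},
  setItem(key, value) { this._data[key] = String(value); },
  getItem(key) { return key in this._data ? this._data[key] : null; },
};

// Pin the user identified by attribute "id" = "user-123" to variation "1"
// of experiment "my-experiment" (bucket version 0). Key prefix and doc
// shape are assumptions -- verify against your actual storage.
function seedStickyBucket(store, attributeName, attributeValue, experimentKey, variationKey) {
  const doc = {
    attributeName,
    attributeValue,
    assignments: { [`${experimentKey}__0`]: variationKey },
  };
  store.setItem(
    `gbStickyBuckets__${attributeName}||${attributeValue}`,
    JSON.stringify(doc)
  );
}

seedStickyBucket(storage, "id", "user-123", "my-experiment", "1");
```

After seeding, reload the page as that user and check that the experiment serves the seeded variation rather than re-bucketing.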
p
Sounds great, thanks! 🙂
Hey! Another question 😅 Is there a way to force a user with a given attribute (such as the unique clientId we assign to every user on our website) into a particular variant of my experiment every time? I wish to specify which variant this user should fall into.
h
If you are using a feature with a linked experiment rule, then you can put a force rule above the experiment rule which forces a specific value for a specific clientId (force rule with attribute targeting). However, this happens outside of (before) the experiment evaluation and thus the user who hit that forced rule would not have a sticky bucket assigned nor persisted.
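The top-down rule evaluation described here can be sketched with a toy evaluator. This is NOT the SDK's real implementation (the rule objects, condition functions, and hashing are simplified inventions); it only illustrates why a matching force rule means the experiment rule below it never runs, so no sticky bucket is written and no tracking callback fires.

```javascript
// Toy rule evaluator: rules are checked top-down, and the first match wins.

function evaluateFeature(rules, attributes, trackingCallback) {
  for (const rule of rules) {
    // Attribute targeting: skip rules whose condition doesn't match
    if (rule.condition && !rule.condition(attributes)) continue;
    if (rule.type === "force") {
      return rule.value; // forced value wins; nothing below evaluates
    }
    if (rule.type === "experiment") {
      const variation = pickVariation(attributes.clientId, rule.variations);
      trackingCallback(rule.key, variation); // fires only for real enrollment
      return variation;
    }
  }
  return null;
}

// Toy deterministic assignment in place of the SDK's real hashing
function pickVariation(clientId, variations) {
  let h = 0;
  for (const ch of String(clientId)) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return variations[h % variations.length];
}

// A force rule targeting one clientId, sitting above the experiment rule
const rules = [
  { type: "force", value: "B", condition: (a) => a.clientId === "vip-1" },
  { type: "experiment", key: "my-experiment", variations: ["A", "B"] },
];
```

For `clientId: "vip-1"` the force rule returns `"B"` immediately and the tracking callback never fires; any other user falls through to the experiment rule.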
p
If this is happening before the experiment evaluation, am I right to assume that the tracking callback will not be fired for that user?
h
That's correct. A force rule that triggers before your experiment rule will negate the subsequent rules: the experiment will not trigger and the tracking callback will not fire. You're essentially skipping the experiment and forcing a value instead.
p
So, there is no way to force a particular variant of the experiment for a specific user?
h
Not the way you're describing, at least not in the GB UI. This is by design: experiments are randomized in order for the stats to work out -- overriding the bucketing logic and calling tracking will bias your results. That said…
• You can force the value of a variant above the experiment rule (as above).
• You also have the ability to simulate a variation at the SDK level via `gb.forceVariation(key, variant)`, but this will skip sticky bucketing and won't trigger the tracking callback (although you can manually call track with the results if you want).
• Or, if you're trying to test sticky bucketing, you can use the methods I mentioned earlier (I'd probably manually seed the user's sticky bucket doc and observe that experiment enrollment adheres to this manual assignment).
✅ 2
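The "force, then manually track" pattern from the second bullet could be sketched like this. The GrowthBook instance is mocked here, and the `forceVariation` name and tracking payload shape are assumptions drawn from the conversation -- check the SDK docs for the real signatures before copying this.

```javascript
// Sketch: forcing a variation skips the tracking callback, so fire it
// by hand if you still want the exposure recorded.

function makeMockGrowthBook(trackingCallback) {
  const forced = new Map();
  return {
    // Mimics gb.forceVariation(key, variant): pins an experiment to a
    // variation without bucketing and without tracking.
    forceVariation(experimentKey, variationIndex) {
      forced.set(experimentKey, variationIndex);
    },
    // Manually record the exposure for a forced experiment.
    trackForced(experimentKey) {
      if (!forced.has(experimentKey)) return false;
      trackingCallback(
        { key: experimentKey },                       // experiment (assumed shape)
        { variationId: forced.get(experimentKey) }    // result (assumed shape)
      );
      return true;
    },
  };
}
```

Keep in mind the caveat above: manually tracking a forced exposure feeds non-randomized data into your analysis, so this is for debugging, not for live experiments.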
p
Thanks for the detailed clarification, it's really helpful 🙂
cc @elegant-hydrogen-94541