# ask-questions
m
Hello. Our company uses GrowthBook to manage some features of our application, and we've encountered a problem. Let me explain with an example.

Say we start with an experiment with 2 variations, weighted 67% and 33% respectively, with total coverage set to 70% of users. From this data the GrowthBook SDK builds bucket ranges: each range starts at the cumulative weight of the preceding variations, and its width is weight × coverage. So 0.67 and 0.33 are multiplied by 0.7 (70%), giving the ranges [0 ... 0.469] and [0.67 ... 0.901]. As you can see, there is a gap between these ranges.

Then, based on our special attribute (MID) and the value from the Tracking Key field, a hash in the interval [0 ... 1.0] is calculated, and a check determines whether it falls into one of those ranges. For example, MID "l8jHc55N4I=" and Tracking Key "android_ab_turns_to_ff" give a hash of 0.807; this value falls into the range [0.67 ... 0.901], so this user gets the corresponding variation.

Then we stop the experiment and create a temporary rollout with 70% of users. Here's what we get: the hash is calculated (it will be the same 0.807) and compared with the coverage value of 0.7 (70%). Since 0.7 < 0.807, this user does not match the rule and will get the default value, or the value from the next rule (if it matches). However, we need users who were included in the experiment before it was stopped (i.e. who fell into the two ranges created at the beginning) to get the value for this feature that we marked as WON in the experiment management menu in the GrowthBook Web UI. Instead, after stopping the experiment, a different set of users gets this value.

Is there any way to make GrowthBook select the same users after stopping the experiment as before? Can this behavior be considered a bug?
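For reference, here is a minimal Kotlin sketch of the bucketing logic described above. This is not the actual GrowthBook SDK API; `bucketRanges` and `chooseVariation` are hypothetical names, and only the range construction (start at cumulative weight, width = weight × coverage) and the example hash 0.807 come from this message.

```kotlin
// Simplified model of experiment bucket ranges, assuming each range starts
// at the cumulative weight and has width = weight * coverage.
data class Range(val start: Double, val end: Double)

fun bucketRanges(weights: List<Double>, coverage: Double): List<Range> {
    var cumulative = 0.0
    return weights.map { w ->
        val range = Range(cumulative, cumulative + w * coverage)
        cumulative += w
        range
    }
}

// Returns the index of the matching variation, or -1 if the hash
// falls into a gap (user is not in the experiment).
fun chooseVariation(hash: Double, ranges: List<Range>): Int =
    ranges.indexOfFirst { hash >= it.start && hash < it.end }

fun main() {
    val ranges = bucketRanges(listOf(0.67, 0.33), coverage = 0.7)
    println(ranges)                         // approx. [0 .. 0.469], [0.67 .. 0.901]
    println(chooseVariation(0.807, ranges)) // 1 -> user gets the second variation
}
```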
r
By default, a temporary rollout will assign all users to the winning variant. It's meant as a stopgap measure until you've been able to make the needed changes to the codebase. I understand that you're asking to only assign a portion of users to a certain variant. Is that right? What's the goal?
m
Look, while the experiment is running, users whose hash falls into the ranges [0 ... 0.469] or [0.67 ... 0.901] get into the experiment. After it is stopped, users whose hash falls into the range [0 ... 0.7] get into the percentage rollout. We need all users who were in the experiment to receive the winning option, but GrowthBook's logic doesn't work that way, and I have provided the proof of it.
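To make the mismatch concrete, here is a hedged sketch using the same simplified model as above (not the real SDK): the same hash lands inside the experiment's second bucket range but outside the temporary rollout's coverage range.

```kotlin
// The same user hash is in the experiment but not in the rollout.
fun main() {
    val hash = 0.807                        // example hash from this thread
    val inExperiment = hash < 0.469 || (hash >= 0.67 && hash < 0.901)
    val inRollout = hash < 0.7              // rollout keeps users with hash < coverage
    println("in experiment: $inExperiment") // true
    println("in rollout:    $inRollout")    // false
}
```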
r
Thanks for clarifying. I've asked the team for additional insight.
Back. I passed on your message to the team, which led them to find a bug as you said. Thanks! They've issued a fix here: https://github.com/growthbook/growthbook/commit/ba7fc91f8adf6ce514ce10d20ce0e4a250b411de This will ensure that a temp rollout targets 100% of eligible users; however, note that your original use case (targeting only the original 70% of users) still isn't supported at the moment.
m
This is not exactly what I meant. What I meant was that the 70% of users in the experiment is not the same set of users as the 70% indicated in the temporary rollout. Do you understand? Because the experiment ranges are [0 ... 0.469] and [0.67 ... 0.901], while the temporary rollout range is [0 ... 0.7]. This could be fixed if, after stopping the experiment, it were possible to keep the rule with its weights, but with the experiment disabled. For example, an ExperimentStopped flag could be added to the FeatureRule object in the config, indicating that the experiment has been stopped while the value for this feature continues to be calculated based on this rule (see the sketch below).
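A rough Kotlin illustration of this proposal. It is entirely hypothetical: `experimentStopped` is not a real GrowthBook field, and the `FeatureRule` shape here is invented for the example; only the bucket ranges and the winning-variation idea come from the thread.

```kotlin
// Hypothetical rule evaluation: keep the original bucket ranges after the
// experiment ends, so exactly the users who were in the experiment get the
// winning variation. `experimentStopped` is NOT a real GrowthBook field.
data class Range(val start: Double, val end: Double)

data class FeatureRule(
    val ranges: List<Range>,        // original experiment bucket ranges
    val experimentStopped: Boolean, // hypothetical flag proposed above
    val winningVariation: Int       // index of the WON variation
)

fun evaluate(hash: Double, rule: FeatureRule): Int? {
    val variation = rule.ranges.indexOfFirst { hash >= it.start && hash < it.end }
    if (variation < 0) return null  // user was never in the experiment
    return if (rule.experimentStopped) rule.winningVariation else variation
}

fun main() {
    val rule = FeatureRule(
        ranges = listOf(Range(0.0, 0.469), Range(0.67, 0.901)),
        experimentStopped = true,
        winningVariation = 1
    )
    println(evaluate(0.807, rule)) // 1 -> same user still gets the winning variation
    println(evaluate(0.5, rule))   // null -> user in the gap stays out, as before
}
```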
f
@strong-mouse-55694 I am from Raman's team and want to show what we mean. This is an example of the experiment Raman is talking about. According to what you say, a Temporary Rollout for winning option Variation 1 means the 70% of users that were initially in the experiment will get Variation 1. But this is not happening: we keep seeing users who were initially in the experiment end up outside of it once we enable the Temporary Rollout. Raman laid out the logic by which the client SDK works, which explains this behavior (and it is not what we expect). Please assist. For now we are blocked from using the Temporary Rollout feature, as it cannot guarantee that users in the experiment get the winning option.
r
I understand. Unfortunately, a temporary rollout doesn't work this way; it operates independently of the experiment's percentage rollout. With a temp rollout, 100% of traffic will go to the winning variant. While we understand there are downsides to this approach, there isn't any way to change this behavior. If you share more about what you're trying to accomplish, there might be other options.
f
@strong-mouse-55694 please define what you mean by "100% of the traffic". If we set up the experiment for only 70% of users, we indeed expect that all of the users initially included in the experiment will get the winning variant (meaning 100% of that 70%). The 30% of users who were not initially in the experiment should not be included in the temp rollout. Please confirm. What we are trying to accomplish: we want to make sure that the same users who were initially in the experiment (70%) get the winning option, while users who were not in the experiment (30%) stay outside of the Temporary Rollout.
r
It will be 100% of all users, not the initial 70%. What's the reasoning behind not rolling the feature out to all users?
m
I'll try to depict what we need graphically:
GrowthBook-distribution-visual.png
r
I shared your image with the team. There isn't a straightforward, supported way to achieve this in GB. If you can share again why you want this setup, we might be able to offer other ideas. Depending on your ultimate goal, one option is to use holdout experiments: https://docs.growthbook.io/kb/experiments/holdouts