# ask-questions
f
Hello, when using Growthbook for A/B testing, I'm looking to completely reshuffle the traffic for a specific experiment and redistribute it. Is there a way to do this?
f
Hello - yes there is
if you edit the experiment, there should be an option to re-seed
you can start a new phase
and it will ask you
Screenshot 2024-04-02 at 1.11.38 AM.png
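Re-seeding works because variation assignment is deterministic: the SDK hashes a seed together with the user's hash attribute, so the same user always lands in the same variation within a phase, and starting a new phase with a fresh seed reshuffles everyone. A minimal sketch of the idea (simplified for illustration; GrowthBook's real algorithm uses an FNV-style hash over a much finer bucket range, so don't treat this as the exact SDK behavior):

```python
def fnv1a32(s: str) -> int:
    """32-bit FNV-1a hash of a string."""
    h = 0x811C9DC5
    for byte in s.encode("utf-8"):
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF
    return h

def bucket(user_id: str, seed: str, n_variations: int = 2) -> int:
    """Deterministically map (seed, user) to a variation index.

    Same seed + same user => same variation every time (sticky).
    A new seed (new phase) => assignments are reshuffled.
    """
    value = (fnv1a32(seed + user_id) % 1000) / 1000  # in [0, 1)
    return int(value * n_variations)
```

With a fixed seed the assignment is stable across calls; switching the seed (as a new phase does) moves many users to a different variation, which is exactly the "complete reshuffle" being asked about.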
f
Can you tell me where I can start a new phase? I couldn't find it.
f
image (8).png
How are you running the experiment?
f
I'm sorry, this is my first time taking over an existing old experiment. Are you saying that after I modify the experiment's configuration and deploy it to production, I will have this option?
Since it's an experiment that is already running, I'm hesitant to explore too much for fear of affecting the online environment and users.
h
When you modify the experiment using the Make Changes button/flow, you will be given these release plan options. A summary of the effects for users is listed on the final confirmation page, so you can review before publishing. Once you confirm and publish, the changes will be rolled out (assuming your SDK automatically pulls the latest features/experiments, which is the default behavior).
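"Automatically pulls the latest features" typically means the SDK caches the feature payload and re-fetches it after a time-to-live expires (or receives pushes over a streaming connection). A rough, hypothetical sketch of that cache-and-refresh pattern; the function names here are illustrative, not the GrowthBook SDK's actual API:

```python
import time

def make_refresher(fetch_features, ttl_seconds=60):
    """Return a getter that serves cached feature definitions,
    re-fetching via fetch_features() once the cache is older
    than ttl_seconds."""
    cache = {"features": None, "fetched_at": 0.0}

    def get_features(now=None):
        now = time.time() if now is None else now
        if cache["features"] is None or now - cache["fetched_at"] > ttl_seconds:
            cache["features"] = fetch_features()  # e.g. HTTP GET of the payload
            cache["fetched_at"] = now
        return cache["features"]

    return get_features
```

The practical consequence for this thread: once you publish, clients pick up the change on their next refresh, so the rollout is not instantaneous but happens within the refresh interval.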
f
It might be because the version we are using is too old, and there's no "Make Changes" button/flow on the page. I'll try upgrading the version first.
f
ya, it's relatively new
We make all changes backwards compatible. We recommend updating at least every month.
f
Hi, I can't find "release plan" when I click "Make Changes". I have upgraded to version 2024-04-03.
h
It looks like you aren't actually controlling any feature flags or visual experiments with this experiment. The Make Changes flow only applies to experiments with linked features, visual experiments, or URL redirect tests.
f
So how can I link this experiment to some feature flags? I created this experiment from the "View Results" button on a feature page.
Oh, I see. I need to use "Add Experiment", not "View Results".
I started an experiment "e" linked to a feature "a". Feature "a" has three configurations corresponding to three traffic groups: 0, 1, and 2. I started a new phase in experiment "e", changing the traffic to include only groups 0 and 1. However, the configuration the clients are pulling still contains group 2, which suggests they are still receiving the feature's pre-modification configuration and the experiment change has not taken effect. Could it be that the endpoint for pulling the experiment's JSON configuration is different from the one originally used for pulling the feature's JSON configuration?
I see, I need to use the experiment key instead.
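The distinction matters because a linked feature and its experiment carry different keys: the client looks the feature up by its feature key ("a"), while the experiment rule inside it is identified by its own tracking key ("e"), which is what shows up in analytics events. A toy sketch with a hypothetical, heavily simplified payload shape (real GrowthBook payloads differ):

```python
# Hypothetical payload: feature "a" contains an experiment rule keyed "e".
payload = {
    "features": {
        "a": {
            "defaultValue": 0,
            "rules": [
                {"key": "e", "variations": [0, 1], "weights": [0.5, 0.5]},
            ],
        }
    }
}

def evaluate(feature_key, user_bucket):
    """Look a feature up by its feature key; if an experiment rule
    matches, the rule's own key (the experiment tracking key)
    identifies the experiment, not the feature key."""
    feature = payload["features"][feature_key]
    for rule in feature.get("rules", []):
        if "variations" in rule:
            return rule["variations"][user_bucket], rule["key"]
    return feature["defaultValue"], None
```

So evaluating feature "a" yields a variation plus the experiment key "e" for tracking; confusing the two keys leads to exactly the mismatch described above.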
Hi, I have noticed that experiments created in two different ways behave differently.
Method 1: create a new feature configuration at the same time as creating a new experiment. Method 2: when creating a new experiment, link it to an already existing feature.
With Method 1, modifying the traffic split in the experiment also updates the traffic split in the feature, and the configuration the client actually pulls stays consistent. With Method 2, modifying the traffic split in the experiment does not update the feature, and the client pulls its configuration from the feature, not the experiment.
I would like to confirm whether this is by design, or whether there is something wrong with my setup.