Why do we need to have a running experiment to use...
# give-feedback
r
Why do we need to have a running experiment to use features? This feels confusing and also impractical: to test them I do not really need an experiment running, but with these changes I am forced to have one.
f
How are you running experiments?
h
We definitely support feature flags by themselves (with no experiment). Simply create a feature flag in the GrowthBook app, apply any targeting & attributes (or skip this entirely for an always on/off flag), and implement it in the SDK: https://docs.growthbook.io/lib/js#quick-usage
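For example, a standalone flag check with the JS SDK looks roughly like this (a minimal sketch: the apiHost, clientKey, attributes, and the "my-feature" key are placeholders; older SDK versions use loadFeatures() instead of init()):

```ts
import { GrowthBook } from "@growthbook/growthbook";

// Placeholder connection details and attributes -- substitute your own.
const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123",
  attributes: {
    id: "user-123",
    country: "US",
  },
});

async function main() {
  // Fetch feature definitions (older SDK versions use gb.loadFeatures()).
  await gb.init();

  // Evaluate a standalone flag -- no experiment needed.
  if (gb.isOn("my-feature")) {
    console.log("my-feature is ON for this user");
  }
}

main();
```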
r
Well, before, all I had to do was create a feature, add the targeting conditions, and turn it on; that is how we used it. But now I can only turn it on or edit the traffic split if there is an experiment running. Basically, when creating a new feature we used to have an "A/B Experiment" option, but now it is either "New A/B Experiment" or "Existing A/B Experiment".
By the way @happy-autumn-40938, I am talking about the UI itself, not the code part; the flow for features has definitely changed.
To test the experiments in our non-production environments, we are now forced to start an experiment, which is not necessary and creates a lot of unnecessary phases for the experiment.
h
How are you setting up your forced value or rollout rules? Is this a flag that you're wanting to (eventually) link to an experiment, or just a standalone flag? For the latter, I was able to create a valid feature without referencing an experiment at all by using the forced value and percentage rollout rule buttons.
r
We did not use forced values or percentage rollouts before, only A/B experiments. I would say this change is more of a surprise because the way we used to do our feature testing, to confirm it is working, is no longer an option for the reasons I mentioned. So your suggestion is to use a percentage rollout to test?
We basically want to be able to “test” the feature, including the traffic split, without creating an experiment to do so.
@happy-autumn-40938, just to continue the feedback here: we did try to use those options, but they do not fully support what we had before. Previously we could just configure the traffic split, turn the feature on, and test locally and in test environments without having to create an experiment, which was nice and easy. Now we are forced to create one if we want to test the same way; forced values, for example, do not trigger the experiment callback, which is part of our testing, so now we always have to create unnecessary experiment phases just to test. Could we maybe have the “old” setup back somehow?
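For context, this is roughly what I mean by the experiment callback: in the JS SDK, trackingCallback only fires when a user is actually assigned to an experiment variation, so forced-value or rollout rules evaluate the flag without ever invoking it. A minimal sketch (the clientKey, attributes, and "my-feature" key below are placeholders):

```ts
import { GrowthBook } from "@growthbook/growthbook";

// Placeholder keys/attributes -- adjust for your environment.
const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123",
  attributes: { id: "user-123" },
  // The "experiment callback": it fires only when the user is assigned
  // to an experiment variation. A forced-value or rollout rule resolves
  // the flag without an experiment, so this never runs for those rules.
  trackingCallback: (experiment, result) => {
    console.log("Experiment viewed", {
      experimentId: experiment.key,
      variationKey: result.key,
    });
  },
});

async function main() {
  await gb.init(); // older SDK versions: gb.loadFeatures()

  // With only a forced-value or rollout rule on "my-feature", this call
  // returns the value but does not invoke trackingCallback.
  const value = gb.getFeatureValue("my-feature", false);
  console.log("my-feature =", value);
}

main();
```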
👍 1
l
I'm facing the same pain.