# ask-questions
s
Hi there! I am a PM seeking to run some experiments on our lead form and trying to figure out the best method of implementation. We have three experiments in mind, each changing a separate form field within a single lead form. Should we:

A) Set up a feature flag for each change and run the experiments in parallel? (Although this means we cannot be certain our control is a 'true' control, as it may be affected by the other experiments running on the same form.)

B) Set up a single ABCD experiment with four variations (including control)? But this doesn't give us insight into how the different changes interact with each other. (Speaking of which, I am not clear on how we would allow for more than one variant within a single feature flag.)
f
I would go with B. A might be interesting, but the interaction effects are hard to test: three parallel experiments give you 2³ = 8 exposure combinations (AAA vs AAB vs ABA vs ABB, etc.), detecting differences between those cells takes a lot of traffic, and in my experience the interactions are unlikely to be interesting anyway.
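To make the traffic problem concrete, here is a rough illustration (the flag names and visitor count are made up, and it assumes 50/50 splits with fully overlapping audiences): three parallel boolean flags split the same audience into eight cells, so each cell sees only an eighth of the traffic each individual test was presumably sized for.

```typescript
// Three parallel boolean experiments split traffic into 2^3 = 8
// exposure combinations.
const experiments = ["field1", "field2", "field3"];
const cells = experiments.reduce<string[][]>(
  (acc, exp) => acc.flatMap((cell) => [[...cell, `${exp}:A`], [...cell, `${exp}:B`]]),
  [[]],
);
console.log(cells.length); // 8
// With, say, 40,000 visitors, each combination holds only ~5,000 users,
// which is why AAB-vs-ABA style interaction contrasts are underpowered.
for (const cell of cells) console.log(cell.join(" + "));
```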
For B, you just have to set up a feature flag that is not a boolean (e.g. a string-valued flag) to get multiple variants; see the sketch below.
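A minimal sketch of what that can look like, assuming nothing about any particular platform's API (the names and the hashing scheme here are hypothetical): the flag's value is a string, and each user is deterministically bucketed into one of the four variations so they always see the same one.

```typescript
import { createHash } from "node:crypto";

// Hypothetical string-valued flag with four variations: the flag's
// value type is a string rather than a boolean.
const VARIANTS = ["control", "variant-a", "variant-b", "variant-c"] as const;
type Variant = (typeof VARIANTS)[number];

// Deterministically bucket a user by hashing their id together with
// the flag key, so assignment is stable across page loads.
function assignVariant(userId: string, flagKey: string): Variant {
  const hash = createHash("sha256").update(`${flagKey}:${userId}`).digest();
  return VARIANTS[hash.readUInt32BE(0) % VARIANTS.length];
}

// Usage: branch the lead form on the string value.
const variant = assignVariant("user-123", "leadform-abcd-test");
console.log(variant); // e.g. "variant-b"
```

The modulo here gives an equal four-way split; real experimentation SDKs typically let you configure custom weights per variation instead.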
s
Awesome, thanks Graham! Is it possible to add new variants whilst an experiment is in flight? Similarly, what about adding new metrics? Would these be filled out historically for the duration of the experiment?
f
You can add variants, but it's not really recommended: people might switch variants (unless you have sticky bucketing enabled), and it can cause SRM (sample ratio mismatch) warnings, since the late-added variant will have had less time to collect users than the others.
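A toy sketch of why assignments shift, assuming a plain hash-modulo bucketer (all names here are hypothetical): when the variant count changes, `hash % n` changes for many users, so without a persisted "sticky" assignment they silently switch arms mid-experiment.

```typescript
// In-memory stand-in for wherever sticky assignments would live
// (a cookie, local storage, a user profile table, etc.).
const stickyStore = new Map<string, string>();

function bucket(userId: string, variants: string[]): string {
  // Toy hash; real SDKs use stronger hashes with better distribution.
  const h = [...userId].reduce((a, ch) => (a * 31 + ch.charCodeAt(0)) >>> 0, 0);
  return variants[h % variants.length];
}

function assign(userId: string, variants: string[]): string {
  const prior = stickyStore.get(userId);
  // Sticky bucketing: honour the first assignment if it still exists.
  if (prior && variants.includes(prior)) return prior;
  const chosen = bucket(userId, variants);
  stickyStore.set(userId, chosen);
  return chosen;
}

// Without the sticky check, growing from 3 to 4 variants changes
// h % 3 to h % 4, silently moving many existing users to a new arm.
```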
Metrics can be added or removed as needed; those live on the analytics side only and don't affect assignment. So yes, a metric added mid-flight is typically computed over the experiment's full history, as long as the underlying events were being tracked the whole time.
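A sketch of why that backfilling works (the data shapes are hypothetical and simplified to one exposure per user): a metric is just a query over already-logged exposure and event records, so a metric defined today still covers events from the start of the experiment.

```typescript
// Simplified records as an experimentation warehouse might log them.
interface Exposure { userId: string; variant: string; at: Date; }
interface Event { userId: string; name: string; at: Date; }

// Computing a conversion metric is a pure function of logged data,
// so it can be (re)run at any time over the full experiment window.
function conversionRate(exposures: Exposure[], events: Event[], eventName: string) {
  const byVariant = new Map<string, { users: number; converted: number }>();
  for (const e of exposures) {
    const stats = byVariant.get(e.variant) ?? { users: 0, converted: 0 };
    stats.users++;
    // Count a conversion if the user fired the event after exposure.
    const hit = events.some(
      (ev) => ev.userId === e.userId && ev.name === eventName && ev.at.getTime() >= e.at.getTime(),
    );
    if (hit) stats.converted++;
    byVariant.set(e.variant, stats);
  }
  return byVariant;
}
```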