# ask-questions
b
Hello! I've got a few questions about the experimentation features. We're trialing GrowthBook right now and have an A/A test going. While the SDK has been great so far (Ruby), I have a couple of issues on the analysis side:
1. When I check the assignments table, there's a ~50/50 split between variations. However, in the UI I see a 100/0 split. 50/50 is actually what I want, but I'm confused about why the UI doesn't match.
2. When I try to pull in results, nothing comes up. However, the query for the experiment metric works, as does the query for pulling identifiers.
Any ideas on what might be causing these issues?
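For reference, here is roughly what the SDK side of an A/A test looks like with the growthbook gem. This is a minimal sketch, not the actual code from the thread; the experiment key, variations, and user id are made up:

```ruby
require 'growthbook'

context = Growthbook::Context.new(attributes: { id: 'user-123' })

# An inline A/A experiment: both variations are identical, split 50/50.
experiment = Growthbook::InlineExperiment.new(
  key: 'aa-test',
  variations: ['A', 'A'],
  weights: [0.5, 0.5]
)

result = context.run(experiment)

# variation_id is what lands in the assignments table, so a healthy
# A/A test should show roughly half 0s and half 1s across users.
puts result.variation_id
```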
f
Hi Jane
1. The experiment's reported assignment weights can be adjusted manually on the experiment. When you import from a running feature we know the weights; otherwise you have to set them (and you can always adjust them manually).
2. Can you share the results GrowthBook is getting back? It might be related to the weights problem in #1.
b
Ah, makes sense re: #1 — we created the experiment programmatically. The following screenshot shows the message I'm getting when trying to import results:
Re: #1 — if I manually change the split to match what we're actually getting (50/50), will that create a new "phase" in the experiment per the warning in the UI?
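As an aside, when an experiment is created programmatically, the analysis weights can be set explicitly at creation time so the UI matches the SDK's split from the start. Here is a rough sketch against GrowthBook's REST API; the endpoint path, payload fields, and token below are assumptions for illustration, so check the API docs for the exact schema:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Illustrative only: the endpoint and payload fields are assumptions,
# not confirmed against the GrowthBook REST API schema.
uri = URI('https://api.growthbook.io/api/v1/experiments')

payload = {
  trackingKey: 'aa-test',
  name: 'A/A test',
  variations: [
    { key: '0', name: 'Control' },
    { key: '1', name: 'Variant' }
  ],
  # Setting the weights here should keep the UI's expected split
  # in line with the 50/50 assignment the SDK actually produces.
  phases: [{ name: 'Main', coverage: 1.0, variationWeights: [0.5, 0.5] }]
}.to_json

request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request['Authorization'] = 'Bearer YOUR_API_KEY' # placeholder
request.body = payload

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts response.code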
f
Hi Jane, to avoid the new phase you can reset the experiment to draft, adjust the weights, and then start it again.
b
That's good to know. Do we need to adjust the weights if the 50/50 split was defined in the associated feature? I don't want to actually cause any change in the assignment behavior; just wondering if this is a case of slightly misleading documentation.
f
You shouldn't normally need to adjust the weights. I can look into what happened there.
👍 1
b
Hey Graham, as an update: results are being pulled now, but they all look like they're tied to the control (Variation 0), which matches what the experiment UI says and effectively means there's no comparison. This seems like it might be an issue of unclear instructions on how to set up an experiment when it's tied to a feature rollout. Can you clarify? Are there docs somewhere that give a step-by-step for how experiments should be set up with features? We understood the Experiment rule on a feature to control the assignment of Control vs. Variant, which is true, but if the experiment's split then layers on top of that, it's pretty confusing: it would mean half of Control would see the variant and half of Variant would not.
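For anyone who hits the same confusion: the two layers don't both assign. In the SDK, the feature's Experiment rule alone decides who sees what, by deterministically hashing the user's hashAttribute against the rule's weights; the weights on the experiment in the UI only tell the analysis what split to expect and never re-bucket anyone. A minimal sketch with the growthbook gem (the feature key and user ids are made up):

```ruby
require 'growthbook'

# A feature whose Experiment rule performs the 50/50 assignment in the SDK.
features = {
  'new-checkout' => {
    'defaultValue' => false,
    'rules' => [
      {
        'key' => 'new-checkout-test', # experiment tracking key
        'variations' => [false, true],
        'weights' => [0.5, 0.5],
        'hashAttribute' => 'id'
      }
    ]
  }
}

# Assignment is a pure function of the hash attribute, so the same user
# always lands in the same bucket no matter how often we evaluate.
['user-1', 'user-2', 'user-3'].each do |user_id|
  context = Growthbook::Context.new(attributes: { id: user_id }, features: features)
  puts "#{user_id}: feature on? #{context.on?('new-checkout')}"
end
```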