# announcements
Hi! We ran an experiment in GrowthBook a year ago and I've now taken a closer look at our results. In GrowthBook it looked like the randomization was done without issues, but looking at data from before the experiment (outside of GrowthBook), the treatment group was significantly different from the control group on most metrics, which could point to something being wrong with the randomization. The graph below shows whether customers were "active" or not; there is a significant difference before the experiment starts. The weirdest part, though, is that the difference actually disappears when the experiment starts. Very strange. Since this experiment was run over a year ago it might be tricky to find out exactly how the randomization was done. I guess I'm just wondering if anyone out there has a hunch about what might have happened here, or what checks I can run to find out?
Hi Nora
that is interesting
roughly how much traffic was included in each group?
Chance differences like this can definitely occur, or the randomization could have been problematic. To figure out which is the case, I agree that looking at the sample size in each condition could be useful, as our SRM test is pretty conservative. Then you could look at other metrics that aren't totally correlated with this one. This kind of plot is great, by the way, and building them into GrowthBook is on our radar.
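In case it helps, here's a rough sketch of the kind of SRM check you can run yourself on the raw assignment counts. It assumes a planned 50/50 split (swap in your actual traffic weights if they differed), and it's not GrowthBook's exact implementation; the function name is just illustrative:

```python
# Minimal SRM sketch: chi-square goodness-of-fit of observed assignment
# counts against the planned traffic split. Assumes a 50/50 split by default.
from scipy.stats import chisquare

def srm_pvalue(n_control: int, n_treatment: int, weights=(0.5, 0.5)) -> float:
    observed = [n_control, n_treatment]
    total = sum(observed)
    expected = [w * total for w in weights]   # counts implied by the planned split
    stat, p_value = chisquare(observed, f_exp=expected)
    print(f"chi2={stat:.2f}, p={p_value:.4f}")
    return p_value

# Usage: srm_pvalue(n_control, n_treatment) with your group sizes.
# A very small p-value suggests the split deviates from what was planned.
```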
Hi! There were 61326 users in the control group and 62203 users in the treatment group. We didn't get any warnings from GrowthBook that the groups weren't balanced in the main results. After removing some users that were assigned to both groups, there were 61275 users in the control group and 62145 in the treatment group. My next step is definitely to check uncorrelated metrics. It could be that the groups are unbalanced in terms of activity level by chance, but it's just so strange that the difference disappears right when the experiment starts 🤔
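In case it's useful to anyone else, this is roughly the balance check I'm planning to run on the pre-period data. It's just a sketch: `variation` and `active_pre` are column names from my own user-level table, not anything that comes out of GrowthBook.

```python
# Pre-experiment balance check: is assignment independent of pre-period activity?
# Assumes a user-level DataFrame with hypothetical columns:
#   variation  - 'control' or 'treatment'
#   active_pre - 1 if the user was active before the experiment started, else 0
import pandas as pd
from scipy.stats import chi2_contingency

def pre_experiment_balance(df: pd.DataFrame) -> float:
    table = pd.crosstab(df["variation"], df["active_pre"])
    chi2, p_value, dof, _ = chi2_contingency(table)
    print(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
    return p_value

# A small p-value here means the groups already differed before exposure,
# which points at the assignment (or the join back to pre-period data)
# rather than at the treatment itself.
```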