# experimentation
plain-cat-20969
Hi all, I’m running into some challenges explaining results to eager stakeholders. In a few of our experiments, the metrics naturally lag. For example, we include users in a variation but don’t expect an effect until several days later. Think of tracking cancellations when users receive an email five days after signup, versus those who don’t. Some stakeholders check results just a few days in (say, days 1-4), notice differences between the two groups, and ask why. Even after explaining normal metric variance, one person raised a fair question: wouldn’t variation B have a built-in disadvantage if it starts lower before the treatment even happens? I feel like this falls under the concept of random variance, but it also made me wonder whether there’s something to his point. Has anyone else run into this kind of situation? How did you handle it?
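In case it helps those conversations, here is a quick simulation sketch (all numbers are made up) of what "normal metric variance" looks like when both groups are drawn from the exact same population, i.e. before any treatment has happened:

```python
import numpy as np

# Illustrative only: simulate many A/A comparisons where BOTH groups come
# from the same population, so no treatment effect exists yet.
rng = np.random.default_rng(42)

n_users = 2000       # users per variation (hypothetical)
cancel_rate = 0.05   # day 1-4 cancellation rate, identical for both groups
n_sims = 10_000

rate_a = rng.binomial(n_users, cancel_rate, n_sims) / n_users
rate_b = rng.binomial(n_users, cancel_rate, n_sims) / n_users
gap = rate_a - rate_b

print(f"typical gap between identical groups: {gap.std():.3%}")
print(f"share of runs with a gap over 1pp:    {(abs(gap) > 0.01).mean():.1%}")
```

If pure A/A simulations routinely produce gaps of that size, an early gap between A and B isn't evidence of a built-in disadvantage, just noise.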
l
I'm not a GrowthBook expert, but I'll try to help 🙂 I can't speak to every use case, but for your signup email case, you should move the experiment allocation as close to the effect as possible. So instead of assigning the experiment during signup, you could check whether the experiment is ON right before sending the email, if that's possible. At my last company, we had experiments where we only knew the results 3 months later (credit card default rate based on acceptance criteria), which meant we had to write down in advance how long it would take to see results. But as long as users only trigger the value once, it's all good. If a user signs up, the email goes out 5 days later, and the metric is generated 5 days after that, that's okay; you should just expect similar conversion metrics between the groups from day 1 🙂 Happy to discuss more if you'd like.
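A rough sketch of what that could look like, assuming the GrowthBook Python SDK's `is_on()` check; the feature key `signup-followup-email` and the `send_email()` helper and `features` payload are hypothetical placeholders:

```python
from growthbook import GrowthBook

def send_email(user):
    """Placeholder for the real email-send call."""
    print(f"sending follow-up email to {user['id']}")

def maybe_send_followup_email(user, features):
    """Evaluate the experiment at email-send time (day 5), not at signup,
    so users are only exposed right before the treatment can take effect
    and there is no multi-day window of 'assigned but untreated' users."""
    gb = GrowthBook(
        attributes={"id": user["id"]},  # same hashing attribute used elsewhere
        features=features,              # features payload fetched from GrowthBook
    )
    try:
        if gb.is_on("signup-followup-email"):  # hypothetical feature key
            send_email(user)
    finally:
        gb.destroy()
```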
plain-cat-20969
Thanks for your input! Unfortunately, it's not possible for me to change the timing of the experiment allocation, and I'm not quite sure this solution gets to the gist of my question either. Thank you for your reply 🙏
strong-mouse-55694
Hi @plain-cat-20969. This is a great question. It's almost certainly just normal random variance. To confirm, you could always run an A/A test to make sure there are no issues with assignment. Beyond that, here are a few other strategies:
1. Use metric delays/conversion windows. This prevents stakeholders from seeing "results" during the period when no effect should exist yet (see the sketch after this message).
2. Use the Experiment Decision Framework to set clear guidelines and expectations. This can help align the org on how and when to call an experiment based on agreed-upon criteria.
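On point 1, the important detail is that the delay and conversion window are anchored to each user's exposure date, not to the calendar age of the metric data. A minimal pandas sketch of the idea (column names are hypothetical, and this only illustrates the concept, not how GrowthBook computes it internally):

```python
import pandas as pd

def delayed_conversion_rate(df, analysis_date, delay_days=5, window_days=7):
    """`df` has one row per exposed user with (hypothetical) columns:
    user_id, variation, exposure_ts, conversion_ts (NaT if no conversion).

    A conversion only counts if it lands in
    [exposure + delay, exposure + delay + window], and a user is only
    included once that whole window has matured by `analysis_date`."""
    df = df.copy()
    df["window_start"] = df["exposure_ts"] + pd.Timedelta(days=delay_days)
    df["window_end"] = df["window_start"] + pd.Timedelta(days=window_days)

    # Maturity is judged per user from the exposure date,
    # not from how old the metric rows themselves are.
    mature = df[df["window_end"] <= pd.Timestamp(analysis_date)].copy()

    mature["converted"] = (
        (mature["conversion_ts"] >= mature["window_start"])
        & (mature["conversion_ts"] <= mature["window_end"])
    )
    return mature.groupby("variation")["converted"].mean()
```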
plain-cat-20969
Thanks @strong-mouse-55694. I did look into metric delays at one point, but it removed all of the data from the metric, even data that was mature enough according to the timeframe I set. I think it was basing the delay on the age of the data rather than on the exposure date I had set.
@strong-mouse-55694 Is it possible to chat with someone from GrowthBook about this? I'm interested in learning more about variance reduction, and I see there are different options, including CUPED, which I've never implemented before.
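In the meantime, the core of CUPED is small enough to sketch. This is the standard textbook adjustment (a pre-exposure covariate `pre` per user and the experiment metric `post`), not anything GrowthBook-specific, and the example data is made up:

```python
import numpy as np

def cuped_adjust(post, pre):
    """Return the CUPED-adjusted metric: subtract the part of `post`
    explained by the pre-exposure covariate `pre`."""
    post = np.asarray(post, dtype=float)
    pre = np.asarray(pre, dtype=float)
    theta = np.cov(pre, post)[0, 1] / np.var(pre, ddof=1)
    return post - theta * (pre - pre.mean())

# Toy example: the adjusted metric has much lower variance, while the
# treatment effect (zero here) is preserved in expectation.
rng = np.random.default_rng(0)
pre = rng.normal(10, 3, size=5000)               # e.g. pre-signup activity
post = 0.8 * pre + rng.normal(0, 1, size=5000)   # experiment metric, correlated with pre
adjusted = cuped_adjust(post, pre)
print(round(post.var(), 2), round(adjusted.var(), 2))
```

Because the covariate is measured before assignment, it has the same expectation in both variations, so subtracting the part of the metric it explains shrinks variance without biasing the estimated lift.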