# give-feedback
Hello! Loving GrowthBook and thank you so much for all your efforts! 🙌 I had a couple of questions/feedback/ideas. First, I was wondering if there are any plans to add outlier detection on the metrics? Second, it would be awesome to be able to adjust the time window before and after the results when drilling down on date (picture) - this would be so useful for verifying that the groups were “identical” prior to the experiment, and also for seeing whether the experiment has a long-term effect / how long it takes before the groups converge back to the same level (if that makes sense…?)
Thanks for the feedback. Those are interesting ideas. If we were to include data from before the experiment, how would the conversion windows work? Without an experiment exposure event, when do we start counting?
And for outlier detection, we do plan to add a histogram of metric values as well as min/max and a few percentiles (e.g. p99). We're trying to avoid dealing with any raw conversion data and only work with aggregates, so that we can limit the amount of data processing that needs to happen within GrowthBook.
💡 1
🙌 1
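To make the aggregate idea above concrete, here's a rough sketch of the kind of summary stats mentioned (min/max plus a few percentiles), assuming raw metric values are available at the query stage. This is a hypothetical helper for illustration, not GrowthBook's actual implementation:

```python
import numpy as np

def metric_summary(values):
    """Summary aggregates for a metric: min/max and a few percentiles.
    A sketch of the stats described above, not GrowthBook code."""
    v = np.asarray(values, dtype=float)
    return {
        "min": float(v.min()),
        "max": float(v.max()),
        "p50": float(np.percentile(v, 50)),
        "p99": float(np.percentile(v, 99)),
    }
```

In practice these aggregates would be computed in the data warehouse (e.g. via SQL `MIN`/`MAX`/`APPROX_PERCENTILE`) so the raw rows never leave it.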
For your first message: I totally agree we should still indicate clearly in the graph when the experiment starts and ends. But it would be interesting to also show (e.g. greyed out, or before a line that indicates the start date) how the users included in the experiment behaved before the experiment was turned on. An example - say your metric is revenue per order. For the users in control and in treatment - what was their revenue per order for, say, the two weeks before the experiment started? That way we can make sure the groups were indeed the same and our randomization is working properly. I know I can see the metric graph, but it would be useful to compare the groups. Hope that made sense 😅
That makes sense. We do plan to implement CUPED at some point, which looks at user behavior prior to an experiment and uses that to reduce variance and shorten experiment times. I think when we do that, we can also add a related data quality check to make sure the groups aren't suspiciously different.
🙌 2
💡 1
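For anyone curious how the CUPED adjustment mentioned above works, here's a minimal sketch (a hypothetical helper, not GrowthBook code): the pre-experiment metric acts as a covariate, and subtracting its correlated component from the post-experiment metric reduces variance without changing the mean.

```python
import numpy as np

def cuped_adjust(y, x):
    """CUPED variance reduction sketch.

    y: post-experiment metric per user
    x: the same metric measured pre-experiment (the covariate)

    theta = cov(x, y) / var(x); the adjusted metric
    y - theta * (x - mean(x)) has the same mean as y
    but lower variance when x and y are correlated.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    theta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())
```

The stronger the correlation between pre- and post-experiment behavior, the bigger the variance reduction, which is why it can shorten experiment runtimes.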
One more thing for outliers that you may not have seen. We let you specify a cap for metrics, which is useful for limiting the impact huge outliers have on experiment results. For example, if most orders are under $10, you might set the cap at $20, so even if you get a random bulk order for $1000, it will only be counted as $20 in the analysis.
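The capping behavior described above amounts to winsorizing each value at the cap. A tiny sketch (illustrative function names, not GrowthBook's API):

```python
def cap_metric(values, cap):
    """Winsorize: any metric value above `cap` is counted as `cap`."""
    return [min(v, cap) for v in values]

# The $1000 bulk order from the example only contributes $20:
cap_metric([5, 8, 1000], 20)  # → [5, 8, 20]
```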
Thanks for the response! 🙂 I look forward to following the developments here. Yes, the cap is a good idea. I’ll use that for now. Thanks!
Hey! Just wanted to follow up here with some more context. There are two things I'm after: one is the metrics in the pre-period, for checking that the groups are the same as discussed 👆; the other is the post-period, to see the long-term effect. This article has a nice visualization of what I'm after. It would be nice to follow the experiment participants for a self-defined time after the experiment too 🙂