The second thing relates to providing a confidence estimate for a given experiment when it is analysed at different dimension levels. I understand your concern about the multiple-testing problem; still, at least at Typeform, we need that number. IMHO, it is better to have it in the toolset and let people decide whether to use it, than to not have it and limit what the end user can decide to do
10/07/2021, 12:49 PM
I think we can come up with a way to make this work, perhaps as an account setting or toggle on the page for whether or not to show full stats. I think by default we don't want to show numbers that we know may be wrong. But I see the value in letting someone dig in, especially if they understand the stats and caveats behind it.
10/07/2021, 2:30 PM
Yeah, we thought about some account-level or data-source-level setting to show the prob. to beat and loss when splitting by segments. This would save us from exporting a notebook just to validate whether one of the numbers we're seeing in the tool is worth reporting to the team (as an added insight)
Of course, this is the first step in a longer-term question of how to surface the most salient results from all segment comparisons while avoiding false positives.
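One standard way to surface salient segment results while controlling false positives is false-discovery-rate control, e.g. the Benjamini-Hochberg procedure applied to the per-segment p-values. A minimal sketch, purely illustrative and not the tool's actual implementation (function name, threshold, and data are made up):

```python
# Sketch (not the tool's actual code): Benjamini-Hochberg
# false-discovery-rate control across per-segment p-values.
def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of segments whose results survive FDR control at level alpha."""
    m = len(p_values)
    # Sort p-values ascending, remembering original segment indices.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k_max = rank
    # Every segment at or below that rank counts as a discovery.
    return sorted(order[:k_max])

# Hypothetical p-values from comparing one metric across 5 segments.
print(benjamini_hochberg([0.001, 0.009, 0.04, 0.2, 0.6]))  # → [0, 1]
```

Only the first two segments survive the correction here, even though the third would look "significant" at a naive 0.05 cutoff.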
10/13/2021, 2:42 PM
Just fixed this in the latest Docker build. It lets you toggle on the full stats engine when breaking down by dimension. For small dimensions (5 or fewer unique values), we turn on full stats by default, and for bigger dimensions it's off by default.
10/13/2021, 7:23 PM
You are the best!!!
I am looking at it right now, looks way better! Thank you for listening