# ask-questions
f
We've had a test running for 1-2 days now and it popped up with a suspicious tag on the results (which, not going to lie, does seem crazy and I'd probably consider suspicious too), but are there any things I should be taking a look at? Should I be double-checking our implementation, or making sure the event is actually firing, etc.?
Any thoughts? Still seeing suspicious this morning; I was hoping it'd go away after running a few more days.
s
Not with GrowthBook, but they seem really slow to respond. I would recommend looking into the implementation of the variant feature. It's also possible the variant really is devastating to your metric. We ran a test that showed suspicious as well. My understanding is that the "suspicious" designation is used to flag a major statistical disparity. When we ran our test, though, we had predicted the variant would drastically increase our metric, right into the suspicious range. So TLDR:
Check the implementation, check that data is syncing correctly, and check anything else you can think of that might make the test results unreliable; if it's none of those, you should start to consider whether the variant really is inferior to your control. (See the sketch after this message for one way to confirm the exposure event is firing.)
The variant is the changed feature and the control is the status quo, correct?
Do you have Hotjar or any other user-recording tools? You could see if users are having trouble.
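As a concrete way to sanity-check the "is the event actually firing" part: below is a minimal sketch using the GrowthBook JavaScript/TypeScript SDK's `trackingCallback` to log every experiment exposure before it is sent to your analytics tool. The `clientKey`, feature key, and analytics call are placeholders, and option/method names can vary slightly by SDK version, so treat it as a pattern to adapt rather than a drop-in snippet.

```ts
import { GrowthBook } from "@growthbook/growthbook";

// Sketch: log every exposure so you can confirm the experiment event
// actually fires for both variations. Keys and URLs below are placeholders.
const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-REPLACE_ME",
  attributes: { id: "user-123" }, // whatever hashing attribute the test uses
  trackingCallback: (experiment, result) => {
    // Fires whenever a user is assigned to an experiment. If you never see
    // this in the console / your analytics, the exposure event is not being
    // sent and the results table can't be trusted.
    console.log("[exposure]", experiment.key, "variation:", result.variationId);
    // analytics.track("Experiment Viewed", { ... })  <-- your real tracking call
  },
});

async function run() {
  await gb.loadFeatures(); // older SDK versions; newer versions use gb.init()
  // Hypothetical feature key for the homepage button test:
  const showNewButton = gb.isOn("homepage-button-test");
  console.log("variation in use:", showNewButton);
}

run();
```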
f
All good about the "not with GrowthBook"; I think having a Slack gives the community the ability to help each other out with what we've experienced, so thanks for commenting. That's kind of what I was thinking as well, which is why I was hoping that letting it run a day or two more might right the ship a bit. I haven't checked the numbers today, but I was informed it still has the tag. Might just recheck that things are firing correctly and that it wasn't just a giant change in the numbers.
s
That's a good proactive approach for sure. I will say GrowthBook may display winner/loser designations for metrics early in the test that lose statistical significance over time as the sample size increases. I had a metric go from +20% to +1.5% as the sample size grew 10x, so the variant was declared the "winner" on that metric and then the designation was taken away. So it's possible that as the sample size increases, the % change will shift as well. However, if you have a large sample size, have run the test across a variety of days (our weekend users behave differently than weekday users, for instance), and it is hurting revenue beyond acceptable levels, I would recommend being proactive to make sure everything is set up correctly. I am also VERY curious what change was made to a homepage button that would cause such a drastic decrease in click-through rate. But I'm nosy. Don't mind me
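To make the sample-size point concrete, here's a rough back-of-the-envelope sketch in plain TypeScript (a normal approximation, not GrowthBook's actual Bayesian engine) showing how the uncertainty band around a measured lift shrinks as traffic grows. The conversion numbers are made up purely for illustration.

```ts
// Rough 95% interval for the relative lift between two conversion rates,
// using a simple normal approximation. This is NOT how GrowthBook computes
// its results; it just illustrates why an early +20% can melt away.
function liftWithInterval(convA: number, nA: number, convB: number, nB: number) {
  const pA = convA / nA;
  const pB = convB / nB;
  const lift = pB / pA - 1;
  // Standard error of the difference in proportions
  const se = Math.sqrt((pA * (1 - pA)) / nA + (pB * (1 - pB)) / nB);
  const z = 1.96; // ~95% confidence
  return {
    lift,
    low: (pB - pA - z * se) / pA,
    high: (pB - pA + z * se) / pA,
  };
}

// Hypothetical numbers: small sample early in the test vs. 10x the traffic.
console.log(liftWithInterval(50, 1_000, 60, 1_000));
// => lift ~ +20%, but the interval spans roughly -20% to +60%

console.log(liftWithInterval(500, 10_000, 530, 10_000));
// => lift ~ +6%, with a much tighter interval (roughly -6% to +18%)
```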