# experimentation
Hey everyone! I was curious how you track lagging metrics for users who have been part of an experiment. GrowthBook doesn't support (correct me if I'm wrong) continuing to track a user after an experiment has finished. So, for example, let's say we run an experiment for 4 weeks trying to impact our core business metric, which we can call C. That metric is lagging and might only occur an additional 2-4 weeks after the experiment has concluded, so we also have a more sensitive metric (P) that strongly correlates with metric C. The only way I can think of to achieve what I want is to let the experiment continue running but change the traffic included in the experiment to 0%. I'm not sure how that would affect the users already allocated though, whether they'll continue seeing the variation or not. Any ideas would be very helpful!
👀 1
Hey Nicolaus! I think you have 3 options:

1. Adjust the phase end date manually after stopping the experiment. I don't love this option, since the phase also contains information about how you rolled out the experiment in that phase, so knowing when it ended is valuable.
2. Create a custom report and just remove the end date in there! Then it's a custom analysis as if your experiment never ended. This is my preferred approach.
3. If you are using conversion windows, you can just create that core business metric C with a 5000-day conversion window, which I think will extend beyond the end of the experiment (see https://docs.growthbook.io/app/metrics#metric-windows). This can interact with the Conversion Window Override in your analysis settings, but if that is set to Respect Conversion Windows, then your very long conversion window should extend beyond the experiment and let you refresh your main results dashboard with new data even as the experiment ends.
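To make the conversion-window idea in option 3 concrete, here's a minimal sketch of the underlying logic (not GrowthBook's actual implementation; the types, names, and dates are illustrative). The point is that a metric window is anchored to each user's first exposure, so a long enough window keeps counting conversions that land well after the experiment phase has ended:

```ts
// Illustrative only: how a conversion window is evaluated conceptually.
// Timestamps are milliseconds since epoch.
const HOUR_MS = 60 * 60 * 1000;

interface ExposureEvent {
  userId: string;
  timestamp: number; // first time the user saw the experiment
}

interface ConversionEvent {
  userId: string;
  timestamp: number;
}

// A conversion counts if it lands within `windowHours` of the user's first
// exposure, regardless of when the experiment phase ended.
function countsTowardMetric(
  exposure: ExposureEvent,
  conversion: ConversionEvent,
  windowHours: number
): boolean {
  const delta = conversion.timestamp - exposure.timestamp;
  return delta >= 0 && delta <= windowHours * HOUR_MS;
}

// With a 5000-day window (5000 * 24 hours), a conversion weeks after the
// experiment stops still falls inside the window for anyone exposed during it.
const exposure = { userId: "u1", timestamp: Date.parse("2024-01-01") };
const lateConversion = { userId: "u1", timestamp: Date.parse("2024-02-20") };
console.log(countsTowardMetric(exposure, lateConversion, 5000 * 24)); // true
```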
🙌 1
> The only way I can think of to achieve what I want is to let the experiment continue running but change the traffic included in the experiment to 0%. I'm not sure how that could affect the users already allocated though, if they'll continue seeing the variation or not.

I don't recommend doing stuff to the actual experiment rollout to achieve this, given there are a few "analysis only" options available above! LMK if you have other questions 🙂
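For a sense of why touching the rollout is risky, here's a minimal sketch with the JS SDK's inline experiments, where `coverage` is the SDK-level analogue of the traffic percentage. If I'm reading the SDK's bucketing model right, inclusion is re-derived by deterministically hashing the user's `hashAttribute` on every evaluation rather than by remembering a prior allocation, so dropping traffic to 0% would push previously assigned users back to the default experience instead of freezing them in their variation. The experiment key and user id below are hypothetical:

```ts
import { GrowthBook } from "@growthbook/growthbook";

// Hypothetical user; the `id` attribute is what gets hashed for assignment.
const gb = new GrowthBook({
  attributes: { id: "user-123" },
});

// Inline experiment sketch: `coverage` is the fraction of traffic included.
const result = gb.run({
  key: "lagging-metric-test", // hypothetical experiment key
  variations: ["control", "treatment"],
  coverage: 0, // "0% traffic" at the SDK level
  hashAttribute: "id",
});

// With coverage at 0 the hash check excludes everyone, so a user who was
// previously bucketed gets the fallback value on the next evaluation
// (no per-user allocation is stored client-side by default).
console.log(result.inExperiment); // false
console.log(result.value); // fallback value, typically the first variation
```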
Thanks for the thorough answer @helpful-application-7107! Options 2 and 3 definitely sound better than the idea I had 😅 Will test them out and see which approach feels best. Thanks again!
Adding this comment in case someone reads this thread in the future. Suggestion number 2 that Luke proposed worked great and just as intended. One thing you need to remember is to go into the configuration of the ad-hoc report and clear the end date so that you actually pick up the lagging events. It does not add new users to the generated results either.

I tested this by first stopping the experiment and then triggering my events. When reloading the results, I did indeed see the number go up in the ad-hoc report, while in the experiment it stayed the same as when I stopped it. After that I opened an incognito window (to get a new cookie) and did the same thing. The results in the ad-hoc report were, as anticipated, unchanged.

IMO you could add this to the documentation for ad-hoc reports, since I think it's a powerful "feature" that could easily be missed. I initially thought (could be my lack of understanding) that ad-hoc reports were only applicable to the timespan of the actual test, for example if you had missed adding certain metrics and so on.
Thanks, that's a reasonable suggestion. We're working on improving the power and functionality of custom reports now, so that feedback is useful as we shape their future.
✅ 1