# ask-questions
s
Hi everyone and good week! Does setting an activation metric event (which we weren't doing before) for an experiment change the firing of the `experiment_viewed` event?
r
Hi Ogulcan, Happy Monday and thank you for writing in! Setting an activation metric for an experiment does not change the firing of the `experiment_viewed` event. The `experiment_viewed` event is fired every time an assignment event happens, and GrowthBook de-duplicates these on the analysis side. The activation metric is a separate event used to filter users for analysis. It's used when you need to assign your users into variations before they see the actual experiment. For example, they may not have scrolled to see the module, or it's in a modal. You can fire an event which will record the actual viewing of the experiment, create a metric for this event in GrowthBook, and then add that as an activation metric for the experiment. So, the `experiment_viewed` event and the activation metric event are separate and serve different purposes. The `experiment_viewed` event is related to the assignment of users to variations, while the activation metric event is used for filtering users in the analysis. Hope this helps :)
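To picture how the two events interact, here is a minimal sketch (not GrowthBook's actual code) of exposure de-duplication plus activation filtering, using hypothetical event records and event names:

```python
# Hypothetical sketch of analysis-side behavior (not GrowthBook's real
# implementation): exposure events are de-duplicated per user, and the
# activation metric then filters the exposed users.

# Each record: (user_id, event_name)
events = [
    ("u1", "experiment_viewed"),
    ("u1", "experiment_viewed"),  # duplicate assignment event, de-duplicated
    ("u1", "open_modal"),         # activation metric event
    ("u2", "experiment_viewed"),  # assigned, but never activated
]

# De-duplicate exposures: one entry per user
exposed = {u for u, e in events if e == "experiment_viewed"}

# Activation filter: keep only exposed users who fired the activation event
activated = {u for u, e in events if e == "open_modal"} & exposed

print(sorted(exposed))    # ['u1', 'u2']
print(sorted(activated))  # ['u1']
```

Both users are assigned (and would fire `experiment_viewed`), but only `u1` would be included in an analysis gated by the activation metric.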
s
Thank you for your swift response.
b
Hi @flaky-noon-11399! I was looking for this topic, but I have further questions. "The activation metric is a separate event used to filter users for analysis. It's used when you need to assign your users into variations before they see the actual experiment." "The `experiment_viewed` event is related to the assignment of users to variations." To my understanding, this is a contradiction. Could you please help me better understand how we should set up an experiment if we only want users to be in the experiment, e.g., after opening a modal? Thank you so much!
r
Hi Anna, Thank you for your response and apologies for the confusion. You will need to configure an activation metric to handle this case. This lets you bucket users into variations before knowing whether they are actually exposed to the variation. A common example of this is with website modals, where the modal code needs to be loaded on the page with the experiment variations, but you're not sure if each user will actually see the modal. With activation metrics, you can specify a metric that needs to be present to filter the list of exposed users to just those with that event. This means you can place users in the experiment when the page loads, but only include them in the experiment analysis if they open the modal (i.e., trigger the activation metric event). Hope this helps :)
b
Hi @flaky-noon-11399! Thank you for your response! Just to make sure I understand it correctly, I wrote an example. Let's say my XP runs on a page where we make design variations inside a modal, which needs to be opened. For that we have an event `open_modal`, and we set it up as the activation metric. I have a test with only 2 versions, original (v0) and a new version (v1), and let's say my goal is the `click_inside_the_modal` event. Then, if:
• 10,000 users visit the page for each of v0 and v1
• 1,000 open the modal in v0, and 100 in v1
• and 10 users click the conversion event in each
Then without the activation metric it would mean 10/10,000 for both (so v0 and v1 are equal). And with the `open_modal` activation metric, it would be 10/1,000 for v0 and 10/100 for v1, so I would have a huge win for v1? Thank you in advance!
r
Hi Anna, sorry for the delayed response here. I will now check this with the team and get back to you shortly. Thank you for your ongoing patience 🌼
b
Thank you 😊
r
Hi Anna, apologies for the delay (again). Your understanding of how activation metrics work is correct. I've repeated it below in bullet points.

*The experiment:*
- The experiment is testing design variations within a modal (control and variation)
- There is an `open_modal` event to track this, and it's set up as an activation metric
- 10,000 users visit each variation
- 1,000 open the modal in the control; of those, 10 click the conversion event
- 100 open the modal in the variation; of those, 10 click the conversion event

*For results analysis:*
- Without the activation metric it would be 10/10,000 for each variation
- With the activation metric it would be 10/1,000 for the control (1%)
- With the activation metric it would be 10/100 for the variation (10%)
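The arithmetic in those bullets can be checked in a few lines (numbers taken from the example above):

```python
# Conversion rates from the worked example above
visitors = 10_000                  # users assigned to each variation
opened_v0, opened_v1 = 1_000, 100  # open_modal (activation) counts
converted = 10                     # conversion clicks in each variation

# Without the activation metric: denominator is everyone assigned
rate_v0_all = converted / visitors   # 0.001 (0.1%) for both
rate_v1_all = converted / visitors

# With the activation metric: denominator is activated users only
rate_v0_act = converted / opened_v0  # 0.01  (1%)
rate_v1_act = converted / opened_v1  # 0.10  (10%)

print(rate_v0_all, rate_v1_all)  # 0.001 0.001
print(rate_v0_act, rate_v1_act)  # 0.01 0.1
```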
Something else to add, Anna, is that in the test described above you'd actually see a Sample Ratio Mismatch (SRM) error as well, because your experiment is split 50/50 (10,000 users in each variation), but with the activation metric applied you'd see 1,000 activated users in the control and 100 in the variation, a 90.9% to 9.1% split! You might want to read our long-form documentation on using activation metrics: https://docs.growthbook.io/kb/experiments/troubleshooting-experiments#solution-2-only-use-activation-metrics-that-do-not-differ-across-variations
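A quick way to see why this would trigger an SRM warning is a hand-rolled chi-square check against the expected 50/50 split (a sketch for illustration, not GrowthBook's implementation):

```python
# Hand-rolled chi-square SRM check (illustrative sketch only)
observed = [1_000, 100]             # activated users: control, variation
expected = [sum(observed) / 2] * 2  # 50/50 split expected: 550 each

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value for 1 degree of freedom at p = 0.05 is ~3.84,
# so anything far above that strongly suggests an imbalanced split.
print(round(chi2, 2), chi2 > 3.84)  # 736.36 True
```

A statistic this large is overwhelming evidence that the activated populations differ between variations, which is exactly why the docs recommend activation metrics that don't differ across variations.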
b
Hi @brief-honey-45610! Thank you so much for the clarification! 🙂