# announcements
i
Hi, is there a way to exclude a user from an experiment based on their actions/events?
f
There are two ways currently: If the user's actions are known to the SDK, they can be set as an attribute, and you can target the feature based on that. Alternatively, you can exclude a user or set of users when running the experiment analysis (using segments, or custom filters).
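For the segments route: with a SQL data source, a segment is basically a query that returns the set of users to include in the analysis. A rough sketch, assuming a raw events table (all table, column, and event names here are made up):

```sql
-- Hypothetical segment: users who fired a particular qualifying event.
-- The events table and its user_id / event_name / timestamp columns are assumptions.
SELECT
  user_id,
  MIN(timestamp) AS date
FROM events
WHERE event_name = 'signed_up'
GROUP BY user_id
```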
i
What do you mean by custom filters?
f
if you go to customize an experiment analysis, there is a custom SQL filter at the bottom
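A filter could look something like this, assuming it's applied as a WHERE-style condition (the events table and event name are made up):

```sql
-- Hypothetical filter: drop users who fired a disqualifying event.
-- Assumes a user_id column is available to filter on.
user_id NOT IN (
  SELECT user_id
  FROM events
  WHERE event_name = 'disqualifying_event'
)
```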
i
Ah, ok. I see it. That's definitely useful. 🙂
And this will be added to experiment assignment queries under data sources?
f
yes, for that one experiment
i
Hm, I'm not sure that would work. In our case the experiment assignment (A/B for example) triggers first, and then based on the next step we'd either exclude you (if you click a link to login - meaning you're not a new user) or include you (do nothing) if you continue.
f
you could add an activation metric for that event
i
What does that mean? Add additional criteria to metric SQL?
f
Activation metrics are useful if your assignment and experiment viewing are separate - like if you had an A/B test in a modal that not every user saw, but you had to render the test beforehand. You could then fire an event on modal open and use that event as an activation metric, to include only the users who actually saw the experiment
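For a SQL data source, the activation metric itself is just a normal metric pointed at that event - something like this sketch (table and event names are made up):

```sql
-- Hypothetical activation metric: one row per modal-open event,
-- so only users who actually saw the experiment get analyzed.
SELECT
  user_id,
  timestamp
FROM events
WHERE event_name = 'modal_opened'
```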
i
I saw that in the docs but didn't quite understand it.
So we'd fire another event, then in the metric SQL basically use that as a filter so that only the users who should be included end up in the analysis?
f
yes
exactly, you can add that as an activation metric to the experiment analysis
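So concretely for your flow: fire a new event when the user continues past that step, and point the activation metric at it - a sketch (the event name is made up):

```sql
-- Hypothetical activation metric for this case: only users who fired
-- the "continued as a new user" event are included in the analysis.
SELECT
  user_id,
  timestamp
FROM events
WHERE event_name = 'continued_as_new_user'
```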
i
Ok, that would work yep. Thanks.
Wouldn't that skew the results since we'll still have more participants? Or would one just add the same logic to the SQL for experiment assignment?
f
the number of activated users will be less than the total bucketed - and as long as it's a random assignment, it should be the same percentage split. For example, if 10,000 users are bucketed 50/50 and only 30% of them activate, you'd still expect roughly 1,500 activated users per variation.
it actually reduces noise generally, since users who never saw the experiment can't be affected by it
i
Ok. But the problem is that we don't have that exclusion rule (link) on both versions. 🤔
f
Hard to A/B test if the versions aren't symmetrical in the metrics
We would be happy to help you talk through ways to measure what you want if you like
i
I think we can try figuring it out by adding an SQL filter and maybe adjusting the metrics to not count excluded people. If we fail miserably, I'll definitely reach back out. 🙂 Thanks