# ask-questions
Does GrowthBook only work on splitting users? Wouldn't it be valuable to have the ability to see how the same user responds to different treatments?
You can do both!

1. GrowthBook uses deterministic hashing to assign users to experiment variations, ensuring that the same user always gets the same variation given *the same experiment settings* (experiment key and user hashing id). This approach is designed to maintain consistency across multiple pages or applications for a given user. Therefore, by default, a user would not be exposed to different variations within the same experiment.
2. However, if you want to test how the same user responds to different variations, you would need to set up *separate experiments* with different experiment keys or user hashing ids. This way, you could track a user's response to different variations across these separate experiments.
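For intuition, here's a minimal sketch of what deterministic bucketing like this looks like. This is illustrative only: GrowthBook's actual SDKs use their own FNV-based hashing with configurable weights, namespaces, and seeds, so treat the function names and the two-variation 50/50 split below as assumptions.

```typescript
// FNV-1a 32-bit hash: a simple, stable string hash (illustrative choice)
function fnv1a32(str: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

// Map a (userId, experimentKey) pair to a bucket in [0, 1)
function getBucket(userId: string, experimentKey: string): number {
  return (fnv1a32(userId + experimentKey) % 1000) / 1000;
}

// Two equal-weight variations: identical inputs always yield the same variation
function assignVariation(userId: string, experimentKey: string): 0 | 1 {
  return getBucket(userId, experimentKey) < 0.5 ? 0 : 1;
}

assignVariation("user-123", "exp-checkout");    // stable across calls and devices
assignVariation("user-123", "exp-checkout-v2"); // new key: independent reshuffle
```

Because the bucket depends only on the user id and the experiment key, re-running the assignment for the same pair always returns the same variation, while a new experiment key reshuffles users independently; that independence is what makes point 2 work.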
I think Aaron is asking about the case where you can compare the same user under both treatments within the context of a single experiment. This is an approach used in switchback experiments, but it opens up a whole set of statistical challenges that standard experimentation stats can't really solve. In fact, getting that "ability to see how the same user responds to both" is impossible, since the user in control in week 2, after getting the treatment in week 1, is not the same as the user getting the control in week 1. Experimentation explicitly tries to create this "counterfactual" user, but it does so across experimental units rather than within a single user.
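To make the carryover problem concrete, here's a toy simulation with made-up numbers (the +1 true effect, +0.5 carryover, and baselines are pure assumptions): each user gets treatment in one week and control in the other, but treatment in week 1 leaks into that user's week 2 outcome, so the naive within-user comparison comes out biased.

```typescript
// Toy within-user (switchback-style) design with carryover.
// Assumptions: true treatment effect = +1; being treated in week 1
// adds a +0.5 carryover to the same user's week 2 outcome.
function naiveWithinUserEstimate(nUsers: number): number {
  let treatedSum = 0;
  let controlSum = 0;
  for (let i = 0; i < nUsers; i++) {
    const baseline = 10 + Math.random();    // user-level noise, shared by both weeks
    const treatFirst = Math.random() < 0.5; // randomize treatment order

    const week1 = baseline + (treatFirst ? 1 : 0);
    const carryover = treatFirst ? 0.5 : 0; // week 1 treatment contaminates week 2
    const week2 = baseline + carryover + (treatFirst ? 0 : 1);

    const [treated, control] = treatFirst ? [week1, week2] : [week2, week1];
    treatedSum += treated;
    controlSum += control;
  }
  // Naive difference in means across all treated vs. control weeks
  return treatedSum / nUsers - controlSum / nUsers;
}

console.log(naiveWithinUserEstimate(100_000)); // ~0.75, not the true +1
```

Half of the "control" observations here come from weeks that follow a treated week, so the estimate converges to about 0.75 instead of 1. The direction and size of the bias depend entirely on the carryover, which you usually can't observe.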
Are you saying it should statistically be treated as a completely new user? Wouldn't that be ok if we're not testing users, but rather the scenarios themselves, and if the metric is tied to the scenario?
This is a pretty nuanced discussion, but basically let's say you randomize and analyze on `scenario_id` instead of on `user_id`. You can do this in GrowthBook if you set it up to hash on a different id type. In this setting, you are violating a central assumption that undergirds experimentation: no spillovers (also referred to as the stable unit treatment value assumption, or SUTVA). Because whether a `user_id` got treatment in `scenario 1` affects their reaction to treatment or control in `scenario 2`, the results of `scenario 1` and `scenario 2` are no longer independent. This is the subject of a large body of work in experimentation, and it is an issue that experimentation platforms like those at Lyft (where treating driver 1 in Palo Alto also affects driver 2) and Amazon (where changing the price of good 1 may affect the sales of good 2) have to deal with explicitly. There are almost always some spillovers, but we tend to hope they are small. Once you start experimenting within user, it becomes very unlikely that they are small.
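For reference, hashing on a different id type looks roughly like this with the GrowthBook JavaScript SDK. The `hashAttribute` option and the inline-experiment shape follow the documented SDK, but the attribute names and values here are hypothetical:

```typescript
import { GrowthBook } from "@growthbook/growthbook";

const gb = new GrowthBook({
  attributes: {
    id: "user-123",            // default hashing attribute
    scenarioId: "scenario-42", // alternate id type to randomize on
  },
});

// An inline experiment that buckets on scenarioId instead of the user id,
// so the same user can land in different variations across scenarios
const result = gb.run({
  key: "scenario-treatment",
  variations: ["control", "treatment"],
  hashAttribute: "scenarioId",
});

result.value; // a given scenarioId always maps to the same variation
```

Keep in mind this only moves the randomization unit; it does nothing about the spillover problem above.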