# ask-ai
Experimentation capabilities: What is the list of all experimentation types Growthbook can support? @brash-pillow-71627
A message from kapa.ai
GrowthBook supports a variety of experimentation types. Here are some of them:

1. Server-Side and Mobile Experiments: Server-side A/B testing, also known as backend or server-side experimentation, is a technique used in software development and web applications to test and measure the impact of code changes or new features. The decision about which version to serve a user is made on the server. This allows you to run very complex tests that may involve many different parts of the code and span multiple parts of your application ([source](https://docs.growthbook.io/experiments)).
2. Feature Flag Experiments: With GrowthBook, the easiest way to do server-side testing is with feature flags. Each feature flag has conditional logic that controls how a feature should be shown, and whether it should be shown as part of an experiment. GrowthBook also lets you target any feature or rule based on the targeting attributes you define ([source](https://docs.growthbook.io/experiments)).
3. Custom Assignment or 3rd Party Experiments: GrowthBook can be used for experiment analysis if you are using custom assignment code or another experimentation system to randomize users into variations. As long as the exposure/assignment information is available in your data warehouse, you can use GrowthBook to analyze the results of your experiments ([source](https://docs.growthbook.io/experiments#custom-assignment-or-3rd-party-experiments)).
4. API / ML Experiments: GrowthBook's SDKs work anywhere code can run, so you can use them in an API or when running machine learning models. And with GrowthBook's deterministic hashing method for assignment, you can be sure users get assigned the same variation across your platform without needing to store state ([source](https://docs.growthbook.io/experiments#api--ml-experiments)).
5. Mutually Exclusive Experiments: GrowthBook supports creating experiments with mutually exclusive visitors ([source](https://linen.growthbook.io/t/2747985/hey-u01t6hchd0a-just-had-a-few-follow-up-questions-on-growth)).

Please note that Multi-Armed Bandit experiments are not yet supported, but they are on the roadmap ([source](https://linen.growthbook.io/t/2747985/hey-u01t6hchd0a-just-had-a-few-follow-up-questions-on-growth)).
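The deterministic assignment mentioned for API/ML experiments can be illustrated with a small sketch. This is not GrowthBook's actual SDK implementation, just the core idea it describes: hash a stable user attribute together with the experiment key into a bucket, so the same user lands in the same variation everywhere without any stored state.

```javascript
// Simplified illustration of deterministic variation assignment.
// Not GrowthBook's real algorithm -- just the idea: hash (userId + experiment
// key) into [0, 1), then map that position to a variation index.

function fnv1a(str) {
  // FNV-1a-style 32-bit hash of a string
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

function assignVariation(userId, experimentKey, numVariations) {
  // Same (userId, experimentKey) always hashes to the same bucket,
  // so no assignment state needs to be stored anywhere.
  const bucket = fnv1a(userId + experimentKey) / 0xffffffff; // [0, 1]
  return Math.min(Math.floor(bucket * numVariations), numVariations - 1);
}

// A user gets the same variation on web, mobile, and backend:
const v = assignVariation("user-123", "new-checkout", 2);
console.log(v === assignVariation("user-123", "new-checkout", 2)); // true
```

The names `fnv1a` and `assignVariation` are hypothetical helpers for this sketch; in practice the SDKs handle hashing and bucketing for you.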
Example of experimentation types: A/B testing, multivariate testing. @brash-pillow-71627
A message from kapa.ai
GrowthBook supports several types of experimentation, including A/B testing and a form of multivariate testing.

1. A/B Testing: GrowthBook allows you to run A/B tests using feature flags. This lets any feature be released as an A/B test. It is ideal for more complex experiments requiring multiple code changes, or for companies that want an experimentation-driven development culture and want to determine the impact of any feature or code change on their metrics ([source](https://docs.growthbook.io/feature-flag-experiments)).
2. Multivariate Testing: While GrowthBook does not offer built-in support for multivariate testing, you can achieve a similar result by listing out all the permutations as separate variations. For example, if you have 2 variables, each with 2 possible states, you could create a 4-way test with all the combinations: AA/AB/BA/BB ([source](https://linen.growthbook.io/t/15646260/hey-team-just-wondering-if-growth-book-offers-multivariate-t)).

In addition to these, GrowthBook also supports A/A testing. An A/A test is run just as an A/B test would be, but the variations have no actual differences in the application. This lets you verify that your systems are working correctly, as you should see no significant differences between the variations ([source](https://docs.growthbook.io/kb/experiments/aa-tests#how-do-i-run-an-aa-test-in-growthbook)).
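Listing out the permutations can be done mechanically. The sketch below is a hypothetical helper (not part of any GrowthBook SDK) that expands independent variables into the combined variations you would then enter as separate arms of one test:

```javascript
// Expand independent variables into combined variations for a
// pseudo-multivariate test. Hypothetical helper, not a GrowthBook API.

function permutations(variables) {
  // variables: { name: [possible values] }
  // Fold each variable into the running list of combinations.
  return Object.entries(variables).reduce(
    (combos, [name, values]) =>
      combos.flatMap((combo) => values.map((v) => ({ ...combo, [name]: v }))),
    [{}]
  );
}

// 2 variables x 2 states each = 4 variations (AA / AB / BA / BB):
const variations = permutations({
  buttonColor: ["A", "B"],
  headline: ["A", "B"],
});
console.log(variations.length); // 4
```

Note that the number of variations grows multiplicatively (3 variables with 2 states each would already need 8 arms), which is why this approach is only practical for a small number of variables.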
What is the link to the Growthbook documentation describing how the results of experiments are presented in the UI? @brash-pillow-71627
A message from kapa.ai
You can find detailed information on how the results of experiments are presented in the GrowthBook UI in the [Experiment Analysis section](https://docs.growthbook.io/quick-start#step-3-view-experiment-results) of the Quick Start Guide. This guide explains that once you're inside your experiment, you can click on the Results tab to view the results. You may need to edit your analysis settings and pick the metrics you created earlier if you haven't already. For a more in-depth understanding of these results and how to interpret them, you can check out the dedicated page in the GrowthBook documentation [here](https://docs.growthbook.io/app/experiment-results).
Does Growthbook adhere to relevant laws and regulations regarding data protection (e.g., GDPR, CCPA)? @brash-pillow-71627
A message from kapa.ai
Yes, GrowthBook takes data protection seriously and is compliant with several key regulations, including the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Health Insurance Portability and Accountability Act (HIPAA). They perform regular penetration testing, have an active bug bounty program, and maintain strict controls to protect data privacy ([source](https://docs.growthbook.io/compliance)).

GrowthBook does not store any personally identifiable information (PII). Users utilize their own data sources to collect event data from their experiments, and GrowthBook is only granted "read" access to those data sources. This means that all user data remains under the user's control ([source](https://linen.growthbook.io/t/16026936/hey-all-can-anyone-tell-me-if-growthbook-is-gdpr-compliant-w)). For companies concerned about data security, GrowthBook offers the option to self-host the platform instead of using the cloud-hosted version ([source](https://linen.growthbook.io/t/16026936/hey-all-can-anyone-tell-me-if-growthbook-is-gdpr-compliant-w)).

You can request access to GrowthBook's certifications, as well as their most recent penetration test report, incident response plan, access control policy, and more, from their [Trust Center](https://trust.growthbook.io/).
Where Can I find the product road map? @brash-pillow-71627
A message from kapa.ai
Based on the information available, GrowthBook does not currently have a public roadmap. The team has mentioned in several forum conversations that they are working on making one available ([source 1](https://linen.growthbook.io/t/2751884/hey-all-is-there-a-page-to-see-the-official-roadmap-for-grow), [source 2](https://linen.growthbook.io/t/5098111/what-s-the-best-way-to-keep-track-of-growthbook-s-roadmap-e-), [source 3](https://linen.growthbook.io/t/12350821/hi-there-i-m-wondering-about-approval-flows-sequences-in-gro), [source 4](https://linen.growthbook.io/t/16932042/does-growthbook-let-you-automatically-let-you-increase-traff)). In the meantime, you can keep track of new features and updates by following the public pull requests on the [GrowthBook GitHub page](https://github.com/growthbook/growthbook/pulls) or by joining the [GrowthBook Slack community](https://slack.growthbook.io?ref=docs-home), where you can chat with the team and other users and stay updated on new features ([source](https://docs.growthbook.io#documentation)).
What is the solution provided by GB to run multiple experiments without the risk of audience contamination? @brash-pillow-71627
A message from kapa.ai
GrowthBook provides a solution to run multiple experiments without the risk of audience contamination by using namespaces. Namespaces allow you to segment your audience and prevent interactions between different experiments. This ensures that the same user is not exposed to multiple experiments simultaneously, thus avoiding any potential contamination or interference ([source 1](https://linen.growthbook.io/t/18476029/hi-do-you-know-if-there-is-a-way-via-growth-book-to-filter-u), [source 2](https://linen.growthbook.io/t/18793551/hello-is-there-a-way-to-run-2-experiments-in-an-exclusive-wa), [source 3](https://linen.growthbook.io/t/9020974/hello-guys-in-growthbook-have-a-way-to-run-two-or-more-a-b-t)). If you want to run two experiments in an exclusive way, you would use separate namespaces for each experiment. If you roll out using a forced rule instead of an experiment, that would apply across all traffic and would not be namespace-specific ([source](https://linen.growthbook.io/t/18793551/hello-is-there-a-way-to-run-2-experiments-in-an-exclusive-wa)). You can find more information about namespaces in the [GrowthBook documentation](https://docs.growthbook.io/features/rules#namespaces).
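As a rough mental model of how namespaces keep experiments exclusive: each experiment claims a non-overlapping slice of the namespace's 0-to-1 range, and a user's hashed position within that namespace decides which experiment (at most one) they can enter. The sketch below is a simplified illustration of that model, not the SDK's internal code:

```javascript
// Simplified model of mutually exclusive experiments via a namespace.
// Each experiment owns a non-overlapping slice of [0, 1); a user's hashed
// position falls into at most one slice. Not GrowthBook's real SDK code.

function hashToUnitInterval(str) {
  // Deterministically map a string to a position in [0, 1)
  let h = 0;
  for (let i = 0; i < str.length; i++) {
    h = (Math.imul(h, 31) + str.charCodeAt(i)) >>> 0;
  }
  return h / 0x100000000;
}

const namespace = "checkout-tests";
const experiments = [
  { key: "exp-a", range: [0, 0.5] }, // first half of the namespace
  { key: "exp-b", range: [0.5, 1] }, // second half
];

function eligibleExperiment(userId) {
  // Hashing against the namespace (not the experiment) is what makes
  // the two experiments exclusive: one position, at most one slice.
  const pos = hashToUnitInterval(userId + "__" + namespace);
  const exp = experiments.find((e) => pos >= e.range[0] && pos < e.range[1]);
  return exp ? exp.key : null;
}

console.log(eligibleExperiment("user-42"));
```

The names `hashToUnitInterval` and `eligibleExperiment` are hypothetical; in GrowthBook you simply assign each experiment a namespace range in the UI and the SDK handles this check.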
is it possible to use multiple forced rules? Any restrictions? @brash-pillow-71627
A message from kapa.ai
Yes, it is possible to use multiple forced rules in GrowthBook. Forced rules are a type of override rule that forces everyone who matches the targeting condition to get a specific value. For example, you could have a feature default to OFF and use force rules to turn it ON for a specific list of countries ([source](https://docs.growthbook.io/features/rules)).

There are a few things to keep in mind when using multiple forced rules:

1. Order Matters: The first matching rule for a user is applied, so the order in which you define your rules matters. If no rules match, the default value is used ([source](https://docs.growthbook.io/features/rules)).
2. Targeting Conditions: Any rule can specify conditions to limit which users it applies to. These conditions are evaluated against the attributes passed into the SDK ([source](https://docs.growthbook.io/features/rules)).
3. Environment Specific: Override rules are defined separately for each environment (e.g. dev and production). This way you can, for example, test an experiment rule in dev before deploying it to production ([source](https://docs.growthbook.io/features/rules)).

Here's an example of how you might set up multiple forced rules in code:
```json
{
  "key": "my-feature",
  "defaultValue": false,
  "rules": [
    {
      "id": "rule1",
      "type": "force",
      "value": true,
      "condition": { "country": "US" }
    },
    {
      "id": "rule2",
      "type": "force",
      "value": true,
      "condition": { "country": "CA" }
    }
  ]
}
```
In this example, there are two forced rules. The first rule turns the feature ON for users in the US, and the second rule turns the feature ON for users in Canada. The order of the rules matters, so if a user matches both rules, only the first rule will be applied.
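The first-matching-rule-wins behavior can be sketched as a tiny evaluator. This is an illustration of the ordering semantics only, not the real SDK evaluation logic (which also handles rollout and experiment rules, environments, and more); `matches` and `evalFeature` are hypothetical helpers:

```javascript
// Minimal sketch of first-match-wins evaluation for force rules.
// Illustrative only -- not GrowthBook's actual SDK code.

const feature = {
  key: "my-feature",
  defaultValue: false,
  rules: [
    { id: "rule1", type: "force", value: true, condition: { country: "US" } },
    { id: "rule2", type: "force", value: true, condition: { country: "CA" } },
  ],
};

function matches(condition, attributes) {
  // Simple equality check; real targeting conditions are richer.
  return Object.entries(condition).every(([k, v]) => attributes[k] === v);
}

function evalFeature(feature, attributes) {
  for (const rule of feature.rules) {
    if (rule.type === "force" && matches(rule.condition, attributes)) {
      return rule.value; // first matching rule wins; later rules are ignored
    }
  }
  return feature.defaultValue; // no rule matched
}

console.log(evalFeature(feature, { country: "US" })); // true
console.log(evalFeature(feature, { country: "FR" })); // false
```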
after running an experiment, I found out that the data provided by the metrics in GB and the data coming from our internal reporting are not the same. However, both of them are using the same data source. What could be the explanations? @brash-pillow-71627
A message from kapa.ai
There could be several reasons why the metrics in GrowthBook and your internal reporting differ, even though both use the same data source:

1. Different User Counting Methods: GrowthBook might be counting users differently than your internal reporting. For example, GrowthBook could be including users that your internal reporting filters out due to bot filtering or data processing differences ([source](https://linen.growthbook.io/t/16296356/why-are-my-metrics-lower-in-ga4-than-in-growthbook-u064f19f0)).
2. Date Range Discrepancies: If the date ranges being compared in your internal reporting and GrowthBook are not the same, this could lead to differences in the reported metrics ([source](https://linen.growthbook.io/t/16296356/why-are-my-metrics-lower-in-ga4-than-in-growthbook-u064f19f0)).
3. Data Synchronization Issues: There could be issues with how data is synchronized between your data source and GrowthBook or your internal reporting. For example, if GrowthBook processes data more or less frequently, or differently, than your internal reporting, this could lead to discrepancies ([source](https://linen.growthbook.io/t/16296356/why-are-my-metrics-lower-in-ga4-than-in-growthbook-u064f19f0)).
4. Experiment Configuration: The configuration of the experiment itself might differ between your internal reporting and GrowthBook. For instance, if experiment targeting or segmentation is not consistent across both platforms, this could result in different user counts ([source](https://linen.growthbook.io/t/16296356/why-are-my-metrics-lower-in-ga4-than-in-growthbook-u064f19f0)).
5. Metric Conversion Window: The metric conversion window set in GrowthBook might differ from the one in your internal reporting, leading to a different set of conversions being counted in each platform ([source](https://linen.growthbook.io/t/16296356/why-are-my-metrics-lower-in-ga4-than-in-growthbook-u064f19f0)).
6. Data Source Configuration: An incorrect configuration in the connection between your data source and GrowthBook could lead to discrepancies ([source](https://linen.growthbook.io/t/16296356/why-are-my-metrics-lower-in-ga4-than-in-growthbook-u064f19f0)).
7. Sampling by your Data Source: If the number of exposed users gets high, your data source might apply sampling and not give you exact numbers. This could be the cause of mismatched user counts ([source](https://linen.growthbook.io/t/16296356/why-are-my-metrics-lower-in-ga4-than-in-growthbook-u064f19f0)).

To resolve these discrepancies, you might need to adjust your configurations, ensure that the same date ranges are being used, and check for data synchronization issues.
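As a concrete illustration of the conversion-window point: a metric conversion window changes which conversions are counted at all. The sketch below uses entirely hypothetical data and helper names (this is not a GrowthBook API); it filters conversions to those within N hours of a user's exposure, which is why a tool applying a window can legitimately report fewer conversions than a report that counts everything:

```javascript
// Illustration of how a metric conversion window changes counts.
// Hypothetical data -- shows why two tools reading the same warehouse
// can still report different conversion totals.

const exposures = {
  // userId -> timestamp (ms) when the user first saw the experiment
  u1: Date.parse("2024-01-01T00:00:00Z"),
  u2: Date.parse("2024-01-01T00:00:00Z"),
};

const conversions = [
  { userId: "u1", ts: Date.parse("2024-01-01T12:00:00Z") }, // 12h after exposure
  { userId: "u2", ts: Date.parse("2024-01-03T00:00:00Z") }, // 48h after exposure
];

function countConversions(windowHours) {
  // Only count conversions that happen within the window after exposure.
  const windowMs = windowHours * 3600 * 1000;
  return conversions.filter((c) => {
    const exposedAt = exposures[c.userId];
    return exposedAt !== undefined && c.ts - exposedAt <= windowMs;
  }).length;
}

console.log(countConversions(24)); // 1 -- only u1 converts within 24h
console.log(countConversions(72)); // 2 -- a wider window counts both users
```

If your internal report counts all conversions regardless of exposure time while GrowthBook applies a window (or vice versa), the totals will differ even from identical raw data.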