https://www.growthbook.io/

Zane Shad

over 1 year ago
Hey team, we are using Attributes for the first time to filter users out of our analysis. We pass a boolean flag that gets updated, via .setAttributes(), after our experiment flag has already been triggered. The attribute appears to be set correctly, which we confirmed using .getAttributes(). However, users are not being filtered at all in our Experiments (we tried both Target by Attributes and Target by Saved Groups). We're stuck at this point and not sure how to proceed, so we would appreciate any help.
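A minimal sketch of the likely mechanism (plain Python, not the GrowthBook SDK; the rule shape and names are illustrative assumptions): in hash/rule-based SDKs, targeting is typically evaluated as a pure function of the attributes at the moment a feature is evaluated, so an attribute updated after the flag has already fired only takes effect when the feature is evaluated again.

```python
# Not the GrowthBook SDK -- a minimal model of attribute targeting,
# with a made-up rule shape, to illustrate the evaluation flow.

def evaluate(attributes, rule):
    """A user matches only if the *current* attributes satisfy the condition."""
    return all(attributes.get(k) == v for k, v in rule["condition"].items())

rule = {"condition": {"exclude_me": True}}        # hypothetical targeting condition
attributes = {"id": "user-123", "exclude_me": False}

matched_before = evaluate(attributes, rule)       # False: flag not yet set

# Later the attribute is updated (analogous to gb.setAttributes(...)).
# Nothing already evaluated changes retroactively; the rule must be
# evaluated again for the new attributes to take effect.
attributes["exclude_me"] = True
matched_after = evaluate(attributes, rule)        # True on re-evaluation

print(matched_before, matched_after)
```

If this model matches the SDK's behavior, re-evaluating the feature after calling .setAttributes() (or deferring the first evaluation until the attribute is known) would be the thing to test.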

Julio Marroquin

over 1 year ago
Hello everyone, I've noticed an issue with uneven traffic distribution between two landing pages during A/B testing with GrowthBook. We set it to have a 50/50 split, but the distribution seems way off (see screenshot). I'm using a simple JavaScript redirect in the control page to handle the redirection which I know is not the best approach. Could this approach be causing the uneven split? Has anyone encountered this before or have insights on adjusting the settings to ensure an even split? Any advice or resources would be greatly appreciated! An example of the redirection I'm using:
if (window.location.search && !window.location.search.includes('edit_mode')) {
    window.location.href = 'https://www.domain.com/variant-landing-page' + window.location.search;
}

Mario

over 1 year ago
What is the difference between URL Targeting rules in Visual Editor changes and attribute targeting? Which one determines the pages where my experiment will be visible?

Paras Mathur

over 1 year ago
Hello team, I have a question about how the Percentage Rollout option works for feature flags. I noticed that rules are fetched from the GrowthBook CDN and evaluated within your SDK on the browser side (I'm using the JavaScript SDK). If a feature is to be shown to 50% of end users, i.e. 50% of users will be shown an image with background color A and the other 50% an image with background color B, how does the GrowthBook app know how many users have been shown background color A vs. B in order to maintain the 50% ratio?
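For context, a sketch of the general mechanism hash-based feature-flag SDKs use (plain Python, not GrowthBook's actual implementation, which uses a different hash function): no server-side counting is needed, because each SDK instance deterministically hashes the user id into a bucket in [0, 1), and the 50/50 split emerges statistically over many users.

```python
import hashlib

def bucket(seed: str, user_id: str) -> float:
    """Deterministically map a user to [0, 1). Illustrative only:
    SHA-256 stands in for whatever hash the real SDK uses."""
    h = hashlib.sha256(f"{seed}|{user_id}".encode()).hexdigest()
    return int(h[:8], 16) / 0x100000000

def variant(seed: str, user_id: str) -> str:
    # Buckets below 0.5 get A, the rest get B -- a 50/50 split.
    return "A" if bucket(seed, user_id) < 0.5 else "B"

# No coordination needed: every SDK instance computes the same answer
# for the same user, and over many users the split approaches 50/50.
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[variant("bg-color-test", f"user-{i}")] += 1
print(counts)  # roughly 5000 / 5000
```

Because assignment is a pure function of (seed, user id), the same user always sees the same color on repeat visits, with no state stored anywhere.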

Rajiv Abraham

over 1 year ago
Hi everyone, is there a way of knowing only the first time a user is assigned to a particular experiment+variant, rather than sending that information multiple times for the data engineering team to deduplicate later? For context, in the Python SDK, I specify a function on_experiment_viewed:

from datetime import datetime

def on_experiment_viewed(experiment, result):
  aws_firehose.track(attributes["id"], "Experiment Viewed", {
    'experimentId': experiment.key,
    'variationId': result.key,
    'timestamp': datetime.now().isoformat()
  })

# Create a GrowthBook instance
gb = GrowthBook(
  attributes = attributes,
  on_experiment_viewed = on_experiment_viewed,
  api_host = "https://cdn.growthbook.io",
  client_key = "<some_key>"
)

gb.load_features()

feature_name = "banner-color"

# Simple on/off feature gating
if gb.is_on(feature_name):
  print("My feature is on!")
else:
  print("My feature is off")
If I run the above code in a script, it'll call our AWS Firehose each time the script is run, passing the same data (except for a different timestamp). We'll have to deduplicate it in our data warehouse, I believe. Is my understanding correct? If so, is there a way of capturing only the first time a user was assigned to a variant, so that running the script multiple times won't trigger on_experiment_viewed again? Or is there another way you'd recommend? Maybe what I want is not a good idea; if so, do let me know.
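One possible approach (a sketch, not an SDK feature): deduplicate on the client side by persisting the set of (user, experiment, variation) keys already sent, and skipping the firehose call for repeats. The file path and the `track_once` / `sent` names below are hypothetical stand-ins.

```python
import json
import os

# Persisted dedup guard for a tracking callback: only the first
# exposure per (user, experiment, variation) triggers a send.
SEEN_PATH = "gb_seen_exposures.json"   # hypothetical path
if os.path.exists(SEEN_PATH):
    os.remove(SEEN_PATH)               # start fresh for this demo

def load_seen():
    try:
        with open(SEEN_PATH) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()

sent = []                              # stands in for aws_firehose.track

def track_once(user_id, experiment_key, variation_key):
    seen = load_seen()
    key = f"{user_id}|{experiment_key}|{variation_key}"
    if key in seen:
        return False                   # already sent: deduplicated
    sent.append(key)                   # real code would call the firehose here
    seen.add(key)
    with open(SEEN_PATH, "w") as f:
        json.dump(sorted(seen), f)
    return True

print(track_once("u1", "banner-color", "1"))   # True  (first exposure)
print(track_once("u1", "banner-color", "1"))   # False (duplicate skipped)
```

For a multi-process or multi-host setup a local file won't suffice; the same guard could live in a shared store (e.g. a database unique constraint), or the dedup can simply stay downstream in the warehouse, which is the more common pattern.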

Scott Paulin

almost 2 years ago
Hey, I am looking at the pricing page and trying to calculate whether we would be below the fair-use guideline of 10M API requests per month. Is there any guidance on how to calculate how many GrowthBook API requests we would make?
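A back-of-envelope sketch of one way to estimate this (all traffic numbers below are made-up assumptions, not GrowthBook guidance): with SDK-side caching, API traffic is roughly one feature-payload fetch per session or app start.

```python
# Hypothetical inputs -- substitute your own traffic figures.
monthly_active_users = 200_000
sessions_per_user = 20        # per month, assumed
fetches_per_session = 1       # one cached payload fetch per session, assumed

monthly_requests = monthly_active_users * sessions_per_user * fetches_per_session
print(f"{monthly_requests:,} requests/month")   # 4,000,000 -- under a 10M guideline
```

Long-lived sessions that refresh the payload on a cache TTL would add a second term (session length divided by TTL), so it's worth checking how your SDK's caching is configured before trusting the single-fetch assumption.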

Andy Ho

almost 2 years ago
Hi there 👋, is it possible to duplicate experiments so as to copy and maintain all the changes you've made to an experiment: title, description, screenshots, visual editor changes, etc.? When I duplicate an experiment, most of the changes are copied; however, the visual editor changes are not. We previously used Google Optimize, where it is possible to duplicate an experiment, quickly amend the title, and restart it. From a QA perspective, duplicating an experiment and losing the visual editor changes makes internal testing both time-consuming and slightly risky, since the recreated test may not function the same. Has anyone else experienced the same issue?

Sunny Chan

almost 2 years ago
Hi, I am trying to set up an experiment with GrowthBook. I have set up an experiment at https://app.growthbook.io/experiment/exp_19g61tlmreelej. When I try to fetch the value in my test, it always returns the same value. Is it because I have not set a user ID with GrowthBook yet? Would all anonymous users always get the same value in the experiment?
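For what it's worth, a sketch of why this would happen (plain Python; an illustrative hash, not the SDK's actual one): variant assignment in hash-based experimentation is a deterministic function of the experiment key and the hashing attribute (typically the user id), so a missing or constant id maps every evaluation to the same variant.

```python
import hashlib

# Deterministic assignment sketch: the variant is a pure function of
# (experiment key, user id). A constant or empty id always yields the
# same variant; unique ids spread users across variants.
def assign(experiment_key: str, user_id: str, n_variants: int = 2) -> int:
    digest = hashlib.sha256(f"{experiment_key}|{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % n_variants

same_user = [assign("exp_19g61tlmreelej", "user-a") for _ in range(3)]
print(same_user)          # the same variant three times

distinct = {assign("exp_19g61tlmreelej", f"user-{i}") for i in range(50)}
print(distinct)           # with unique ids, both variants appear
```

So under this model the fix is to give each visitor a distinct identifier (e.g. a persisted anonymous id) in the attributes before evaluating the experiment.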

Gustav Beck-Norén

about 2 years ago
Hi! I am having issues overriding Feature/Experiment rules using the Chrome DevTools extension. The DevTool loads and displays the correct features and experiments, with the correct split and assigned variations, etc. My issue is that when I try to override the variation using the DevTool, I see no changes in my UI. Shouldn't the DevTool be able to force certain variations so I can see and test without custom force rules or clearing cookies and hoping I get lucky? Or is there any special setup I need to do with the SDK apart from setting enableDevMode: true (which evidently works, since I can see the DevTool)? Thanks!

Home Kralych

about 2 years ago
Hi. I just discovered something called “Identifier Join Tables” in the data source configuration options. Can they be used for cases where I need to join experiment exposures with metrics by TWO columns instead of one? We normally join on a single identifier, but there are cases where we need to join experiment exposures with metric data using two separate identifiers combined with an AND clause.

GrowthBook Users

Open source platform for stress free deployments, measured impact, and smarter decisions.
