# ask-questions

big-article-38157

10/25/2023, 11:25 AM
Hey guys, we've been trying to use GrowthBook for months now and, although we've got things set up and are getting some results, our experiment impressions are comically low and I've spent a lot of time trying to figure out why. It always comes down to the fact that GrowthBook isn't considering certain users as part of the experiment, so it never even logs an experiment impression for them. It happens across the whole site, regardless of the configuration, and I'm really stumped. Looking into GrowthBook's code and the "isOn" function, I can see the logic GrowthBook uses when deciding whether a user is in an experiment, and I see it does return error messages via this.log. Does this get tracked anywhere? Is there somewhere we can see the reasons users aren't being put in experiments? It's really affecting our whole development team at this point. I've spoken to Graham in the past about this but I'm still very stumped.
The issue is that GrowthBook is amazing for debugging at the experiment analysis level, because all of the queries are shown and we can dig into things. But with actually sending experiment impressions, we're completely blind.

happy-autumn-40938

10/25/2023, 4:27 PM
Sorry you're running into issues. To see live logging for evaluation, you should set `growthBook.debug = true` as soon as the SDK is constructed (where `growthBook` is your SDK instance). You can also use the new "Test Feature Rules" module on the GrowthBook app's Feature page to see how sample users might or might not be bucketed.
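A minimal sketch of what that can look like with the JavaScript SDK (the client key and attribute values here are placeholders, not from this thread):

```ts
import { GrowthBook } from "@growthbook/growthbook";

const growthBook = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123",              // placeholder key
  attributes: { anonymous_id: "abc" },  // placeholder attributes
});

// Turn on verbose evaluation logging right after construction, before any
// feature or experiment is evaluated, so nothing is missed
growthBook.debug = true;

await growthBook.loadFeatures();
```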

big-article-38157

10/25/2023, 4:31 PM
Thanks for the reply @wonderful-toddler-35312! How would debug help us in production? And we have tried the Test Feature Rules stuff. It doesn't give us much though, as the feature has anonymous id as the identifier and no other rules.
We can't recreate why users aren't being put in the test
@happy-autumn-40938 any thoughts at all? Just realised I tagged the wrong person 🤦‍♀️

happy-autumn-40938

10/25/2023, 11:38 PM
It's difficult to say what could be going on... too many unknowns. Could be something about your targeting rules that is doing something unexpected. Could be an issue with user attributes being set incorrectly. Could be an SDK integration problem. Could be a legitimate GB bug. But we'd need to get a bit more into the weeds to figure it out, I think. For the logging discussed earlier, you have a few options depending on what information you have available and what you're able to deploy to different environments:
1. Find a few example users and their set of attributes and plug those into the "Test Feature Rules" module.
2. Enable debug mode on prod (or in a prod-like environment). You could conditionally set debug = true based on some query string or other secret, just so you can see what's going on (see the sketch below).
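A minimal sketch of option 2, assuming a browser environment; the `gb_debug` query parameter name is just an example, not an SDK feature:

```ts
import { GrowthBook } from "@growthbook/growthbook";

const growthBook = new GrowthBook({
  attributes: { anonymous_id: anonymousId }, // anonymousId is a placeholder
});

// Only enable verbose logging when a "secret" query parameter is present,
// e.g. https://example.com/?gb_debug=1, so normal production traffic is unaffected
const params = new URLSearchParams(window.location.search);
if (params.get("gb_debug") === "1") {
  growthBook.debug = true;
}
```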

big-article-38157

10/26/2023, 12:37 AM
@happy-autumn-40938 Yeah, I appreciate it's really difficult to figure out from where you are. Definitely not expecting miracles, as I know there's very little information about our setup. Are we on our own here, or does GrowthBook have some sort of support process so we can chat through it in more detail and get a second opinion? If not, I'll continue digging with what you said in mind.
@happy-autumn-40938 @fresh-football-47124 Is there some sort of more in-depth support available?

happy-autumn-40938

11/01/2023, 5:47 PM
Hi Ben. Typically this Slack group is how we can provide support efficiently. Additional levels of service may be available to Pro and Enterprise customers.

big-article-38157

11/01/2023, 5:52 PM
Yep, we are on the pro plan

happy-autumn-40938

11/01/2023, 6:01 PM
Ah sorry, didn't realize. At any rate, we're happy to continue the conversation here or elsewhere. Did you manage to get any debug logging in place and/or use the "Test Feature Rules" module on a feature's page in the GB app, so we could see what might be happening?

fresh-football-47124

11/02/2023, 12:56 AM
Could it be a race condition where you’re evaluating the features before setting attributes?

big-article-38157

11/22/2023, 11:06 AM
Sorry @fresh-football-47124, I haven't had much time to look into this but now I'm back on it! What you suggested could well be the issue, and it may be fixed by what you suggested a few months ago (don't wait to set all the attributes, only wait for the ones we actually need to wait for). Interesting. The useFeatureIsOn hook is correctly called when the user is loaded, but the attributes are set at exactly the same time, so the race condition is very possible. That's very interesting indeed! I'm going to have a dig and get back to you. I'm assuming the answer to my question above about more in-depth debugging (logging for all users) is that it's out of the question?
@fresh-football-47124 Would a race condition even be possible? The component is re-rendered whenever setAttributes is called, which calls the useFeatureIsOn hook again. Although I'm not entirely sure why calling setAttributes makes the component re-render, tbh. Am I missing something?
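A minimal sketch of the race condition being discussed, assuming the React SDK; `loadUser` is a hypothetical helper. The point is that any component calling `useFeatureIsOn` before the identifying attributes are set will evaluate against empty attributes, so the user never gets bucketed and no impression is tracked:

```ts
import { GrowthBook } from "@growthbook/growthbook-react";

// Hypothetical helper that resolves the current user
declare function loadUser(): Promise<{ anonymousId: string }>;

const gb = new GrowthBook({ /* apiHost, clientKey, ... */ });

async function bootstrap() {
  const user = await loadUser();

  // Set identifying attributes BEFORE rendering anything that calls
  // useFeatureIsOn; otherwise the first evaluation happens with empty
  // attributes and the user is never put into the experiment
  gb.setAttributes({ anonymous_id: user.anonymousId });

  await gb.loadFeatures();

  // ...only now render the app inside <GrowthBookProvider growthbook={gb}>
}
```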

fresh-football-47124

11/22/2023, 8:58 PM
is this a website?
you can use our devtools to help you debug

big-article-38157

11/22/2023, 9:16 PM
Yep! A fairly big one, but just a Next.js app with GTM and all that good stuff. @fresh-football-47124 unless I'm misunderstanding, the dev tools help me see what happens on my machine, but they don't help us debug why there are like 2k people missing from our experiment. Happy to show you our setup again.

fresh-football-47124

11/23/2023, 1:03 AM
here is a way you can debug your instance:
We have a way to get more detailed information about which features people get, which experiments they are exposed to, and why. Something like this (TypeScript):
import { GrowthBook } from "@growthbook/growthbook";

// Collect every debug message the SDK emits during evaluation
const log: [string, any][] = [];

const gb = new GrowthBook({
  attributes: attributes,
  trackingCallback: ..., // your existing tracking callback
  // Custom log handler: push each message and its context onto the array
  log: (msg: string, ctx: any) => {
    log.push([msg, ctx]);
  },
});
gb.debug = true;

// Evaluate a feature and inspect the result plus the captured log
const result = gb.evalFeature(feature.id);
console.log("result", result);
console.log("log", log);
You could add something like this, and then log the results somewhere to see if there is a race condition
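To "log the results somewhere" for real users, one option (a sketch; `/api/gb-debug` is a hypothetical endpoint, not part of GrowthBook) is to ship the captured entries from the snippet above to your own backend:

```ts
// Ship the captured evaluation log to your own backend for later inspection.
// "/api/gb-debug" is a placeholder; use whatever logging sink you already have.
if (log.length > 0) {
  navigator.sendBeacon(
    "/api/gb-debug",
    JSON.stringify({ attributes, result, log })
  );
}
```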

big-article-38157

11/23/2023, 9:38 AM
This is absolutely perfect, exactly what I need to understand wtf is going on. This is exactly the function I was looking for but didn’t realise it was exposed. I’ll let you know how things go
@fresh-football-47124 So we set up this log function but it didn't work as expected, and I now see it only works in a development environment. This is frustrating as, of course, we set this up to log for our different users. Is there any way to override this?

fresh-football-47124

11/23/2023, 9:19 PM
where is this code running? it’s a bit unusual
oh, I see

big-article-38157

11/23/2023, 9:21 PM
yeah just an example within your codebase. I assume every call to log() is protected by a similar if statement

fresh-football-47124

11/23/2023, 9:22 PM
ah yes

big-article-38157

11/23/2023, 9:22 PM
I suppose we could set up the proxy and set the NODE_ENV to development even for production but that’s not ideal

fresh-football-47124

11/23/2023, 9:23 PM
you can switch the NODE_ENV just before this check, then put it back again

big-article-38157

11/23/2023, 9:23 PM
This is in the GrowthBook codebase ^
We’re not using self-hosted

fresh-football-47124

11/23/2023, 9:23 PM
oh
yes, but you can add something similar to get debug info from your SDK implementation
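A minimal sketch of that idea with the JavaScript SDK: since the SDK's built-in debug log is only active in development builds, record the evaluation outcome yourself at the call site instead. `reportEvaluation` is a hypothetical function that forwards data to your own logging backend:

```ts
import { GrowthBook } from "@growthbook/growthbook";

// Hypothetical helper that sends data to your own logging backend
declare function reportEvaluation(payload: unknown): void;

function checkFeature(gb: GrowthBook, featureKey: string): boolean {
  // evalFeature returns the value along with the source/reason it was chosen,
  // and it works in production builds (unlike the dev-only debug log)
  const result = gb.evalFeature(featureKey);

  reportEvaluation({
    feature: featureKey,
    attributes: gb.getAttributes(),
    value: result.value,
    source: result.source, // e.g. "experiment", "defaultValue", "unknownFeature"
  });

  return !!result.value;
}
```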

big-article-38157

11/23/2023, 9:24 PM
Ah interesting. How would that look?

fresh-football-47124

11/23/2023, 9:25 PM
if you share your SDK implementation, I can adjust it for that

big-article-38157

11/23/2023, 9:26 PM
Have dm’ed