# ask-questions
r
Hi, I'm new to GB and having issues getting A/B experiments to work. My issue could be with the Java SDK, but I'll start here in case I just have a fundamental confusion about how to set up my test.
• I created a new boolean feature called `my-ab-boolean` (defaults to `false`)
• I clicked "Add an Experiment Rule"
• I created a new experiment called `test` and left the default 50/50 distribution of users, with 100% of users in the experiment. `id` is left as the default attribute to split on
• In my test suite, which has been configured to include a random `id` attribute for each iteration, I ran `growthBook.evalFeature( 'my-ab-boolean' )` (via the Java SDK) 100 times. Each iteration used:
  ◦ a fresh `GBContext`
  ◦ a unique (random) GUID as the `id` attribute
  ◦ a fresh `GrowthBook` instance using the fresh context
I expected to see ~50% of the evals come back `true` and ~50% come back `false`, but instead every single one returned `false`. Furthermore, when I inspected the `FeatureResult` object, the `experiment` and `experimentResult` values are `null` every time, showing no experiment is being used. The `source` shows `defaultValue` every time as well. I even tried the test on a string feature with 3 variations and a default value, with the same result. What is preventing my experiment from being used? (A rough Java sketch of my loop is below.)
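For reference, the loop is roughly equivalent to this Java (a sketch; the class and method names are from the 0.9.x SDK docs as I read them, so adjust for your version):

```java
import java.util.UUID;
import growthbook.sdk.java.FeatureResult;
import growthbook.sdk.java.GBContext;
import growthbook.sdk.java.GrowthBook;

public class AbBooleanSmokeTest {
    public static void main(String[] args) {
        // Features payload from the SDK connection, trimmed to the fields that matter.
        // (Java 15+ text block; use a plain string on older JVMs.)
        String featuresJson = """
            {"my-ab-boolean": {"defaultValue": false, "rules": [{
              "coverage": 1, "hashAttribute": "id", "seed": "test", "hashVersion": 2,
              "variations": [false, true], "weights": [0.5, 0.5], "key": "test"}]}}
            """;

        int trueCount = 0;
        for (int i = 0; i < 100; i++) {
            // Fresh context with a unique random GUID as the `id` attribute
            GBContext context = GBContext.builder()
                    .featuresJson(featuresJson)
                    .attributesJson(String.format("{\"id\": \"%s\"}", UUID.randomUUID()))
                    .build();

            // Fresh GrowthBook instance per iteration
            GrowthBook growthBook = new GrowthBook(context);
            FeatureResult<Boolean> result = growthBook.evalFeature("my-ab-boolean", Boolean.class);

            // Expected: ~50/50 true/false with source "experiment";
            // observed: always false, source "defaultValue", null experiment/experimentResult.
            System.out.printf("value=%s source=%s experiment=%s%n",
                    result.getValue(), result.getSource(), result.getExperiment());
            if (Boolean.TRUE.equals(result.getValue())) trueCount++;
        }
        System.out.println("true count: " + trueCount + " / 100");
    }
}
```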
[attachment: image.png — screenshot of the assignment rules]
The result every time: [screenshot of the FeatureResult]
r
The first thing I'd check here is that the trackingCallback in the SDK is configured correctly. It should be using the intended tracking library and actually emitting tracking events to your data warehouse. How have you set that up?
r
I don't have a tracking callback because I don't need one at this point
I just need to verify that GB can eval a rule and use the experiment
Perhaps I'll do something with that information later
But even if I did, I shouldn't need to use a callback, because I already know what feature I eval'd and what the result was just from the act of checking it, right? (The SDK's evalFeature() method tells me what the value was, what experiment was used, and the source of the value. That could give me everything I need to track the data without a callback.)
Are you saying that GB will refuse to use an experiment, even if it's configured for the feature, if I don't have a callback configured?
Because that wouldn't make a great deal of sense
I'll also add here, it feels odd that the work of tracking these things is even pushed off on the developer. In my experience with LaunchDarkly, they track all of this on their own, via their API, on their end, without the need for a connected data source or a tracking callback.
r
Yes, this is a big difference between LD and GrowthBook. We work with your native data warehouse. The benefits are that we never see your data (except in aggregate) and experiment results are far more precise. Instead of having to reconcile your native data warehouse with the data warehouse of your testing platform, there's only a single source of truth (and no vendor lock-in).
And, if you just want to test how the tracking cb would run, you can put in a logging function. The results won't show up in GrowthBook, but you will be able to see the payload, etc: https://docs.growthbook.io/lib/java#tracking-callback
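Roughly like this (a sketch against the `TrackingCallback` interface as shown in those docs; double-check the exact signature for your SDK version):

```java
import growthbook.sdk.java.Experiment;
import growthbook.sdk.java.ExperimentResult;
import growthbook.sdk.java.GBContext;
import growthbook.sdk.java.TrackingCallback;

public class LoggingCallbackExample {
    static GBContext buildContext(String featuresJson, String attributesJson) {
        // A stand-in "tracker" that only logs what would have been sent to your warehouse.
        TrackingCallback loggingCallback = new TrackingCallback() {
            @Override
            public <ValueType> void onTrack(Experiment<ValueType> experiment,
                                            ExperimentResult<ValueType> experimentResult) {
                System.out.println("experiment = " + experiment.getKey()
                        + ", assigned value = " + experimentResult.getValue());
            }
        };

        return GBContext.builder()
                .featuresJson(featuresJson)
                .attributesJson(attributesJson)
                .trackingCallback(loggingCallback)
                .build();
    }
}
```

If that never prints while you loop over `evalFeature`, the experiment rule is never firing at all.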
r
Yeah, honestly I preferred LD, lol
way easier to set up and the stats "just work"
they capture so much more useful data
But that aside, I'm still not understanding why my experiment is not being used
Again, a callback should be an optional step. I expect configured experiments to be used any time a feature is evaluated, regardless of whether I've decided to configure a callback
Are we saying that is not the case?
@strong-mouse-55694
And back to this (sorry):
> We work with your native data warehouse.
This assumes I have a data warehouse. The client looking to use GB, which has me writing this SDK, doesn't have one. So this is now just more work we have to do to set this all up.
> The benefits here are that we never see your data
What makes you think you'll see their data anyway, lol. They are going to use the self-hosted version, so that's basically a non-existent concern. I mean, the "data" that LaunchDarkly "sees" is that user "Bob" got variation "X" of flag "Y", which is basically the sort of data I'd expect it to be dealing with in the first place. It offers unparalleled reporting and tracking right out of the box with no additional configuration or external storage. It's a great value-add. Just something to consider.
r
I understand. Each of these products is addressing different priorities. The easiest setup for a data warehouse is generally using GA4 with BigQuery: https://docs.growthbook.io/guide/GA4-google-analytics
r
Not so sure about "Direct Slack access to engineers", lol 😆
Can you address the original questions I'm still trying to solve in this thread though?
Why the experiment isn't being used when configured for my feature.
@strong-mouse-55694?
r
Are you able to log your feature? Does it show the assignment as expected? Additionally, if you check the SDK connection and look up the results for the `features` endpoint, do you see the feature flag and assignment rules as expected?
r
@strong-mouse-55694 I'm not entirely sure I'm following your suggestions.
> Are you able to log your feature
As I stated in my OP, I can log that every call to `growthBook.evalFeature( 'my-ab-boolean' )` returns the default value, ignoring my experiment.
> Does it show the assignment as expected?
No, as per my original post. It does not return the assignment expected. (Assuming that by "assignment", you mean "value" as is reported in the feature result object.)
> if you check the SDK connection and look up the results for the `features` endpoint, do you see the feature flag and assignment rules as expected?
You mean like in the JSON? I already supplied a screenshot of the assignment rules, but the JSON is as follows:
```json
"my-ab-boolean": {
    "defaultValue": false,
    "rules": [
        {
            "coverage": 1,
            "hashAttribute": "id",
            "seed": "test",
            "hashVersion": 2,
            "variations": [false, true],
            "weights": [0.5, 0.5],
            "key": "test",
            "meta": [
                { "key": "0", "name": "Control" },
                { "key": "1", "name": "Variation 1" }
            ],
            "phase": "0",
            "name": "test"
        }
    ]
}
```
which basically mirrors the information in my screenshot. So yes, the feature seems to be configured correctly so far as I can tell, but will only evaluate to the default value.
r
Thanks for sharing that. I wanted to confirm that everything looks good on the GB side, and it does. It seems like the feature is being resolved to its default before the experiment runs. If you set up an experiment tracking callback, do you still get the same non-result?
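Another way to sanity-check the expected assignment: it's a deterministic hash of the `id` attribute with the rule's `seed`. Something like this (a sketch based on my reading of the hashVersion 2 description in the SDK spec docs; illustrative, not authoritative):

```java
public class GbHashCheck {
    // FNV-1a, 32-bit, over the string's characters
    static long fnv1a32(String s) {
        long hash = 0x811c9dc5L;
        for (int i = 0; i < s.length(); i++) {
            hash ^= s.charAt(i);
            hash = (hash * 16777619L) & 0xffffffffL;
        }
        return hash;
    }

    // hashVersion 2: FNV-1a applied twice, giving a bucket in [0, 1)
    static double bucket(String seed, String value) {
        long n = fnv1a32(Long.toString(fnv1a32(seed + value)));
        return (n % 10000) / 10000.0;
    }

    public static void main(String[] args) {
        // With weights [0.5, 0.5] and coverage 1: bucket < 0.5 -> variation 0 (false),
        // bucket >= 0.5 -> variation 1 (true). Over many GUIDs this should be ~50/50.
        String id = "0f31a2c4-ad17-4f44-9c33-123456789abc"; // hypothetical GUID
        System.out.println("bucket = " + bucket("test", id));
    }
}
```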
r
No, as I haven't built that ability into the CF SDK yet. I'm waiting to figure out how to test a feature tied to an experiment before I move on to the callbacks. I'm still evaluating GB for my client, and if we can't get a simple feature with an experiment working, there's no point spending effort on any of the callback functionality (which is non-trivial).
r
You don't have to build out the actual cb functionality. The idea is just to println the result via the SDK's cb. This way, you see how the feature flag or experiment was evaluated.
r
lol, that's not how it works.
CF is a JVM language, which requires me to create wrappers for CF classes so they can be represented as instances of Java interfaces and passed to the SDK's Java methods expecting that type
I can't "_just put a println_" in until I have a wrapped class set up to put it in 🙂
It's not a tremendous amount of work, but I would like to test things sequentially here, and a feature should evaluate with its experiment in the absence of any particular callback
If not, that's a bug in the Java SDK that will need to be addressed
r
I'll try to set up a test case to demonstrate. In the meantime, I'd also reach out to the Java SDK's developer, who will be able to assist you better. The Java SDK isn't ours; it's independently developed and maintained.
r
I told him about these questions in the dedicated #sdk-java channel
👍 1
Sadly, he has yet to address any of my questions
This is the only way I know to reach him
If GB is going to list the Java SDK as a supported SDK, you may want to get a handle on it internally 🙂
@calm-dog-24239 Any chance of getting feedback on this one? This is the second biggest issue preventing me from moving forward with GrowthBook
c
Hi, @rapid-quill-56099. Sorry, we've been focused first on removing OkHttp and replacing it. We are also currently working on reproducing the issue and will write to you as soon as we find a solution.
We tried running the iteration 100 times and received a mix of values, both true and false. Here is how we implemented it.
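In sketch form (assuming the 0.9.x API; a fresh context and a random GUID per iteration, as in your report):

```java
import java.util.UUID;
import growthbook.sdk.java.GBContext;
import growthbook.sdk.java.GrowthBook;

public class ReproLoop {
    public static void main(String[] args) {
        // Same features payload shared earlier in the thread, trimmed.
        // (Java 15+ text block; use a plain string on older JVMs.)
        String featuresJson = """
            {"my-ab-boolean": {"defaultValue": false, "rules": [{
              "coverage": 1, "hashAttribute": "id", "seed": "test", "hashVersion": 2,
              "variations": [false, true], "weights": [0.5, 0.5], "key": "test"}]}}
            """;

        for (int i = 0; i < 100; i++) {
            GBContext context = GBContext.builder()
                    .featuresJson(featuresJson)
                    .attributesJson(String.format("{\"id\": \"%s\"}", UUID.randomUUID()))
                    .build();
            Boolean result = new GrowthBook(context)
                    .evalFeature("my-ab-boolean", Boolean.class)
                    .getValue();
            System.out.println("Result = " + result + " on " + i);
        }
    }
}
```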
Here is the result of it:
```
Result = false on 0
Result = false on 1
Result = false on 2
Result = false on 3
Result = false on 4
Result = false on 5
Result = false on 6
Result = true on 7
Result = true on 8
Result = false on 9
Result = true on 10
Result = false on 11
Result = false on 12
Result = false on 13
Result = false on 14
Result = false on 15
Result = false on 16
Result = true on 17
Result = true on 18
Result = false on 19
Result = false on 20
Result = true on 21
Result = false on 22
Result = false on 23
Result = false on 24
Result = true on 25
Result = false on 26
Result = true on 27
Result = true on 28
Result = true on 29
Result = false on 30
Result = true on 31
Result = false on 32
Result = false on 33
Result = false on 34
Result = true on 35
Result = true on 36
Result = true on 37
Result = false on 38
Result = true on 39
Result = true on 40
Result = true on 41
Result = false on 42
Result = false on 43
Result = false on 44
Result = false on 45
Result = true on 46
Result = false on 47
Result = false on 48
Result = false on 49
Result = true on 50
Result = true on 51
Result = true on 52
Result = false on 53
Result = false on 54
Result = false on 55
Result = true on 56
Result = true on 57
Result = true on 58
Result = false on 59
Result = false on 60
Result = true on 61
Result = false on 62
Result = false on 63
Result = true on 64
Result = true on 65
Result = true on 66
Result = true on 67
Result = true on 68
Result = true on 69
Result = false on 70
Result = false on 71
Result = true on 72
Result = true on 73
Result = true on 74
Result = true on 75
Result = false on 76
Result = true on 77
Result = true on 78
Result = true on 79
Result = false on 80
Result = false on 81
Result = true on 82
Result = false on 83
Result = true on 84
Result = true on 85
Result = true on 86
Result = false on 87
Result = true on 88
Result = false on 89
Result = false on 90
Result = true on 91
Result = true on 92
Result = false on 93
Result = false on 94
Result = false on 95
Result = false on 96
Result = false on 97
Result = true on 98
Result = false on 99
```
Maybe the problem is related to the version. Which version of the SDK are you using?
r
I am using `0.9.9`, which I believe was the latest version.
Your output above doesn't seem to match your code. Do the experiment details come back in your evaluation?
Also, can you show me the configuration of your feature and experiment that you are evaluating?
c
Yes, here it is.