# ask-questions
w
Hello. I have been working on getting remote evaluation to work and I have noticed something that I find confusing. For context, we are self-hosting everything. It is working now, but when I initially set it up I just enabled remote evaluation for the SDK connection and then also set the flag on the calling side (a simple browser app). It did send the payload to the API, but back came just an HTTP status 200 with an empty JSON document. After some trying I figured out that you actually need to call through the proxy to get a remote evaluation. Is that correct so far? That is a bit of a foot-gun. As far as I can tell, it says nowhere in the docs that you need the proxy for remote evaluation. The proxy docs do list "enables remote evaluation" among the features, but imho that does not make it clear that it's really required. From a developer perspective it's also pretty bad that the regular API server just answers with a 200. Imho there should be an error response when the API server receives a remote evaluation request.
h
You're right, we could do better to call this out in the app itself (SDK Connection) as well as the docs. Sorry about the trouble. If you're self-hosted, you technically should be able to do remoteEval on your GrowthBook API without a proxy. This is more for testing remote eval and is not recommended for production traffic. Cloud has this disabled. In either case, it shouldn't be returning a 200 with empty JSON doc, so perhaps that's a bug. Generally though, you need some sort of private backend running our remoteEval service. The GB Proxy has this built in and is typically the path of least resistance.
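For reference, a minimal sketch of what the browser side might look like when pointed at a proxy (host and client key below are placeholders; `remoteEval` is the SDK flag in question):

```ts
import { GrowthBook } from "@growthbook/growthbook";

const gb = new GrowthBook({
  // Must be a backend that implements remote eval (e.g. the GB Proxy),
  // not the plain API server - placeholder URL.
  apiHost: "https://gb-proxy.example.com",
  clientKey: "sdk-abc123", // placeholder
  remoteEval: true, // evaluate on the proxy instead of in the browser
  attributes: { id: "user-1", country: "DE" },
});

// init() on recent SDK versions (older versions use loadFeatures()).
await gb.init();
console.log(gb.isOn("my-feature"));
```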
w
Thanks for the reply
> If you're self-hosted, you technically should be able to do remoteEval on your GrowthBook API without a proxy.
Yes, I found the not-so-public `/api/v1/sdk-payload/${apiKey}` by snooping in the code (great that the sources are public 😊). I have a follow-up question (I also realize this will be a laggy back-and-forth since I'm in Europe 😅). I have noticed that the response from the remote evaluation is pretty huge. I created a simple feature flag with an attached experiment and put the responses in this Gist. Not only does the remotely-evaluated response contain the actual decision, but also a lot of metadata. I guess some of it is relevant for tracking the experiment (i.e. the stuff in `experimentResult`), but there is also other stuff:
• There is duplication: the `experiment` is included twice, once in `.tracks[].experiment` and then again in `.tracks[].result.experiment` - and why is it even there in the first place? That info doesn't really seem to be necessary.
• Also: shouldn't the goal of remote evaluation (apart from hiding potentially sensitive information) also be some minimization of data? As is, the remotely-evaluated response and the raw payload are almost equal in size: both about 1.5 kB with just one experiment and one feature flag. If I now think about a setup with a couple of experiments and a handful of feature flags (which would be our bare minimum), we are talking about something around 50 kB - that seems a lot for what it is.
The latter question is relatively important to me, since my company has a rather special case: we also want to use the feature flags on some very specific devices with a rather arbitrary limitation that they can only request plain JSON with a limited total size (and iirc they do not support compression either). I had hoped we could use the remote-eval API directly, but given the sizes that does not seem possible. So we might need to create our own proxy (which is generally fine, we have done that for those devices a bunch of times). For that, a final question: is there a proper schema definition for the remote-eval API? I had a brief look at the API docs and couldn't find it. It would be nice to have when building a custom proxy (and also for knowing that the API will not change on a GrowthBook update).
h
> I have noticed that the response from the remote evaluation is pretty huge.
You're right that the response has a lot in it. This is because, like you said, we need to include all possible information that the SDK may need for tracking. There is also duplication because each tracking call is implemented slightly differently: some require having just the results object, while others require a separate param for the experiment. We could likely do some de-duping in the SDK, but it's a bit risky since it might break support for other non-JS SDKs that rely on having both fields present. We're looking at some ways to change the response here, but I don't imagine the net effect will be reducing the response size by much. Which brings me to this question:
> Shouldn't the goal of remote evaluation (apart from hiding potentially sensitive information) also be some minimization of data?
In our view, data minimization is not the primary goal; security is. Because all of the evaluation happens privately, there is an increased burden to record the tracking info and hydrate it back to the client SDK for accurate tracking. One way around this, as you alluded to, would be implementing both feature evaluation and tracking on a private server or proxy. Our Edge SDKs (Cloudflare, Fastly, etc.) support this exact use case for hybrid browser-based implementations. This likely won't directly solve your specific issue, but it might provide some inspiration for how you could do both evaluation and tracking on the server/proxy while still keeping client-side reactivity:
• The Edge SDK base app (`app.ts`) does evaluation and optional tracking starting around line 109. It keeps track of deferred tracking calls and then either returns them to the client OR fires them off server-side (edge-side).
• Or have a look at the Remote Eval backend, which you could implement in your own private endpoint/proxy: do the tracking server-side, scrub the track data from the payload, and return the rest to the client. A rough sketch of that idea follows below.
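Very roughly, a minimizing proxy along those lines could look like this (a sketch only: it assumes Express plus our JS SDK running in Node, uses the standard `/api/features/{clientKey}` payload endpoint with a placeholder host and key, and `recordExposure` is a hypothetical stand-in for your own tracking):

```ts
import express from "express";
import { GrowthBook } from "@growthbook/growthbook";

// Hypothetical stand-in for your own exposure tracking.
function recordExposure(expKey: string, variationId: number) {
  console.log("exposure", expKey, variationId);
}

const app = express();
app.use(express.json());

app.post("/flags", async (req, res) => {
  // Pull the raw (unevaluated) feature payload from the self-hosted API.
  const resp = await fetch("https://gb-api.example.com/api/features/sdk-abc123");
  const { features } = await resp.json();

  const gb = new GrowthBook({
    features,
    attributes: req.body.attributes, // evaluated privately, server-side
    trackingCallback: (experiment, result) => {
      // Track exposures here instead of on the device, so no tracking
      // metadata needs to travel back to the client at all.
      recordExposure(experiment.key, result.variationId);
    },
  });

  // Return only the minimal flag values the device actually needs.
  const values: Record<string, unknown> = {};
  for (const key of Object.keys(features)) {
    values[key] = gb.evalFeature(key).value;
  }
  res.json(values);
});

app.listen(3000);
```

One caveat with this sketch: eagerly evaluating every flag fires the tracking callback for every experiment the user qualifies for, whether or not the device ends up using it, so you may want to evaluate only the keys a given device actually requests.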
w
Thanks for the detailed response, this clarifies a lot 🙂. We will probably build our own proxy, since that way we can easily control the response data and also implement remote tracking for some of the less capable clients 🙂