# ask-questions
r
Hello! I'm looking at getting a Pro license but I have some questions about API usage limits and some miscellaneous questions I had from reading the documentation. Would someone be able to help me answer these? Thanks!

1. We're looking to integrate several GrowthBook SDKs with our website – in particular, for C#, JavaScript and React. There are several separate React instances on our website at the moment, which as I understand it would each need to separately call `loadFeatures()`.
   a. Would these all count as separate calls to the API, i.e. require 4+ per session?
   b. Would subsequent requests from the same user (as identified by provided attributes) be cached by your CDN, Fastly?
   c. If they do get cached, is there any guidance on how often it's fetched from cache vs as a fresh request?
2. The React documentation describes two parameters, `subscribeToChanges` and `backgroundSync`.
   a. We're planning to set `backgroundSync` to false, to disable streaming updates. What is the difference between this and `subscribeToChanges`? (Are they the same thing, where the name changed at some point?)
3. Visual Editor in React
   a. For use in React components, we need to call `gb.setUrl()` so active experiments can update. Does this count as an API request to GrowthBook?
4. The React documentation mentions Remote Evaluation, and that Cloud customers need to host a GrowthBook Proxy for this.
   a. Can you give an example of when I might want/need to use remote evaluation? How do I work out if this is something I need?
5. There is an `onFeatureUsage` parameter. I'd like to understand how this works with Percentage Rollouts. In particular, the documentation states: "Percentage Rollouts do not fire any tracking calls so there's no way to precisely correlate the rollout to changes in your application's metrics."
   a. Does this mean that while we cannot track metrics in GrowthBook directly when performing a percentage rollout, we can still track this ourselves, e.g. by using `onFeatureUsage` to send a tracking event to our GA4 implementation?
6. Flickering
   a. As I understand it, it's inevitable that there will be some extent of flicker for client-side A/B testing.
      i. Is there any advantage to using the Visual Editor via the SDK over the GTM tag?
      ii. Would feature A/B tests be affected by flicker at all?
f
1. a. Yes, they would count as API calls, though you can cache the payload or use our proxy to cache it for you, which avoids the API call to our servers (or self-host).
   b. It would be cached by our CDN, but that still counts as an API call.
   c. You can set a reasonable TTL for the cache, or use webhooks or SSE for pushing updates. (See the caching sketch below.)
2. a. `backgroundSync` (default: true) turns on streaming. `subscribeToChanges` (default: false) keeps your SDK instance updated with live changes that come in from streaming. You usually don't need to mess with this unless it's a backend SDK - you might not want the SDK changing state during a request/response lifecycle. (See the configuration sketch below.)
3. Yes, and no.
4. Remote evaluation is normally useful when the assignment data is not available directly in the SDK - for example, if you want to target users whose average order value is greater than $10. With remote evaluation you can create a function which is passed the user id (or whatever attribute), look up any additional targeting values, and pass back the features which are enabled. (See the sketch below.)
5. `onFeatureUsage` should fire for a percentage rollout when the user is included - but there is no event for the inverse (users excluded from the rollout). (See the GA4 sketch below.)
6. a. Yes, correct.
      i. Should be similar; they are both loading the SDK.
      ii. It is possible that flickering may affect the results - we have an anti-flickering script which should help mitigate the differences, but it isn't magic: it just hides the page while things are loading.
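To make 1 and 2a concrete, here is a minimal sketch of a request-conservative setup using the JavaScript SDK's built-in caching layer. `configureCache` and the constructor options are taken from the JS SDK docs, but treat the exact names and values as assumptions to verify against your SDK version:

```js
import { GrowthBook, configureCache } from "@growthbook/growthbook";

// Built-in localStorage cache: a longer staleTTL means fewer fresh
// requests to the GrowthBook API between page loads.
configureCache({
  staleTTL: 1000 * 60 * 5, // treat a cached payload as fresh for 5 minutes
});

const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123",    // hypothetical client key
  backgroundSync: false,      // no streaming (SSE) connection
  subscribeToChanges: false,  // don't update this instance on live changes
});

// One network call, served via the Fastly CDN; the cached payload can be
// shared by other GrowthBook instances on the same page.
await gb.loadFeatures();
```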
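For 4, a rough sketch of what remote evaluation could look like from the client side, assuming a GrowthBook Proxy with remote evaluation enabled (`remoteEval` is a JS SDK option; the proxy host, client key, and attribute names here are placeholders):

```js
import { GrowthBook } from "@growthbook/growthbook";

const gb = new GrowthBook({
  // Point the SDK at your GrowthBook Proxy rather than the GrowthBook API
  apiHost: "https://gb-proxy.example.com", // hypothetical proxy host
  clientKey: "sdk-abc123",
  remoteEval: true, // features are evaluated on the proxy, not in the browser
  attributes: {
    id: "user-123", // only the lookup key is sent; the proxy side can
                    // enrich targeting, e.g. with average order value
  },
});

// Returns already-evaluated feature values for this user, so sensitive
// targeting data and rules never reach the browser.
await gb.loadFeatures();
```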
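For 5, a sketch of using `onFeatureUsage` to forward feature evaluations to a GA4 implementation via `gtag`; the event name and parameters are made up for illustration:

```js
const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123",
  // Called whenever a feature is evaluated, including percentage rollouts,
  // so you can correlate a rollout with metrics in your own analytics.
  onFeatureUsage: (featureKey, result) => {
    gtag("event", "feature_evaluated", { // hypothetical GA4 event name
      feature_key: featureKey,
      feature_value: String(result.value),
      feature_source: result.source, // e.g. "experiment", "force", "defaultValue"
    });
  },
});
```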
r
Thanks Graham! A couple of follow-up questions:

1. a. The JavaScript SDK docs talk about a built-in caching layer - would that be able to cache the payload too? Does the React SDK have a similar inbuilt caching layer we can use?
2. Thanks for clarifying - to confirm, if we want to be conservative with the number of API requests used, should we set `backgroundSync` to false?
3. -
4. Thanks for the example - it's still not quite clear to me how this would work. Do you have any code examples of what that would look like? In particular, how would it work with the Proxy?
5. -
6. -
   a. -
      i. -
      ii. Is there anything else that might minimize flickering for Visual Editor tests?
I also have a few additional questions:

1. There are a number of past tests we have run which only want to include users conditional on the user e.g. performing a search under specific conditions. As covered in your documentation, it's best to assign the audience to an experiment as close to the test as possible.
   a. It seems from e.g. the React SDK documentation that we'd load all the features from GrowthBook, which we'd expect to do when a user first begins their session. I'd expect to want to refresh features when an attribute changes, e.g. when a user logs in - are there any other instances where I'd want to do that? Would refreshing features on a specific page load assign the user to the experiment any differently, if the page load does not affect any of the user attributes sent?
   b. An alternative way to do this is activation metrics, where e.g. the activation metric is whether the user made a specific search. However, if the bucketing is done before the user reaches the page, the number of users who visit that page from each bucket might not match the specified ratio - is that correct?
   c. Another way is to add a user attribute for the specific criteria, e.g. `madeSpecificSearch` - are there limitations or downsides to doing it this way? I expect this would necessarily mean the features need to be refreshed when this attribute updates, and possibly a longer load time if we were to wait for the features to load on a page.
   d. Are there any other ways to do this that I should consider/you recommend?
2. In your Fair Use policy, it's stated that there's a soft limit on API requests per month, but that we should get in touch if we expect to use beyond this amount.
   a. It states: "Where possible, we'll reach out to you ahead of any action we take to address unreasonable usage and work with you to correct it."
      i. In the case that usage looks to go above this 10M limit, what would the possible next steps be?
      ii. If we are in the process of taking action to address usage, would there be a risk of GrowthBook no longer serving feature flags or A/B tests to users?
   b. On the Pro license, would there be an option to increase the usage limit (like there is in the Enterprise plan)?
3. As mentioned above, we're looking to also support using the Visual Editor.
   a. I just noticed a warning in the docs saying: "The Visual Editor may not work optimally on client-side rendered apps (e.g. React.js, Vue.js). Consider using Feature Flags instead for smoother integration." Not sure if I missed it on earlier reads - could you elaborate on why? Is it related to client-side apps rerendering and causing repeated flickering?
   b. More generally, does GrowthBook support anything like a "component library" of blocks/editors that can be pasted into the Visual Editor?
4. For users who have rejected cookies or have something blocking cookies on the website, we'd like to make sure that we can still roll out features to all users, at least at 100% rollout. I understand it's suggested to use a `cookieId` for sticky bucketing purposes; in the case where users consent to cookies that would be fine to implement, and when they don't we wouldn't be able to collect data to use for experiment analysis anyway. However, for the purposes of feature rollout, if a user has an empty attribute that the rollout is targeting, can I confirm that the user would still receive the feature-flagged property when rollout is at 100%? If the rollout is not at 100%, how would an empty attribute be treated by the rollout rules? Would it skew the rollout numbers, i.e. at 10% it's actually rolled out to a bit less or a bit more, depending on which side of the rollout users with a blank attribute are assigned to?
f
1a. Changing attributes may cause a user to change what features they are exposed to. (See the attribute-update sketch below.)
1b. Correct, the ratio may not be perfect, but it is unlikely to trigger SRM warnings as it's random.
1c. The only downside is that you have to add that data to your code, and sometimes that data may be hard to get there.
1d. The other option to consider is remote evaluation, which will help with data that is hard to get to the SDK in real time (or that perhaps you don't want to share), but it won't help with the exposure problem - the easiest way there is activation metrics.
2a. We will reach out to you when there are excessive overages. If you ignore the emails for several months, we reserve the right to turn off the SDK endpoint. As long as you're working on reducing the requests, we won't turn it off.
2b. Perhaps - I'd have to check with the sales team.
3a. The issue with React/Vue apps is that they control the DOM, so we have to watch for DOM mutations and add the experiment back as required. This can cause some strangeness or flickering.
3b. Not yet - but you're welcome to open a GitHub issue as a feature request.
4. I'm not sure of the best way to do that - you should be able to do it by setting a default value, but we'd have to confirm. (A possible approach is sketched below.)
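On 1a, a minimal sketch of updating attributes at the moment the qualifying condition becomes true, so assignment happens as close to the test as possible. Feature evaluation is local to the SDK, so this should not cost an extra API request (unless you use remote evaluation); the attribute name is hypothetical:

```js
// Call when the user performs the qualifying search.
async function onQualifyingSearch(gb) {
  await gb.setAttributes({
    ...gb.getAttributes(),
    madeSpecificSearch: true, // hypothetical targeting attribute
  });
  // Rules targeting madeSpecificSearch are re-evaluated from here on; a
  // rule that was previously skipped can now include this user.
}
```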
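On 4, one possible approach (unconfirmed, as noted above): always supply some value for the hash attribute, falling back to a random per-session id when cookies are rejected, so rollout rules still have something to hash on. `userConsentedToCookies` and `getOrCreateCookieId` are hypothetical helpers:

```js
import { GrowthBook } from "@growthbook/growthbook";

function getBucketingId() {
  if (userConsentedToCookies()) {
    return getOrCreateCookieId(); // persistent id, usable for experiments
  }
  // Fallback: random per-session id. Percentage rollouts still apply, but
  // the user may flip buckets between sessions, so exclude these users
  // from experiment analysis.
  return crypto.randomUUID();
}

const gb = new GrowthBook({
  apiHost: "https://cdn.growthbook.io",
  clientKey: "sdk-abc123", // hypothetical client key
  attributes: { cookieId: getBucketingId() },
});
```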
r
Thanks Graham! On 3a - by the 'Visual Editor', does this cover just the WYSIWYG editor, or does it also apply to the custom JavaScript editor?