melodic-spring-21870 (3/31/2023, 12:55 PM)
on the main growthbook instance.
happy-autumn-40938 (3/31/2023, 4:55 PM)
could be a load-balanced or dedicated entrypoint. If you are horizontally scaling your proxy server, then you really should be using a central store (Redis) to ensure each proxy instance has synchronized state. We use Redis's built-in pub/sub to let the proxy instances fan out and update each other's local state based on global state changes. In the proxy readme, check out these fields for setting up Redis pub/sub:
CACHE_ENGINE=redis
CACHE_CONNECTION_URL=<your redis connection url>
PUBLISH_PAYLOAD_TO_CHANNEL=1  # this is what enables pub/sub for syncing state on scaled proxy clusters
) if you'd like to fine-tune cache availability.
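To illustrate the pub/sub syncing described above, here's a minimal in-process sketch of the pattern. This is not GrowthBook Proxy code: `FakePubSub` stands in for Redis's publish/subscribe channels, and `ProxyInstance` stands in for a proxy node that refreshes its local cache whenever a state-change message is broadcast.

```python
from collections import defaultdict

class FakePubSub:
    """In-process stand-in for Redis pub/sub, just to show the fan-out shape."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        # Every subscribed proxy instance receives the same message.
        for callback in self.subscribers[channel]:
            callback(message)

class ProxyInstance:
    """Each proxy keeps a local cache and updates it from channel messages."""
    def __init__(self, bus, channel="gb_payload"):
        self.cache = {}
        bus.subscribe(channel, self.on_message)

    def on_message(self, message):
        # In this sketch a message is a (connection_key, payload) pair.
        key, payload = message
        self.cache[key] = payload
```

The point is that a payload update published once reaches every instance's local cache, which is what `PUBLISH_PAYLOAD_TO_CHANNEL=1` enables via Redis for a scaled proxy cluster.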
melodic-spring-21870 (3/31/2023, 8:20 PM)
happy-autumn-40938 (3/31/2023, 10:28 PM)
green-electrician-41888 (04/01/2023, 6:50 PM)
happy-autumn-40938 (04/02/2023, 4:52 AM)
, which lets you specify things like
; etc...)
2. PROXY_HOST_PUBLIC. This kind of goes back to point 1 above, but if you allow all proxies to keep information about all SDK Connections, then you'd just need a fan-out mechanism to post updates from your GrowthBook app(s) to your proxy webhook endpoints (which is essentially what PROXY_HOST_PUBLIC is). We don't currently have something out of the box for this, but you could probably rig something up at whatever URL you provide; you'd just need to roll your own fan-out. However...
3. Central vs. per-cluster Redis: ...if you did make the concession that each proxy instance (even those in different clusters) could share a common central Redis store, then you would not need to roll your own fan-out; you could use the built-in Redis pub/sub mechanism, which GrowthBook Proxy supports natively.
Most of what I've mentioned above will work mostly out of the box, with the exception of rolling your own fan-out. Let me know which way you're leaning and hopefully we can narrow in on a strategy.
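For the "roll your own fan-out" idea in point 2, a minimal sketch of the shape it could take. Everything here is hypothetical (the function name, the injected `post` callable, the example URLs); in practice `post` would be an HTTP POST of the payload update to each proxy's webhook endpoint.

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(payload, proxy_urls, post):
    """Send a payload update to every proxy webhook endpoint in parallel.

    `post` is injected (e.g. a thin wrapper around an HTTP client's POST)
    so the fan-out logic itself stays testable without a network.
    Returns a list of (url, result) pairs.
    """
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = pool.map(lambda url: (url, post(url, payload)), proxy_urls)
        return list(results)
```

Injecting the sender also gives you one place to add retries or per-endpoint error handling, which you'd likely want since any single proxy being down shouldn't block updates to the rest.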
green-electrician-41888 (04/02/2023, 7:50 PM)
abundant-airline-48890 (05/25/2023, 9:17 PM)