# ask-questions


11/07/2023, 10:57 AM
Hi All, I've been looking for a way to change experiment variation values and weights using a single API call (also to add/remove variations). The only way I found is to call 2 APIs: first, update the experiment while setting a list of variations with new weights, and second, update the feature with the variations' values inside the rule (experiment). Executing 2 API calls is not atomic and may cause issues on the client side when reading the data. I didn't find a way to update the values of variations through the update-experiment API. Did I miss something? Or is there another way of doing so?


11/07/2023, 10:57 AM
@victorious-nightfall-21291 has opened an issue
Our official support hours are 6:30am - 5pm Pacific Time, Monday through Friday. You may occasionally hear from us outside of these hours. Your support request has been logged in our system. Our support team will get back to you very soon!


11/07/2023, 2:20 PM
Hi, I’ve got your support request in my queue and I’ll follow up as soon as I can


11/07/2023, 4:12 PM
There is not a way to do this with a single API call. The approach you outlined - updating the experiment first, then the feature - should work fine, though. Even if the 2nd update fails, the feature will not break: the feature will fall back to its default value for any new variation that is not set. And if a variation has been removed but the feature flag hasn't been updated yet, the SDK will just ignore that extra variation.
👍 1
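The two-step flow described above could be sketched like this. The endpoint paths, ids, and payload field names below are illustrative assumptions, not the exact GrowthBook REST API; check the API docs for the real shapes. The `post` callable is injected so the sequencing logic is easy to test.

```python
# Hypothetical sketch of the two sequential API calls (not atomic).
# API_BASE, the ids, and the JSON field names are assumptions.
from typing import Callable

API_BASE = "https://api.growthbook.io/api/v1"  # assumed base URL


def experiment_payload(variations: list) -> dict:
    # Step 1: update the experiment with the new variation list/weights.
    return {
        "method": "POST",
        "url": f"{API_BASE}/experiments/exp_123",  # hypothetical experiment id
        "json": {"variations": variations},
    }


def feature_payload(value_map: dict) -> dict:
    # Step 2: update the linked feature so each rule value points at
    # the new variation ids.
    return {
        "method": "POST",
        "url": f"{API_BASE}/features/my-feature",  # hypothetical feature id
        "json": {"valueMap": value_map},           # illustrative field name
    }


def update_both(post: Callable[[dict], bool], variations: list, value_map: dict) -> str:
    # Not atomic: if step 2 fails, the feature keeps serving its default
    # value for any unmapped variation (per the reply above), so the
    # failure mode is graceful rather than broken.
    if not post(experiment_payload(variations)):
        return "experiment-update-failed"
    if not post(feature_payload(value_map)):
        return "feature-update-failed"  # retry this call; experiment is already updated
    return "ok"
```

With a real HTTP client, `post` would send each request and return whether it succeeded; only the second call needs retrying, since the first has already committed.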


11/08/2023, 9:41 AM
@future-teacher-7046 Thank you for your response. I'm having trouble understanding the answer. Could you please provide more details or clarify? Specifically: • do we need to stop the experiment before it is updated, and is there a way to expose one API that does this as an atomic request? • how will the update impact the distribution of users who get the feature flag and participate in the experiment?


11/08/2023, 1:36 PM
Changing weights and variations while a test is actively running will likely cause data quality issues and lead to unreliable A/B test results. We recommend only doing that while the test is still in a draft state. It sounds like you might be trying to do something like a multi-armed bandit where you dynamically adjust weights and variations throughout a test? If so, we don't really support this use case right now, but it's on our roadmap.
👍 1
When you do modify the list of variations for an experiment, you will need to update the linked feature flags so the value mapping is pointing to the new variation ids. There is no way to do both in a single API call. You are right that it's not atomic and there's a chance the 2nd API call fails and the feature never gets updated. If that happens, the SDK will fail gracefully and just exclude users from the experiment.
👍 1
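Since the second call is the one that can leave the flag out of sync, wrapping it in a small retry could shrink that window. This is a generic sketch, not a GrowthBook-provided helper; the `attempts` and `delay` parameters are illustrative defaults.

```python
import time


def retry(call, attempts: int = 3, delay: float = 0.5):
    # Generic retry-with-backoff wrapper for the second, non-atomic
    # API call: if the feature update fails transiently, keep retrying
    # so the flag doesn't stay out of sync with the experiment for long.
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(delay * (2 ** i))  # exponential backoff
```

If every retry fails, the SDK still degrades gracefully as noted above: users are simply excluded from the experiment until the feature update lands.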