Hi Luke! We routinely run one-sided experiments (in fact, more often than two-sided ones). The thing is, most of the time we aren't interested in whether a treatment is worse; all we care about is whether it's better, and if it isn't, we get rid of it. We even launch left-sided tests to check whether the treatment is actually worse than the control, and only reject it if it is (this type of test is basically a precautionary measure to make sure there's no significant drop).
We've been able to use this approach with the Bayesian engine, in a way, thanks to the Bernstein–von Mises theorem:
https://en.wikipedia.org/wiki/Bernstein%E2%80%93von_Mises_theorem
But honestly, I’d be much more comfortable going frequentist all the way, with confidence intervals and all that.
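For what it's worth, the frequentist version of the right-sided check described above is straightforward to do by hand. Here's a minimal sketch of a one-sided two-proportion z-test (the counts and sample sizes are made-up illustration values, not from any real experiment), where the alternative hypothesis is that the treatment converts better than the control:

```python
import math

def one_sided_z_test(conv_a, n_a, conv_b, n_b):
    """Right-sided two-proportion z-test.

    H0: treatment (B) converts no better than control (A).
    H1: treatment converts better.
    Returns (z, one_sided_p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under H0 (no difference between groups)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value: probability of seeing a z at least this
    # large under H0, using the standard normal CDF via erf
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical example: 10.0% vs 11.0% conversion, 10k users per arm
z, p = one_sided_z_test(conv_a=1000, n_a=10000, conv_b=1100, n_b=10000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

The left-sided "precaution" test is the same thing with the sign flipped (or with A and B swapped): you'd only kill the treatment if that one-sided p-value comes out significant.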