Hi Alessandro.
You're correct that this does expose people to peeking risk. Three things.
1. We plan to build in sample-size estimates for both our Bayesian and Frequentist engines, so you can see how many users are needed to reach a given level of certainty (e.g., power for a given effect size in the Frequentist case). This work is being planned and should arrive later this year.
2. For now, you can enable sequential testing (see your org settings) to prevent false positives from peeking, although this comes at a significant cost to power.
3. You can also set a Minimum Sample Size on a metric, which hides results until that metric value has been reached. It's a blunt instrument, unfortunately, but it's all that exists today; I believe you've already found it. As for "but I do not see any advice in the UI regarding the fact that the sample size has not been reached yet": we do surface something for the Bayesian engine but not for the Frequentist engine. I'll open a ticket for this, although I suspect we'll wait until we have a robust power calculator in the app before resolving it.
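In the meantime, you can estimate a rough target sample size yourself before the in-app calculator ships. Here is a minimal sketch of a standard two-proportion sample-size calculation (normal approximation); the function name and defaults are my own, not part of the product:

```python
from statistics import NormalDist

def required_n_per_arm(p_base: float, mde_abs: float,
                       alpha: float = 0.05, power: float = 0.8) -> float:
    """Approximate users needed per variation for a two-sided
    two-proportion z-test (normal approximation).

    p_base  : baseline conversion rate (e.g. 0.10 for 10%)
    mde_abs : minimum detectable effect, absolute (e.g. 0.01 for +1pp)
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z(power)            # quantile corresponding to desired power
    p_var = p_base + mde_abs
    # Sum of Bernoulli variances under baseline and variant rates
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return (z_alpha + z_beta) ** 2 * variance / mde_abs ** 2

# Example: 10% baseline, detect a +1pp absolute lift
# → roughly 14,700-14,800 users per arm at alpha=0.05, power=0.8
print(round(required_n_per_arm(0.10, 0.01)))
```

You could then set the Minimum Sample Size on the metric to something near that estimate, which approximates a fixed-horizon design until the built-in calculator lands.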