Regulators are generally very conservative. Spiegelhalter et al. wrote a fantastic textbook on Bayesian methods for trial analysis back in 2004. It is a great synthesis and is used by statisticians in other fields too; I have seen it cited in DeepMind presentations, for example.
Bayesian methods enable using prior information and adaptive trial designs, which have the potential to make drug development much cheaper. It's also easier to factor in utility functions and weigh costs against benefits. But things move slowly.
They are used in some trials, but they are not the norm, and using them means swimming against the current. This is actually a great niche for a startup: leveraging prior knowledge to make target discovery, pre-clinical work, and clinical trials more adaptive and efficient.
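The adaptive-design point can be made concrete with a toy two-arm trial. This is only a sketch, not a real trial design: the response rates, prior pseudo-counts, and patient numbers are all made up. It uses Thompson sampling (one common Bayesian adaptive-allocation rule) with conjugate Beta-Binomial updates, and shows how an informative prior on the control arm slots straight into the machinery.

```python
import random

random.seed(0)

# Hypothetical two-arm adaptive trial via Thompson sampling.
# Each arm's response rate has a Beta(a, b) posterior; the control
# prior encodes (made-up) historical data, the treatment prior is flat.
posteriors = {
    "control":   [10, 10],  # informative prior: ~50% response rate
    "treatment": [1, 1],    # flat prior: little prior knowledge
}
true_rates = {"control": 0.50, "treatment": 0.65}  # unknown in a real trial

allocations = {"control": 0, "treatment": 0}
for _ in range(500):  # enrol 500 patients one at a time
    # Draw a plausible response rate for each arm from its posterior,
    # then assign the patient to the arm with the higher draw.
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in posteriors.items()}
    arm = max(draws, key=draws.get)
    response = random.random() < true_rates[arm]
    a, b = posteriors[arm]                 # conjugate Beta-Binomial update
    posteriors[arm] = [a + response, b + (not response)]
    allocations[arm] += 1

print(allocations)  # the better arm ends up with most of the patients
```

The design choice this illustrates: allocation probabilities track the posterior, so the trial automatically shifts patients toward the arm that is looking better, instead of keeping a fixed 50:50 split for the full 500 patients.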
Journals are also conservative. But Bayesian methods are not that niche anymore. Even mainstream journals such as Nature or Nature Genetics include Bayesian-specific items in their standard submission checklists [1]; for example, you are required to report your choice of priors and your MCMC settings.
[1] https://www.nature.com/documents/nr-reporting-summary-flat.p...
Bayesian methods are completely standard in most fields I've been involved with (cosmology is a paradise for anyone after beautiful Bayesian applications). I'm surprised there are still holdouts, especially in fields where the stakes are so high. There are also plenty of blog posts and classroom lessons about how frequentist trial designs kill people: if you are not allowed to deviate from your experimental design, but you already have enough evidence to form a strong belief about which treatment is better, isn't continuing the trial unethical? The reality is probably less simplistic than that, but I've seen many instantiations of the argument.
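The early-stopping side of that argument can be sketched in a few lines: run interim analyses as data accumulates and stop the trial once the posterior probability that treatment beats control is decisive. Again a toy, with hypothetical response rates, made-up batch sizes, and an arbitrary 0.99 threshold; the comparison probability is estimated by Monte Carlo from the two Beta posteriors.

```python
import random

random.seed(1)

# Interim-analysis sketch: after each batch of patients, estimate
# P(treatment beats control | data) and stop early when it is decisive.
control = [1, 1]      # Beta pseudo-counts [successes + 1, failures + 1]
treatment = [1, 1]
true_rates = (0.40, 0.60)  # unknown in a real trial

def prob_treatment_better(n=20000):
    # Monte Carlo estimate of P(p_treatment > p_control) under the posteriors
    wins = sum(
        random.betavariate(*treatment) > random.betavariate(*control)
        for _ in range(n)
    )
    return wins / n

stopped_at = None
for batch in range(1, 21):  # up to 20 batches of 10 patients per arm
    for arm, rate in ((control, true_rates[0]), (treatment, true_rates[1])):
        for _ in range(10):
            r = random.random() < rate
            arm[0] += r
            arm[1] += (not r)
    if prob_treatment_better() > 0.99:  # decisive evidence: stop enrolling
        stopped_at = batch * 20
        break

print(stopped_at)  # patients enrolled before stopping, far fewer than the maximum
```

With an effect this large, the rule stops well before the 400-patient maximum, which is exactly the ethical point: fewer patients stay on the worse arm once the evidence is in.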