Statistical Significance and the Replication Crisis
Most scientific publications are wrong. In 2005, John Ioannidis published a famous paper, "Why Most Published Research Findings Are False," which demonstrated why. To simplify dramatically: suppose we use a significance level of 5%. If people run 100 experiments on hypotheses with no real effect, about 5 will produce falsely statistically significant results, and those results will be published. Since the majority of experiments test effects that do not exist, the minority that do might contribute only another 5 or so published positive results. Accounting for other sources of error, published positive results end up dominated by statistical flukes. Several popular articles explain this more intuitively (if less rigorously) than the original Ioannidis paper.
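The arithmetic above can be made concrete with a back-of-the-envelope sketch. The specific numbers here (10 real effects out of 100 experiments, 50% statistical power) are illustrative assumptions chosen to roughly match the text, not figures from any particular study:

```python
# Back-of-the-envelope model of published positive results.
# Illustrative assumptions (not from any specific study):
#   - 100 experiments, only 10 of which test a real effect
#   - significance level alpha = 0.05 (false-positive rate under the null)
#   - statistical power = 0.5 (chance of detecting a real effect)
experiments = 100
true_effects = 10
alpha = 0.05
power = 0.5

# Experiments with no real effect still "succeed" at rate alpha:
false_positives = (experiments - true_effects) * alpha   # 90 * 0.05 = 4.5
# Experiments with a real effect succeed at the power rate:
true_positives = true_effects * power                    # 10 * 0.5  = 5.0

# Fraction of statistically significant results that are false:
false_share = false_positives / (false_positives + true_positives)
print(f"false positives: {false_positives}")
print(f"true positives:  {true_positives}")
print(f"share of positive results that are false: {false_share:.0%}")
```

Under these assumptions, nearly half of all "significant" findings are flukes, before even considering bias, p-hacking, or other errors. Lower the share of real effects or the power, and the picture gets worse.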
These effects lead to the replication crisis: most studies, especially in fields like medicine and social science, cannot be replicated. By a conservative estimate, 50-65% of medical results fail to replicate; by a more liberal one, 88% do not.
When addressing this, keep publish-or-perish incentives in mind. High publication counts lead to academic jobs, and scrutinizing a result from multiple angles or validating it takes time away from producing more publications.