Science is hard. If you want to make a new discovery, you not only have to observe an effect, test your hypothesis, and get your results peer-reviewed and published - your idea also has to stand up to rigorous independent testing.

That's called the scientific method, and it's how we attempt to eliminate most flukes and false positives from published research.

But, as the latest episode of Veritasium explains, despite this lengthy process, a lot of peer-reviewed research out there is actually wrong, which highlights a serious problem in the way we do science.

So what's going on? A lot of it comes down to one problem: data can't speak for itself, and always has to be interpreted by someone. And unfortunately, humans are an unpredictable variable.

Take this 2011 paper published in the Journal of Personality and Social Psychology, called "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect", for example.

In that study, researchers said they'd found evidence that humans could predict the future… all based on the fact that when asked to pick which of two curtains would have an image behind it, participants picked the right answer 53 percent of the time.

If participants were simply guessing, the expected hit rate would be 50 percent - so, the authors argued, those extra 3 percentage points were evidence of a real effect.

That sounds pretty bogus, but here's the thing - the study was statistically "significant", meaning the researchers crunched the numbers and got a p value less than 0.05. In other words, if the participants really were just guessing, a hit rate at least that high would turn up less than 5 percent of the time.
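To see where a number like that comes from, here's a minimal sketch of the arithmetic in Python. The trial counts are made up for illustration - Bem's actual experiments used different designs and sample sizes - but the logic of a one-sided binomial test is the same:

```python
from math import comb

# Hypothetical numbers, purely for illustration - the real experiments
# used different designs. Suppose 1,000 two-choice trials with a
# 53 percent hit rate:
n = 1000          # total guesses
k = 530           # correct guesses (53 percent)
p_chance = 0.5    # probability of guessing right by pure chance

# One-sided binomial test: how likely are k or more correct guesses
# out of n if every guess is really just a 50/50 coin flip?
p_value = sum(comb(n, i) * p_chance**i * (1 - p_chance)**(n - i)
              for i in range(k, n + 1))

print(f"p-value = {p_value:.3f}")  # ~0.031, i.e. under the 0.05 cutoff
```

With these made-up numbers, the p value comes out around 0.03 - comfortably "significant", even though it describes a hit rate only 3 points better than a coin flip.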

For decades, getting a p value below 0.05 has been the be-all-and-end-all of determining the worth of a result, and that one number generally determines whether a study is worthy of being published or not.

But as we've mentioned before, this is incredibly problematic: not only does it produce a whole lot of false positives, it also leaves data open to p-hacking - which is when researchers keep tweaking their analysis (excluding outliers, adding participants, splitting results into subgroups) until a significant result falls out.
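To get a feel for how quickly those false positives pile up, here's a toy simulation - all numbers made up, and the subgroup slicing deliberately simplified - of a researcher who looks at pure-noise data 20 different ways and reports the first slice that crosses p < 0.05:

```python
import random
from math import comb

random.seed(0)  # reproducible

def binom_p(hits, n, p=0.5):
    """One-sided p-value: chance of `hits` or more successes in n fair trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(hits, n + 1))

N_TRIALS = 100    # guesses per analysis (hypothetical)
N_SLICES = 20     # ways of slicing the same noise (by age, image type, ...)
N_STUDIES = 1000  # simulated studies, all with NO real effect

lucky_studies = 0
for _ in range(N_STUDIES):
    for _ in range(N_SLICES):
        hits = sum(random.random() < 0.5 for _ in range(N_TRIALS))
        if binom_p(hits, N_TRIALS) < 0.05:  # "significant" by luck alone
            lucky_studies += 1
            break  # the researcher stops and reports this slice

print(f"{lucky_studies / N_STUDIES:.0%} of pure-noise studies found "
      "a 'significant' result")  # around 60 percent with these settings
```

Each individual test in that simulation is perfectly honest - the damage comes from running many of them and only reporting the one that happens to win.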

We'll let Derek talk you through this in the video above, because it's pretty complicated but important stuff, and it's definitely worth investigating more closely.

But the important thing to know here is that most scientists aren't doing this maliciously - a lot of these false results are a symptom of the system: careers depend on publishing papers, and journals rarely publish non-significant results or replications.

The good news is that many scientists now recognise that there's a reproducibility crisis in science, and are actively looking for ways to change the publication process and make it more accurate and transparent. 

We're looking forward to seeing what happens next, because if there's one thing we need, it's a scientific model we can rely on.