A study's p-value is the probability of getting measurements at least as extreme as yours even if there's no real relationship between the variables you're testing.

Smaller p-values suggest your observations would be unlikely to occur by chance alone, making them a good indication that the variables being tested and measured in your experiment are genuinely connected.

Larger p-values suggest your results look more or less like those you'd get when no relationship exists, meaning you can't be confident there is a connection after all.
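To see what that means in practice, here's a minimal sketch of a p-value calculated by brute force with a permutation test in Python. The weight-gain numbers are invented purely for illustration.

```python
# A sketch of what a p-value measures, via a permutation test.
# The "weight gain" figures below are made up for illustration only.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical weight gain (kg) for two imaginary groups of people.
high_fat_diet = np.array([2.1, 1.8, 2.5, 3.0, 2.2, 1.9, 2.7, 2.4])
low_fat_diet = np.array([1.2, 0.9, 1.5, 1.1, 1.8, 1.0, 1.4, 1.3])

observed_diff = high_fat_diet.mean() - low_fat_diet.mean()

# If diet makes no difference, the group labels are interchangeable.
# Shuffle the labels many times and count how often a difference at
# least as large as the real one appears purely by chance.
combined = np.concatenate([high_fat_diet, low_fat_diet])
n = len(high_fat_diet)
trials = 10_000
count = 0
for _ in range(trials):
    rng.shuffle(combined)
    diff = combined[:n].mean() - combined[n:].mean()
    if abs(diff) >= abs(observed_diff):
        count += 1

p_value = count / trials  # fraction of shuffles as extreme as the real data
print(f"observed difference: {observed_diff:.2f} kg, p-value: {p_value:.4f}")
```

The p-value here is literally the fraction of "no relationship" worlds that still produce results as extreme as the ones observed.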

Wait, what? Can you explain p-value in simpler terms?

Experiments in science typically compare certain characteristics of people, objects, or events to understand how they might be related.

Does eating fatty foods make you put on weight? Can a particular drug reduce the symptoms of a disease? How much caffeine is good (or bad) for you? We might assume there is a connection, with one thing causing another.

If we believe one of those things influences the other, we can call that proposed relationship a hypothesis. (For example, "fatty food makes you gain weight" is a hypothesis.)

In reality, there might be no relationship whatsoever. We call this a null hypothesis. ("Fatty food doesn't make you gain weight" is a null hypothesis.)

It's impossible to know which one is truly at work in the Universe. Sadly, there is no Big Book Of Answers you can peek inside. The best anybody can do is measure each variable and compare them to see if an increase in one means an increase (or decrease) in the other, a pattern we refer to as a correlation.

A p-value is a way to statistically test a possible correlation. It gives you a number between zero and one; the closer to one it gets, the less confident you should be in your hypothesis.
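As a sketch of how that looks with an off-the-shelf tool, scipy's pearsonr measures the correlation between two variables and reports a p-value alongside it. The fat-intake and weight-gain figures below are made up.

```python
# Testing a possible correlation; the data points are invented.
from scipy import stats

fat_intake = [40, 55, 60, 72, 85, 90, 100, 110]          # grams/day (made up)
weight_gain = [0.5, 1.1, 0.9, 1.6, 2.0, 1.8, 2.4, 2.9]   # kg (made up)

# pearsonr returns the correlation coefficient r and a two-sided
# p-value testing the null hypothesis of no linear relationship.
r, p_value = stats.pearsonr(fat_intake, weight_gain)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```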

Large p-values don't mean your null hypothesis is more likely to be true; just that your results look like ones you'd expect to see anyway, giving you no solid grounds to reject it.

By the same token, the closer to zero the number gets, the less likely it is that you'd get those kinds of results if the null hypothesis were really at work. It doesn't make your hypothesis true either, but it does make it a much better bet.
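A quick simulation makes this concrete. If we run thousands of pretend experiments where the null hypothesis really is at work, small p-values show up only rarely. This sketch assumes normally distributed measurements and uses a standard t-test.

```python
# Simulate many experiments where no relationship exists, and see
# how often a small p-value turns up anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
experiments = 10_000
small = 0
for _ in range(experiments):
    # Both groups come from the *same* distribution: the null is true.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        small += 1

# When the null is true, p-values spread evenly between 0 and 1,
# so roughly 5% of experiments land below 0.05 by chance alone.
print(f"fraction of p-values below 0.05: {small / experiments:.3f}")
```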

When can we say our hypothesis is true then?

Technically, there is no single number that can tell us which hypothesis is absolutely correct.

But by convention, a p-value higher than 0.05 is taken to mean the null hypothesis remains too likely to ignore. Below 0.05, we can agree that your hypothesis deserves to be taken seriously and tested again.

Each time it's tested and survives, we can be a little more confident that we're on the right track.
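In code, the convention amounts to nothing more than a threshold comparison; the p-value below is a made-up example result, and 0.05 is the historical cut-off discussed in the next section, not a law of nature.

```python
# The conventional decision rule, as a sketch.
alpha = 0.05      # the conventional significance threshold
p_value = 0.03    # a made-up result from some hypothetical experiment

if p_value < alpha:
    print("'Statistically significant': worth taking seriously and re-testing.")
else:
    print("Can't rule out the null hypothesis with this data.")
```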

Why is 0.05 significant?

The significance of 0.05 is more of an accident of history than an objective mathematical milestone.

An influential book called Statistical Methods for Research Workers, written by British statistician Ronald Aylmer (R.A.) Fisher in the early 20th century, presented statistical tables organised around a handful of fixed cut-off points, with 0.05 among them. Fisher would later claim the cut-off of 0.05 was convenient "as a limit in judging whether a deviation ought to be considered significant or not."

Convenient as Fisher found it, some researchers argue we shouldn't be so lazy in adopting it for all things scientific. Some say we should ditch it and embrace a degree of uncertainty.

There is a counterargument claiming we should be stricter, reducing the value to an even smaller figure of 0.005 to be super-duper confident in a hypothesis.

Others warn against being so hasty in changing it at all, meaning we can expect a p-value of 0.05 to be a significant number in science for a while to come.
