A meta-analysis combines the results of numerous scientific studies and analyses them with a single statistical procedure. It's typically used to identify common findings, or to emphasise clear differences, among studies that ask a similar question.

The first clear and official use of the term 'meta-analysis' only dates back to the 1970s, when statistician Gene Glass coined it in a call for better ways to sum up the results across a field of studies.

In practice, however, researchers have been pooling data from different experiments and studies for more than a century, if not longer.

In 1904, results from nearly a dozen studies on immunity and mortality among soldiers were pooled in search of averages that might help better understand the effects of a typhoid vaccine.

How are meta-analyses done?

In general terms, there are five steps. They start with a question that allows for the formulation of a hypothesis.

This is followed by a systematic review of studies already published on a topic relevant to the question. Rarely, unpublished data might also be considered.

Once appropriate studies have been selected, the relevant data is extracted from each. Researchers also note details such as sample sizes, measures of how the data varies between groups, and any reported averages or risk estimates.

The data from individual studies is then analysed on its own to calculate quantities such as effect sizes. While the original researchers may already have done this, different studies often use methods that make their outcomes hard to compare directly, so this step also standardises the results onto a common scale.
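
To make that step concrete, here is a minimal sketch, using entirely hypothetical summary statistics, of one common way to standardise outcomes: converting each study's result into a standardised mean difference (Cohen's d), which is unitless and so comparable across studies that measured the same thing on different scales.

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardised mean difference (Cohen's d) from summary statistics.

    Dividing the raw difference in means by the pooled standard deviation
    puts studies that used different measurement scales onto a common,
    unitless scale.
    """
    pooled_sd = math.sqrt(
        ((n_treat - 1) * sd_treat ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
        / (n_treat + n_ctrl - 2)
    )
    return (mean_treat - mean_ctrl) / pooled_sd

# Hypothetical summary data from two studies that measured the same outcome
# on different scales; the resulting effect sizes are directly comparable.
print(round(cohens_d(52.0, 48.0, 10.0, 9.5, 40, 42), 2))
print(round(cohens_d(3.1, 2.6, 1.2, 1.1, 120, 118), 2))
```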

Lastly, an appropriate statistical model is applied to the combined results: either a fixed-effect model, which assumes a single true effect common to all of the studies, or a random-effects model, which allows the true effect to vary from study to study.
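
As a minimal sketch, assuming made-up effect sizes and variances, the example below pools results under a fixed-effect model using standard inverse-variance weighting; a random-effects model would add an estimate of between-study variance to those weights.

```python
import math

def fixed_effect_pool(effects, variances):
    """Fixed-effect pooling via inverse-variance weighting.

    Each study's effect size is weighted by 1 / variance, so more precise
    studies contribute more to the combined estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical standardised effect sizes and their variances from three studies.
effects = [0.41, 0.35, 0.52]
variances = [0.05, 0.02, 0.08]

estimate, se = fixed_effect_pool(effects, variances)
low, high = estimate - 1.96 * se, estimate + 1.96 * se
print(f"pooled effect = {estimate:.2f} (95% CI {low:.2f} to {high:.2f})")
```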

What are the benefits of a meta-analysis?

While carrying out a simple 'narrative' review of multiple studies can sum up repeating (or contradicting) patterns, it often relies on the experience of an expert in the field and is therefore open to biases.

Meta-analyses instead depend on mathematical formulae to identify trends in the pooled data, an approach that is considered to be more objective.

There is also an advantage in having more data at hand. A few studies that each collect only a small number of observations might not be statistically powerful on their own. Pooled together, those numbers can add up to reveal a far stronger statistical signal, and the combined sample may better represent a wider part of the population.
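
A rough sketch of why pooling helps, using made-up numbers: the uncertainty (standard error) of an estimate shrinks with the square root of the sample size, so combining several small studies narrows the range of plausible results.

```python
import math

# Hypothetical: five small studies each measure the same quantity, with the
# same per-observation standard deviation but only 20 participants apiece.
sd = 10.0
n_per_study = 20
se_single = sd / math.sqrt(n_per_study)

# Pooling all five is roughly equivalent to one study with 100 participants,
# so the standard error of the combined estimate shrinks by a factor of sqrt(5).
n_pooled = 5 * n_per_study
se_pooled = sd / math.sqrt(n_pooled)

print(f"standard error of one small study:   {se_single:.2f}")
print(f"standard error of the pooled sample: {se_pooled:.2f}")
```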

Why don't we do meta-analyses all of the time?

As useful as they are, meta-analyses have disadvantages. It takes a great deal of time and experience to sift through numerous studies in search of the few that might have what you need. The statistics involved also demand specialist knowledge.

Finding the right studies is important. Even the best analysis will come to poor conclusions if the original research was poorly conducted.

Focussing only on published studies might help ensure they're of higher quality than work that has not been peer reviewed, but it also risks what's known as the 'file drawer effect': a publication bias that excludes otherwise useful observations from studies that don't quite turn out as hoped.
