The latest experiments testing whether a number of high-profile cancer research papers can be reproduced have just given most of the results the thumbs-up.

The report is encouraging, coming after five earlier replication attempts in which just two papers could be verified, all part of an initiative called the Reproducibility Project: Cancer Biology.

The project is a collaboration between the US Center for Open Science and Science Exchange, sparked by claims from companies that, in following up on preclinical studies of cancer treatments, as many as 89 percent of the results couldn't be replicated.

Replication is a big deal in science. If we're to have much confidence in an experiment's conclusions, following its method should produce the same results.

That's not to say all scientists have to agree on how to interpret results, but when the same method produces conflicting sets of data, any trust in an experiment goes out the window.

Psychology has come under fire in recent years with what's been termed the "reproducibility crisis", where repeating classic – and often influential – experiments doesn't end up producing the same observations.

Calling it a crisis might seem a little extreme, but a survey conducted by Nature last year found just over half of 1,576 researchers surveyed felt that there is a significant problem.

There is no shortage of opinions on why so many experiments defy reproduction, or on what we can do about it. For example, the pressure to constantly "publish or perish" new findings rather than spend time testing old ones has been blamed for creating a natural selection of bad science.

Opening up research so that all results (not just the successful ones) are shared has also been touted as a way to make the culture of science a little more honest.

The Reproducibility Project: Cancer Biology is a practical response to the problem, providing evidence on the nature of reproducibility in cancer research while identifying factors that affect our ability to replicate results generally.

The project initially centred on 50 influential cancer papers published between 2010 and 2012, but not all researchers were keen to have their work so heavily scrutinised, leaving just 29 to test.

In January, the first round of results came out with less than glowing reports.

Of five key studies, two could be reproduced satisfactorily, two experienced confounding technical problems, and one couldn't be replicated at all.

As bad as that might sound, it's hard to know exactly how to interpret the outcome. On one hand, our confidence in the usefulness of such research should be shaken.

But some think it also shows us that science itself is complex.

"People make these flippant comments that science is not reproducible. These first five papers show there are layers of complexity here that make it hard to say that," Charles Sawyers, an editor at eLife and a cancer biologist, told Science magazine back in January.

The two latest studies tested a 2010 report in the journal Cancer Cell on mutations found in some forms of leukemia and brain cancer, and a 2011 Nature paper on an inhibitor that could stop leukemia cells from dividing.

Both managed to reproduce important parts of the original research, meaning four out of the seven experiments replicated so far have backed up their findings.

The results weren't perfect replications, though, so the news isn't entirely glowing. While the inhibitor from the 2011 study did reduce the growth of cancer cells in mice, the new study didn't replicate the original finding that it prolonged the animals' lives.

But since the new study deviated slightly from the original method, some researchers think it's important not to read too much into such a difference.

"I think we should be careful not to make too much of the absence of statistically significant differences in survival as an endpoint," eLife editor and Harvard University molecular biologist Karen Adelman explains to Science magazine.

The subtle difference also shows that reproducibility itself isn't an all-or-nothing affair; like science in general, it serves to inform discussion rather than provide an absolute pass or fail.

In coming years we'll no doubt see more replications that both disappoint and encourage, so we shouldn't hold our breath waiting for a simple verdict.

The real success will be new information we can use to take a good look at how we do science, and to make it more robust than ever.

All research for the Reproducibility Project: Cancer Biology is published in eLife.