In an effort to give a clearer impression of how research is actually conducted, a major medical science journal has introduced a template for a new kind of report, one that describes a study before its results are known.

The novel 'Registered Report' format won't be a cure-all for what many see as an ongoing crisis in science, but it is a significant step towards countering the forces that risk making science less reliable than it should be.

BMC Medicine is the first clinical science journal to peer-review articles based solely on a rationale and proposed methodology.

Reviewers will still evaluate submissions on the relevance and potential significance of the work, its implications for future research, and its engagement with the existing literature.

If the submission passes the stage-one review, the researchers can complete their study and submit it for a second peer review, which checks that they stuck to the registered methodology and that their conclusions are sound.

"Registered Reports format aims at fostering innovation and addressing concerns about credibility and reproducibility in science," the BMC Medicine team writes.

As rigorous as the scientific method is, researchers don't operate in a vacuum. Even the most dedicated scientists feel the pressure to 'publish or perish'.

Science itself depends on funding, and funding decisions often hinge on the eventual impact of an experiment or review.

Unexpected results don't always result in 'eureka!' moments – sometimes they're filed away, never published, and conveniently forgotten about.

This 'file-drawer problem' creates a skewed impression of an area of research, especially where statistical representations are important. A negative result might not win a Nobel Prize, but such an increment of discovery does help move understanding forward.
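To make that skew concrete, here's a minimal simulation (a purely illustrative sketch, not drawn from any study mentioned here) of a field investigating an effect that is actually zero. If only the studies that happen to clear the p < 0.05 bar get published, the literature reports a healthy-looking effect while the unpublished file drawer tells the real story. The study size, study count, and threshold below are arbitrary assumptions.

```python
# A minimal sketch of the file-drawer effect, assuming a hypothetical
# scenario: many small two-group studies of a true effect of zero,
# where only 'significant' results (p < 0.05) reach publication.
import random
import statistics

random.seed(42)

def run_study(n=30, true_effect=0.0):
    """Simulate one two-group study; return (effect estimate, significant?)."""
    treatment = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    # Crude z-style test statistic; fine for illustration.
    se = (statistics.stdev(treatment) ** 2 / n
          + statistics.stdev(control) ** 2 / n) ** 0.5
    return diff, abs(diff / se) > 1.96  # roughly the p < 0.05 threshold

all_effects, published = [], []
for _ in range(10_000):
    effect, significant = run_study()
    all_effects.append(effect)
    if significant:  # the file drawer: null results stay unpublished
        published.append(effect)

print(f"Mean effect, all studies:      {statistics.mean(all_effects):+.3f}")
print(f"Mean |effect|, published only: {statistics.mean(map(abs, published)):.3f}")
print(f"Fraction published:            {len(published) / len(all_effects):.1%}")
```

Roughly 5 percent of these null studies clear the bar by chance alone, and those are precisely the ones with the most exaggerated effect estimates, which is exactly the distortion the file drawer bakes into the literature.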

Outright fraud is rare in science (though not unknown), but there are plenty of less scrupulous practices that can tempt scientists into searching for an ace in their discard pile.

A practice known as HARKing is one example. It stands for Hypothesizing After the Results are Known, and involves retrofitting a hypothesis to the results after the data are in, then presenting it as if it had been made in advance.

P-hacking, or data dredging, is another research no-no: sifting through the numbers collected from an experiment in search of possible correlations until something crosses the threshold of statistical significance.

On the surface p-hacking seems innocent enough. Scientists go to a lot of trouble collecting piles and piles of numbers, so what's the harm in re-using all of that data to take a stab at another hypothesis?

Hunting for correlations without a hypothesis fixed in advance risks finding a pattern in the data that means absolutely nothing: run enough tests and some will come up 'significant' by chance alone, making data dredging a less than reliable way to make a discovery we can trust.
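A quick back-of-the-envelope illustration (a hypothetical sketch, unrelated to any study named in this article): if each test has a 5 percent false-positive rate, dredging 20 unrelated variables gives roughly a 1 - 0.95^20, or about 64 percent, chance of at least one spurious 'hit'. The simulation below, with arbitrary sample sizes and an assumed significance cutoff, bears that arithmetic out.

```python
# A minimal sketch of why data dredging misleads: correlate one column
# of pure noise against 20 unrelated noise variables, and chance alone
# will often hand back a 'significant' result. All numbers are arbitrary.
import random

random.seed(7)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

n_points, n_variables, trials = 30, 20, 1000
threshold = 0.361  # |r| > 0.361 is roughly p < 0.05 for n = 30

hits = 0
for _ in range(trials):
    outcome = [random.gauss(0, 1) for _ in range(n_points)]
    for _ in range(n_variables):
        predictor = [random.gauss(0, 1) for _ in range(n_points)]
        if abs(pearson_r(outcome, predictor)) > threshold:
            hits += 1
            break  # a p-hacker stops at the first 'discovery'

print(f"Runs with at least one spurious 'significant' correlation: {hits / trials:.0%}")
# Expect roughly 1 - 0.95**20, about 64%, even though nothing is related.
```

The point of a Registered Report is to rule this out by design: the hypothesis and the planned tests are locked in at stage one, so there is no pile of leftover comparisons to dredge through afterwards.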

These practices aren't always easy to spot, but they do seem prevalent.

In a tracking study called COMPare, British science writer Ben Goldacre and his team evaluated 67 trials published between October 2015 and January 2016, and found that only 9 reported all of their pre-specified outcomes without quietly slipping in results that hadn't been part of the original hypothesis.

However common these biases might be, they've no doubt contributed to what's become known as the reproducibility crisis in science, in which attempts to repeat published studies fail to reproduce the original results alarmingly often.

Initiatives such as the Reproducibility Project: Cancer Biology are attempting to address the problem by selecting the most relied-upon studies for replication, in an attempt to sift the gold from the dross.

Registered Reports are a potential way to fix the problem from the other end, committing researchers to their hypothesis and methods early, before any temptation arises to massage the narrative to fit the results.

The process itself isn't new, but BMC Medicine is the first medical journal to implement the format, hopefully setting a precedent that other journals will follow.

Science is a system that aims to overcome the biases that plague us as social animals, so having a new tool to address a weakness in how we communicate research is a big bonus.

It won't fix every problem facing the research community. But science is a self-correcting philosophy, so we can expect more corrective measures like these to be trialled in the future.