When it comes to scientific publishing, journals live and die by a metric called the journal impact factor (JIF), which claims to measure how 'influential' and 'prestigious' a publication is.

But new research has found what many scientists already suspect to be true - journal impact factors don't mean as much as we think they do, and a journal's impact factor can't tell you how influential its individual articles really are.

Scientists have long had a love-hate relationship with the JIF - the higher a journal's impact factor, the more researchers want to publish in it, and the more competitive it gets. That means academics have to jump through all kinds of hoops to get their papers accepted, even though many of them doubt that the JIF measures anything useful.

And at the same time, hiring committees and grant agencies use the JIF to help decide who gets jobs, funding, and tenure. In other words, scientists can't live with the JIF, but they certainly can't live without it.

But does this ranking system even work?

Calculating the impact factor of a journal is actually pretty simple - you take the number of times the journal's papers from the previous two years were cited this year, divide that by the number of papers it published in those two years, and voila, you have an impact factor.

So, for example, Nature has a JIF of 41.456, which means that, on average, the articles it published over the previous two years were each cited about 41 times.
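To make the arithmetic concrete, here's a minimal sketch of the calculation in Python. The figures are invented for illustration only - the real inputs come from Thomson Reuters' proprietary citation database.

```python
# Hypothetical numbers for illustration - not real journal data.
citations_in_2015_to_2013_14_papers = 35_000  # citations received this year
citable_items_2013_14 = 1_700                 # papers published in the two-year window

# The impact factor is simply the ratio of the two.
jif_2015 = citations_in_2015_to_2013_14_papers / citable_items_2013_14
print(f"2015 impact factor: {jif_2015:.3f}")  # ~20.588
```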

Not bad at all, and you can see why it would be appealing for researchers to fight over spots in the publication.

But there are two big problems with that number. First, it's calculated behind closed doors by the private company Thomson Reuters, which charges for access to the citation data it uses to work out JIFs - and isn't transparent about how it counts and matches citations.

And secondly, this latest research - which involved academics from some of the world's biggest publishing houses - shows that the number can be highly misleading (something many scientists have long suspected).

"If the citation counts of articles were like the heights of people, then the average number would be informative," writes journalist John Bohannon for Science.

"But for the articles published in any given journal, the distribution of citations is highly skewed. A small fraction of influential papers get most of the citations, whereas the vast majority of papers get few or none at all."

In the latest study, researchers from Imperial College London and the University of Montreal teamed up with academics from the Public Library of Science (PLOS), eLife, Springer Nature, the Royal Society, EMBO, and the AAAS, which publishes Science - all major journal publishers, which, between them, published more than 366,000 research articles in 2013 and 2014.

To figure out how accurate JIFs are, they paid for access to the Thomson Reuters database and counted every 2015 citation to each one of those papers - and the university researchers independently verified the results.

What they found was that up to 75 percent of the articles in any given journal had a much lower citation count than the journal's impact factor.

That means researchers who use impact factors to predict how often their own work will be cited are usually going to be disappointed by the actual numbers.

The study also revealed several problems with the Thomson Reuters database, including many citations that couldn't be matched to the articles they referred to.

"We hope that this analysis helps to expose the exaggerated value attributed to the JIF and strengthens the contention that it is an inappropriate indicator for the evaluation of research or researchers," the team wrote in a preprint paper posted to the open-access site bioRxiv.

The researchers posted the study there so it can be scrutinised by their peers before they submit it to a journal for publication.

They've also shared their results with Thomson Reuters, which has been receptive to improving the system and opening a better dialogue with publishers - though, unsurprisingly, the company doesn't want to do away with the JIF altogether, seeing as it makes money from it.

But the company does concede that the JIF should only be part of the picture when researchers choose a journal - not the whole story, as it often is now.

"The authors are correct to point out that JIF should only be used as an aid to understand the impact of a journal," James Pringle, Thomson Reuters' head of Industry Development and Innovation, IP & Science, told Science. "JIF is a reflection of the citation performance of a journal as a whole unit, not as an assembly of diverse published items."

So where does that leave us? Pretty much back where we started, except at least now we have some data to back up the general consensus that JIFs aren't that useful.

And, for many researchers, this is just one small problem with the academic publishing industry as a whole - many scientists are pushing back against paywalled journal models, and more and more are choosing to post their research on preprint servers, where the public can read it for free.

"These results, from a limited population of journals, are interesting and worthy of further and larger investigation," David Smith, an academic publishing expert in Oxfordshire in the UK, who wasn't involved in the study told Science.

"[But] JIF isn't the problem," he added. "It's the way we think about scholarly progress that needs the work."

Baby steps, guys, baby steps.