
Some of The World's Most-Cited Scientists Have a Secret That's Just Been Exposed

PETER DOCKRILL
29 AUG 2019

A new study has revealed an unsettling truth about the citation metrics that are commonly used to gauge scientists' level of impact and influence in their respective fields of research.

Citation metrics indicate how often a scientist's research output is formally referenced by colleagues in the footnotes of their own papers – but a comprehensive analysis of this web of linkage shows the system is compromised by a hidden pattern of behaviour that often goes unnoticed.

Specifically, among the 100,000 most-cited scientists between 1996 and 2017, there's a stealthy pocket of researchers who represent "extreme self-citations and 'citation farms' (relatively small clusters of authors massively citing each other's papers)," explain the authors of the new study, led by physician-turned-meta-researcher John Ioannidis from Stanford University.

Ioannidis helps to run Stanford's meta-research innovation centre, METRICS, which works to identify and solve systemic problems in scientific research.

One of those problems, Ioannidis says, is how self-citations compromise the reliability of citation metrics as a whole, especially at the hands of extreme self-citers and their associated clusters.

"I think that self-citation farms are far more common than we believe," Ioannidis told Nature. "Those with greater than 25 percent self-citation are not necessarily engaging in unethical behaviour, but closer scrutiny may be needed."

The 25 percent figure Ioannidis is referring to describes scientists for whom self-citations – citations coming from their own papers or those of their co-authors – make up 25 percent of all the citations their work receives.

Being one-quarter of your own fan base might seem like a lot of self-citing, but it's not even that uncommon, the study reveals.

Among the 100,000 most highly cited scientists for the period of 1996 to 2017, over 1,000 researchers self-cited more than 40 percent of their total citations – and over 8,500 researchers had greater than 25 percent self-citations.
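The ratio the study tracks can be illustrated with a short sketch. Everything below – the data layout, the function name, and the sample authors – is invented for illustration; the study itself worked from the Scopus database with a more elaborate co-author analysis.

```python
# Illustrative sketch: a self-citation rate is the fraction of citations
# to an author's work that come from papers the author also wrote.
# Data layout and names are hypothetical, not the study's actual method.

def self_citation_rate(citing_author_sets, author):
    """Return the fraction of citing papers that include `author`."""
    if not citing_author_sets:
        return 0.0
    self_cites = sum(1 for authors in citing_author_sets if author in authors)
    return self_cites / len(citing_author_sets)

# Each entry is the author set of one paper that cites Doe's work.
citations = [
    {"Smith", "Jones"},   # independent citation
    {"Doe", "Smith"},     # self-citation: Doe citing Doe
    {"Doe"},              # self-citation
    {"Lee"},              # independent citation
]

rate = self_citation_rate(citations, "Doe")
print(f"{rate:.0%}")  # prints "50%"
```

Under the study's reported threshold, an author like this hypothetical "Doe" – at 50 percent – would fall well inside the group the authors flag for closer scrutiny.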

There's no suggestion that any of these self-citations are automatically unethical, unwarranted, or self-serving. After all, in some cases your own published research may genuinely be the best and most relevant source to cite.

But the researchers behind the study nonetheless suggest that the prevalence of extreme cases revealed in their analysis debases the value of citation metrics as a whole – metrics that are often used as a proxy for a scientist's standing and output quality (not to mention employability).

"With very high proportions of self-citations, we would advise against using any citation metrics since extreme rates of self-citation may herald also other spurious features," the authors write.

"These need to be examined on a case-by-case basis for each author, and simply removing the self-citations may not suffice."

It's far from the first time researchers have highlighted serious problems with the way we rate the products of scientific endeavour.

In recent years, scientists have identified technical flaws hidden within citation systems, revealed shortcomings in how we rank science journals, and uncovered serious concerns about citation solicitations.

Others have noticed bizarre citation glitches that shouldn't exist at all, and observed other unsettling systemic trends that cast a shadow over a citation's worth.

Amidst this mess, Ioannidis and his team hope their new data "will help achieve a more nuanced use of metrics" that enables the community as a whole to more easily identify and curtail the improper impact of self-citations and citation farms.

Others, meanwhile, suggest the way to fix this is to move away from quantitative metrics altogether, and instead take a qualitative approach to evaluating scientific work.

"When we link professional advancement and pay attention too strongly to citation-based metrics, we incentivise self-citation," psychologist Sanjay Srivastava from the University of Oregon, who wasn't involved in the study, told Nature.

"Ultimately, the solution needs to be to realign professional evaluation with expert peer judgement, not to double down on metrics."

The findings are reported in PLOS Biology.