There's no shortage of warnings from the scientific community that science as we know it is being drastically affected by the commercial and institutional pressure to publish papers in high-profile journals – and now a new simulation shows that deterioration actually happening.

To draw attention to the way good scientists are pressured into publishing bad science (read: sensational and surprising results), researchers in the US developed a computer model to simulate what happens when scientists compete for academic prestige and jobs.

In the model, devised by researchers at the University of California, Merced, all the simulated lab groups were honest – they didn't intentionally cheat or fudge results.

But they received greater rewards if they published 'novel' findings – as happens in the real world. They also had to expend greater effort to be rigorous in their methods – which would improve the quality of their research, but lower their academic output.

"The result: Over time, effort decreased to its minimum value, and the rate of false discoveries skyrocketed," lead researcher Paul Smaldino explains in The Conversation.

And what's more, the model suggests that the 'bad' scientists (if you will) who take shortcuts in response to the incentives on offer will end up passing their methods on to the next generation of scientists who train in their labs – creating, in effect, an evolutionary dynamic that the study authors call "the natural selection of bad science".
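To make that dynamic concrete, here's a minimal toy simulation of the selection process in Python. It is not the authors' published model – the lab counts, payoff rules, and parameter values below are all illustrative assumptions – but it captures the core mechanism: labs that spend less effort on rigour publish more, and publication count alone decides which labs 'reproduce'.

```python
import random

# A toy version of the dynamic described above: labs differ only in
# 'effort' (rigour), publications alone are rewarded, and successful
# labs pass their methods on. All parameter values are assumptions
# made for illustration, not the published model.

BASE_RATE = 0.1    # fraction of tested hypotheses that are actually true
N_LABS = 100
GENERATIONS = 500

def run_lab(effort):
    """One round of research; returns (publications, false discoveries)."""
    # Rigour is slow: higher effort means fewer studies per round...
    n_studies = max(1, int(10 * (1.1 - effort)))
    # ...but it catches spurious results: a lower false-positive rate.
    false_pos_rate = 0.5 * (1 - effort)
    pubs = false_pubs = 0
    for _ in range(n_studies):
        if random.random() < BASE_RATE:          # a genuinely true effect
            pubs += 1
        elif random.random() < false_pos_rate:   # a false discovery, still 'novel'
            pubs += 1
            false_pubs += 1
    return pubs, false_pubs

# Each lab starts with a random effort level between 0.1 and 1.0.
labs = [random.uniform(0.1, 1.0) for _ in range(N_LABS)]

for gen in range(GENERATIONS):
    results = [run_lab(e) for e in labs]
    payoffs = [pubs for pubs, _ in results]   # only publication count is rewarded
    # Selection: the least-published lab is replaced by a slightly
    # mutated copy of the most-published one ('inheriting' its methods).
    worst = payoffs.index(min(payoffs))
    best = payoffs.index(max(payoffs))
    labs[worst] = min(1.0, max(0.01, labs[best] + random.gauss(0, 0.02)))
    if gen % 100 == 0:
        n_pubs = sum(p for p, _ in results)
        n_false = sum(f for _, f in results)
        share = n_false / n_pubs if n_pubs else 0.0
        print(f"gen {gen:3d}  mean effort {sum(labs) / N_LABS:.2f}  "
              f"false-discovery share {share:.2f}")
```

Run it and the mean effort drifts downward while the false-discovery share climbs – no dishonesty needed, just selection on output.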

"As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one's ability to do so will run rampant," Smaldino told Hannah Devlin at The Guardian.

It's certainly not the first time we've heard claims like this – although it's likely no researchers have actually run the numbers through a computer simulation quite like this before.

Science is at something of a crossroads at the moment, with researchers highlighting what's called the "reproducibility crisis".

Effectively, this crisis stems from the reporting of 'false discoveries' – hard-to-reproduce results that amount to statistical noise in the data, but which get singled out for publication because they're new, sensational, or somehow surprising.

These kinds of findings capture our human interest because of their novelty and shock factor – but they risk damaging the credibility of science, especially since scientists feel under pressure to embellish or skew their papers towards making these kinds of impressions.

But it's a vicious cycle, because these sorts of remarkable studies create a lot of attention and help researchers get published, which in turn helps them get grants from institutions to conduct more research.

"The cultural evolution of shoddy science in response to publication incentives requires no conscious strategising, cheating, or loafing on the part of individual researchers," Smaldino writes in The Conversation.

"There will always be researchers committed to rigorous methods and scientific integrity. But as long as institutional incentives reward positive, novel results at the expense of rigour, the rate of bad science, on average, will increase."

And the problem is compounded by quantitative measures designed to rate the importance of researchers and their papers – along with statistical thresholds such as the controversial p-value – since both can be misleading and exploited, creating all kinds of false impressions that ultimately hurt science.
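As a quick illustration of how such a threshold can be gamed (a toy example, not from the study): if a researcher tests 20 hypotheses that are all actually false, the chance that at least one clears p < 0.05 by luck alone is 1 - 0.95^20, or roughly 64 percent.

```python
import random

# Toy illustration of why a lone p < 0.05 threshold is easy to exploit:
# test enough true-null hypotheses and something 'significant' appears
# by chance. Numbers here are illustrative, not from the paper.
TRIALS = 10_000
TESTS_PER_STUDY = 20

lucky = 0
for _ in range(TRIALS):
    # Each null test comes out 'significant' with probability 0.05.
    if any(random.random() < 0.05 for _ in range(TESTS_PER_STUDY)):
        lucky += 1

print(f"Studies with at least one spurious 'discovery': {lucky / TRIALS:.0%}")
# Expected: about 1 - 0.95**20, i.e. roughly 64%
```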

"I agree that the pressure to publish is corrosive and anti-intellectual," neuroscientist Vince Walsh from University College London in the UK, who wasn't part of the study, told The Guardian.

"Scientists are just humans, and if organisations are dumb enough to rate them on sales figures, they will do discounts to reach the targets, just like any other sales person."

So, what's the solution? Well, it won't be easy, but Smaldino says we need to move away from assessing scientists quantitatively at an institutional level.

"Unfortunately, the long-term costs of using simple quantitative metrics to assess researcher merit are likely to be quite great," the researchers write in their paper. "If we are serious about ensuring that our science is both meaningful and reproducible, we must ensure that our institutions incentivise that kind of science."

In the meantime, studies like this that shine a critical spotlight on science – which are fairly 'novel' and attention-grabbing in themselves – may help keep people aware of just how big an issue this really is.

"The more people who are aware of the problems in science, and who are committed to improving its institutions," Smaldino told The Guardian, "the sooner and more easily institutional change will come."

The paper is published in Royal Society Open Science (link down at time of writing).