Like justice, science works best when it's blind. For more than two centuries, that's been the mantra for sound experimental design.

Hiding observations that risk introducing bias defines a standard of reliability in research. But a group of UK scientists have put forward an argument that in many cases, it could be a waste of effort, and one that might even do more harm than good.

Together with colleagues from the University of Edinburgh and the Centre for Public Health in Belfast, clinical researcher Rohan Anand from Queen's University Belfast makes a case that scientists should think long and hard before working blinding procedures into their experiments.

Their argument boils down to a question of cost versus benefit. We're quick to recognise the potential rewards of a blinded trial, but some of the less convenient consequences might mean it's not worth the fuss.

"Given that the number of new trials is increasing every year, with 25,000 registered since the start of 2019, we are concerned that a substantial amount of time, energy, and funding may be going into considering and implementing blinding without a sound rationale for it," Anand and his fellow researchers state in a recent Analysis article in The BMJ.

That 'sound' rationale is all too easy to take for granted. After all, science evolved as a system of checks and balances to ensure our best ideas explaining the Universe weren't fanciful dreams born of peer pressure and wishful thinking.

Along with replication in experimentation, positive and negative controls, p-values, and randomisation of test subjects, using naïve observers to report and measure variables is just one more way to ensure we don't mistake imagination for reason.

But none of these efforts come for free. Volunteers need to be recruited and screened, for instance, which as any postgrad knows is no easy task. Even then, they don't always stick around to the end.

When it comes to testing new drugs, this can become especially troublesome.

"Key reasons given by patients for not wanting to enrol in these trials were that they wanted a named medication or wanted to know what was in the tablets," the researchers point out.

If one in four of your volunteers expresses concern over the possibility of receiving a placebo instead of the real deal, for example, you either need to recruit more subjects – which takes even more time and resources – or accept that your study might be underpowered.
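The arithmetic behind that trade-off is straightforward: to end a trial with enough participants for a well-powered analysis, recruitment targets must be inflated to offset expected refusals and dropout. The sketch below illustrates that adjustment; the function name and the figures in it are illustrative assumptions, not numbers from the BMJ article.

```python
import math

# Hypothetical illustration: inflating a recruitment target to offset
# expected refusals/dropout. The 25% rate mirrors the "one in four"
# scenario above; the target of 300 is an assumed example.
def adjusted_sample_size(required_n, attrition_rate):
    """Return how many volunteers to recruit so that, after losing the
    given fraction, roughly required_n participants remain."""
    if not 0 <= attrition_rate < 1:
        raise ValueError("attrition_rate must be in [0, 1)")
    return math.ceil(required_n / (1 - attrition_rate))

# Needing 300 participants with one in four expected to decline or drop
# out means recruiting 400 instead.
print(adjusted_sample_size(300, 0.25))  # 400
```

In other words, a 25 percent attrition rate doesn't add 25 percent to the recruitment burden – it adds a third, and the burden grows sharply as attrition climbs.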

Even if you've found plenty of willing patients to study, those placebos need to look, feel, and even taste pretty authentic. In an ideal world, they'd even come with side effects for that full illusory experience.

A few sugar pills might hardly break the bank. But as Anand and his colleagues put it: "Money spent on blinding has opportunity costs if it reduces funding to optimise other features that would have more influence on the trial's robustness, such as the training of trial staff, boosting the sample size, and comprehensively measuring outcomes".

So you might have better blinding, but if it came at the expense of an adequate sample size or well-trained trial staff, it might have been for naught.

Financial resourcing aside, blind testing of therapies in clinical trials poses some serious ethical questions, especially where moratoriums on or adjustments to other treatments are enforced.

Withholding information from volunteers – even if they willingly consent – can also pose a moral dilemma.

At best, this withholding of details might do little more than affect how they behave, tarnishing any evidence they provide.

Blinding students to theoretical methods or new teaching tools in an educational experiment, for example, or hiding branding from potential customers in a market setting, risks producing evidence that doesn't reflect real-world settings.

"Minimising biases with blinding might weaken the ability to predict the future accurately, because blinding is unlikely to be used in routine practice," the team writes.

None of this is to say blinding is itself an intrinsically flawed tool.

Its auspicious origins amid a royal inquiry into the validity of an 18th century healing fad sound odd enough to feel almost apocryphal. If anything, its invention demonstrates its power in ensuring science resists succumbing to popularity contests.

But there is a risk of the pendulum swinging the other way, especially in a competitive 'publish or perish' landscape.

With an endless stream of experiments competing for attention in countless journals every year, treating blinding as a check-box that automatically makes an experiment appear rigorous could inadvertently undermine the very integrity of the process.

"Double blinded designs are not always ideal for providing a reliable answer to the trial's research question," the researchers sum up.

Science's greatest strength, of course, lies in judging the merit of an idea not solely against a list of criteria, but through a critical understanding of the history of debate attached to it.

Biases are a problem across the spectrum of science. Heading into the future, big data is becoming big news as we hunt for solutions in vast libraries of statistics using supposedly 'unbiased' algorithms – an assumption that can mask prejudices buried deep within the code.

Blinding will continue to be a powerful way of distinguishing fact from fantasy in science long into the future, just so long as it doesn't come at the expense of the very methodology that makes science so trustworthy.

This analysis was published in The BMJ.