Animal models are unavoidable, if unfortunate, necessities in modern research. In light of our ethical regard for furry, feathered, and scaly test subjects, we would hope every life spent contributes vital data that expands our knowledge.

A study of research carried out at the University Medical Center Utrecht in the Netherlands more than a decade ago suggests as few as a quarter of the animals requested in 67 ethics applications were later represented in a final publication.

The rest simply never made it into a peer-reviewed study, potentially lost as a part of what's metaphorically described as the "file-drawer problem."

The drive among researchers to publish (lest they perish) has fuelled competition for public attention over the decades. Studies that fall short of expectations, whether through failed methods or simply uninteresting results, often don't see the light of day.

This can be a real problem when science relies on assessing the spread of evidence. Unfortunately, there's just no easy way to tell how many studies are abandoned for lack of ongoing interest.

Some investigations have found that roughly 12 to 30 percent of phase II and III clinical trials make it to publication, with around half making their data publicly available. Others report that more than 90 percent of clinical trials are eventually published.

This is a wide discrepancy, suggesting we're nowhere close to understanding the full scope of the problem.

The selective publication of animal studies is even less well understood. Details of any intended animal models are required in applications to ethics boards, but these aren't freely available and, for good reason, are often confidential.

There are other ways to sniff out possible bias in reporting, though. One evaluation of preclinical neurology studies, for example, found the number of studies reporting beneficial treatments to be far higher than what we might expect from animal studies.

To expand our understanding of such a fundamental problem, a team of Dutch medical researchers tracked a selection of animal studies conducted at three of their university's research departments in 2008 and 2009.

With more than ten years passing since the applications were made, the researchers could be confident anything that hadn't been published was unlikely to be in the future.

Of the 67 applications that received ethical approval, 30 resulted in full-text papers and 41 in conference abstracts, meaning around 60 percent of the applications were represented in completed research.

The tally of animals mentioned in the applications added up to 5,500 individual subjects and included a mix of small animals like mice, rats, and rabbits, and larger ones like pigs, dogs, and sheep.

Following the administrative paper trail, only 1,471 of the animals could be connected with a final data point. While around half of the larger animals being tested on were included, just 23 percent of the smaller animals contributed to published results.

There's no simple way to learn the exact fates of these test subjects. Details outlined in an ethical application might not make it as far as any actual testing.

A follow-up survey issued to the researchers behind the applications found that at least one manuscript described a completed study that had yet to be published.

But the answers left plenty of room for some real concerns.

"The most frequently reported reasons for non-publication were a lack of statistical significance, the study being a pilot study and technical problems with the animal model," the authors write in their report.

While the study is a mere snapshot of biomedical research at one university, the results make for a sobering wake-up call that more or less reflects the conclusions of a previous investigation conducted with the cooperation of more than 450 researchers.

Publication biases are a major problem on their own, admittedly in some fields of research more than others. There are potential solutions, such as registering all work before undertaking any actual research. Making data more freely available and reducing the pressure to publish could also help reduce the bias against negative results.

But in an age when public concerns over the role animals play in research are rising, researchers will need to increasingly justify the benefits of every mouse, rat, rabbit, or dog in their charge.

Of course, not every study will make it into a prestigious journal. Problems arise, budgets are cut, and sometimes experiments just go pear-shaped.

That doesn't mean we can't do better.

This research was published in BMJ Open Science.