The way our brains learn new information has puzzled scientists for decades. We come across so much of it every day, so how do our brains store what's important and forget the rest more efficiently than any computer we've built?

It turns out that this could be controlled by the same laws that govern the formation of stars and the evolution of the Universe, because a team of physicists has shown that, at the level of individual neurons, the learning process could ultimately be limited by the laws of thermodynamics.

"The greatest significance of our work is that we bring the second law of thermodynamics to the analysis of neural networks," lead researcher Sebastian Goldt from the University of Stuttgart in Germany told Lisa Zyga from Phys.org. 

The second law of thermodynamics is one of the most famous physics laws we have, and it states that the total entropy of an isolated system always increases over time.
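In symbols, that statement boils down to a single inequality: for an isolated system, the change in total entropy can never be negative.

```latex
\Delta S_{\mathrm{total}} \geq 0
```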

Entropy is a thermodynamic quantity that's often described as a measure of disorder in a system. What that means is that, without extra energy being put into a system, transformations can't be reversed - things will get progressively more disordered, simply because disordered arrangements are far more probable than ordered ones.

Entropy is currently the leading hypothesis for why the arrow of time only ever marches forwards. The second law of thermodynamics says that you can't un-crack an egg, because it would lower the Universe's entropy, and for that reason, there will always be a future and a past.

But what does this have to do with the way our brains learn? Just like the bonding of atoms and the arrangement of gas particles in stars, our brains find the most efficient way to organise themselves.

"The second law is a very powerful statement about which transformations are possible - and learning is just a transformation of a neural network at the expense of energy," Goldt explained to Zyga.

If you keep in mind that learning, at its most basic level, comes down to billions of neurons firing inside our brains, then looking for patterns in that energy use becomes a little easier.

To model how this works, Goldt and his team set up a neural network - a computer system that models the activity of neurons in the human brain.

"Virtually every organism gathers information about its noisy environment and builds models from those data, mostly using neural networks," the team writes in Physical Review Letters.

What the researchers wanted to know was how neurons filter out the noise and respond only to important sensory input.
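To get a feel for the kind of setup involved, here's a minimal sketch (in Python, not the authors' actual model) of a single neuron trying to respond to a true signal that arrives buried in random noise - the names, thresholds, and noise level here are illustrative assumptions.

```python
import numpy as np

# A minimal sketch, NOT the paper's actual model: a single neuron receives a
# true binary signal (0 or 1) corrupted by Gaussian noise and has to decide
# whether the signal was really there.
rng = np.random.default_rng(0)

def noisy_input(true_signal, noise_level=1.0):
    """What the neuron actually 'sees': the truth plus random noise."""
    return true_signal + noise_level * rng.standard_normal()

def neuron_response(x, threshold=0.5):
    """Fire (1) if the noisy input crosses a threshold, stay silent (0) otherwise."""
    return 1 if x > threshold else 0

# How often does a simple threshold recover the underlying signal?
signals = rng.integers(0, 2, size=10_000)
hits = sum(neuron_response(noisy_input(s)) == s for s in signals)
print(f"accuracy on noisy inputs: {hits / len(signals):.2f}")
```

Turn the noise level up and the neuron's accuracy drops towards chance - that trade-off between noise and reliable responses is the filtering problem the researchers were studying.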

They based their models on something called Hebbian theory, which describes how neurons adapt during the learning process. It's often summarised by the saying "cells that fire together, wire together" - which basically means that when neurons repeatedly fire at the same time, the connections between them grow stronger, reinforcing what we've learned.
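As a rough illustration (not the specific model used in the paper), the classic Hebbian rule simply nudges a connection weight upwards whenever the neurons on both ends of it are active at the same time - the learning rate and activity pattern below are made up for the example.

```python
import numpy as np

# A toy illustration of Hebbian learning ("cells that fire together, wire
# together"): a connection weight is strengthened whenever the input and
# output neurons are active at the same time. The learning rate and activity
# here are arbitrary choices for the example.
rng = np.random.default_rng(1)

def hebbian_update(weight, pre_active, post_active, learning_rate=0.01):
    """Grow the weight in proportion to correlated pre/post activity."""
    return weight + learning_rate * pre_active * post_active

w = 0.0
for _ in range(1_000):
    pre = int(rng.random() > 0.5)   # the input neuron fires about half the time
    post = pre                      # the output neuron tends to fire along with it
    w = hebbian_update(w, pre, post)

print(f"connection weight after correlated firing: {w:.2f}")  # roughly 0.01 * 500
```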

Using this model, the team showed that learning efficiency was constrained by the total entropy production of a neural network.

They found that the slower a neuron learns, the less heat and entropy it produces, which increases its efficiency.
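Results of this kind in stochastic thermodynamics are usually expressed as a bound: roughly speaking, the information a neuron gains about its inputs can't exceed the total entropy it produces along the way. Very schematically - this is the general shape of such a bound, not the paper's exact expression - the learning efficiency looks something like:

```latex
\eta \;=\; \frac{\text{information learned}}{\Delta S_{\mathrm{total}}} \;\leq\; 1
```

A slow learner produces less entropy for the same amount of learned information, so it sits closer to that limit.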

What does that mean for you and me? Unfortunately, the result doesn't tell us a whole lot about how to learn better or smarter.

It also doesn't provide any magical solutions for how to create computers that can learn as efficiently as the human brain - these particular results can only be applied to simple learning algorithms that don't use feedback.

But what the researchers have done is offer a new perspective on the study of learning, and provide evidence that our neurons follow the same thermodynamic laws as the rest of the Universe.

They're not the first ones to think about our brains in terms of thermodynamics, either.

Last year, a team from France and Canada proposed that consciousness could simply be a side effect of entropy, and our brains organising themselves in the most efficient manner.

"We find a surprisingly simple result: normal wakeful states are characterised by the greatest number of possible configurations of interactions between brain networks, representing highest entropy values," they wrote at the time

We're still a long way off understanding how our brains work - and these are just two studies out of many that have tried to explain why our neurons connect and function the way they do.

But every new clue takes us closer to unlocking our brains' enormous power - and hopefully to learning how to harness it in artificial systems.

"Having a thermodynamic perspective on neural networks gives us a new tool to think about their efficiency and gives us a new way to rate their performance," Goldt told Zyga.

The research has been published in Physical Review Letters, and you can read the full paper online here.