Not everything is knowable. In a world where it seems like artificial intelligence and machine learning can figure out just about anything, that might seem like heresy – but it's true.

At least, that's the case according to a new international study by a team of mathematicians and AI researchers, who found that despite the seemingly boundless potential of machine learning, even the cleverest algorithms are bound by the constraints of mathematics.

"The advantages of mathematics, however, sometimes come with a cost… in a nutshell… not everything is provable," the researchers, led by first author and computer scientist Shai Ben-David from the University of Waterloo, write in their paper.

"Here we show that machine learning shares this fate."

Awareness of these mathematical limitations is often tied to the famous Austrian mathematician Kurt Gödel, who in the 1930s developed what are known as the incompleteness theorems – two results showing that any consistent system of mathematical axioms rich enough to describe arithmetic contains statements that can never be proved or disproved within it.

Now, Ben-David and his colleagues' research indicates that machine learning can be limited by the same kind of undecidability.

In this argument, whether a machine can learn a task at all – a property called learnability – can hinge on mathematical statements that are unprovable. In other words, the question of learnability can itself be undecidable: no algorithm can settle it with a definitive true-or-false answer.
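To see what an undecidable problem looks like, it helps to recall the textbook example: the halting problem (which is not part of this study, just the classic illustration). No program can correctly decide, for every program and input, whether that program eventually stops. Here is a minimal Python sketch of the standard argument, with the names invented purely for illustration:

```python
# Classic halting-problem diagonalization (textbook illustration,
# not code from the study). Suppose a perfect decider existed:
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("No such function can exist.")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    else:
        return        # halt immediately if the oracle says "loops"

# Feeding paradox to itself defeats the oracle either way:
# if halts(paradox, paradox) were True, paradox would loop forever;
# if it were False, paradox would halt. So no correct halts() can exist.
```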

"For us, it was a surprise," senior researcher and mathematician Amir Yehudayoff, from the Technion-Israel Institute of Technology, explained to Nature.

In their research, the team investigates a machine learning problem they call 'estimating the maximum' (EMX), in which a website seeks to display targeted advertising to the visitors who browse the site most frequently – even though it isn't known in advance which visitors will show up.
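Stripped of the advertising story, the task can be stated roughly as follows – a simplified paraphrase of the paper's setup, with the notation chosen here for illustration:

```latex
% Simplified sketch of the EMX problem (notation is illustrative, not the
% paper's verbatim). Given a collection $\mathcal{F}$ of subsets of a domain
% $X$ (sets of visitors an ad could target) and an unknown distribution $P$
% over $X$, use finitely many i.i.d. samples $x_1, \dots, x_m \sim P$ to pick
% a set whose probability mass is nearly the best available:
\[
  \text{find } F^{\ast} \in \mathcal{F}
  \quad \text{such that} \quad
  P(F^{\ast}) \;\geq\; \sup_{F \in \mathcal{F}} P(F) \;-\; \varepsilon .
\]
```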

According to the researchers, the mathematical problem to be solved in this case bears similarities to a machine learning framework known as probably approximately correct learning (aka PAC learning), but it is also tied to a famous mathematical statement called the continuum hypothesis, a question Gödel also investigated.
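For comparison, PAC learning asks for a learner that is 'probably' (with high probability) 'approximately correct' (low error). In standard textbook form – not quoted from the paper – the requirement reads:

```latex
% Textbook definition of PAC learnability: a hypothesis class $\mathcal{H}$
% is PAC-learnable if some learner $A$, given enough i.i.d. samples $S$ from
% any distribution $D$, outputs a low-error hypothesis with high probability:
\[
  \Pr_{S \sim D^{m}}\!\left[ \mathrm{err}_{D}\!\left( A(S) \right) \leq \varepsilon \right]
  \;\geq\; 1 - \delta
  \qquad \text{for all } \varepsilon, \delta \in (0,1),\;
  m \geq m_{\mathcal{H}}(\varepsilon, \delta).
\]
```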

Like the incompleteness theorems, the continuum hypothesis concerns mathematics that can never be proved true or false from the standard axioms, and given the conditions of the EMX example, at least, machine learning can run into the same perpetual stalemate.
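For reference, the continuum hypothesis itself is simple to state, even though Gödel and, later, Paul Cohen showed it can be neither proved nor disproved from the standard axioms of set theory:

```latex
% The continuum hypothesis (CH): there is no set whose cardinality lies
% strictly between that of the integers and that of the real numbers:
\[
  \neg \exists S : \; \aleph_{0} < |S| < 2^{\aleph_{0}} .
\]
% Gödel (1940) showed CH cannot be disproved from the ZFC axioms;
% Cohen (1963) showed it cannot be proved from them either.
```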

"[The researchers] identify a machine-learning problem whose fate depends on the continuum hypothesis, leaving its resolution forever beyond reach," mathematician and computer scientist Lev Reyzin from the University of Illinois at Chicago, who wasn't involved with the work, writes in a commentary on the research for Nature.

Of course, the parameters of the EMX problem are not the same as those machine learning contends with in most real-world situations, but academically, the new paper serves as a reminder that even the forefront of computer science can't escape its esoteric mathematical foundations.

"Machine learning has matured as a mathematical discipline and now joins the many subfields of mathematics that deal with the burden of unprovability and the unease that comes with it," Reyzin writes.

"Perhaps results such as this one will bring to the field of machine learning a healthy dose of humility, even as machine-learning algorithms continue to revolutionise the world around us."

The findings are reported in Nature Machine Intelligence.