We encounter artificial intelligence (AI) every day. AI describes computer systems able to perform tasks that normally require human intelligence. When you search for something on the internet, the top results you see are decided by AI.

Any recommendations you get from your favorite shopping or streaming websites will also be based on an AI algorithm. These algorithms use your browsing history to find things you might be interested in.
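To make that idea concrete, here is a minimal sketch of a content-based recommender in Python. Everything in it (the history, the catalogue, the tags, the scoring rule) is invented for illustration; real recommendation systems are vastly more sophisticated.

```python
from collections import Counter

# Toy content-based recommender: rank catalogue items by how well their
# tags overlap with the interests implied by a user's browsing history.
# All names and data here are hypothetical.

browsing_history = ["running shoes", "fitness tracker", "trail shoes"]

# Pretend each item a user viewed maps to a few interest categories.
history_tags = {
    "running shoes":   {"fitness", "footwear"},
    "fitness tracker": {"fitness"},
    "trail shoes":     {"outdoor", "footwear"},
}

catalogue = {
    "hiking boots": {"outdoor", "footwear"},
    "yoga mat":     {"fitness", "indoor"},
    "water bottle": {"outdoor", "fitness"},
    "office chair": {"furniture", "indoor"},
}

# Count how often each category appears in the user's history...
interests = Counter()
for item in browsing_history:
    interests.update(history_tags[item])

# ...then rank catalogue items by how strongly their tags match.
def score(tags):
    return sum(interests[t] for t in tags)

ranked = sorted(catalogue, key=lambda item: score(catalogue[item]), reverse=True)
print(ranked)  # items sharing the most categories with the history come first
```

The point is only that the "recommendation" falls out of counting patterns in your past behaviour; nothing in the code understands what a shoe is.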

Because targeted recommendations are not particularly exciting, science fiction prefers to depict AI as super-intelligent robots that overthrow humanity. Some people believe this scenario could one day become reality. Notable figures, including the late Stephen Hawking, have expressed fear about how future AI could threaten humanity.

To address this concern, we asked 11 experts in AI and computer science: "Is AI an existential threat to humanity?" There was an 82 percent consensus that it is not. Here is what we found out.

How close are we to making AI that is more intelligent than us?

The AI that currently exists is called 'narrow' or 'weak' AI. It is widely used in applications like facial recognition, self-driving cars, and internet recommendations. It is called 'narrow' because each of these systems can learn and perform only a very specific task.

They often perform these tasks better than humans; famously, Deep Blue became the first AI to beat a world chess champion when it defeated Garry Kasparov in 1997. However, they cannot apply what they have learnt to anything beyond that specific task (Deep Blue can only play chess).

Another type of AI is called Artificial General Intelligence (AGI). This is defined as AI that mimics human intelligence, including the ability to think and to apply that intelligence to many different problems. Some people believe that AGI is inevitable and will arrive within the next few years.

Matthew O'Brien, a robotics engineer from the Georgia Institute of Technology, disagrees: "the long-sought goal of a 'general AI' is not on the horizon. We simply do not know how to make a general adaptable intelligence, and it's unclear how much more progress is needed to get to that point".

How could a future AGI threaten humanity?

Whilst it is not clear when or if AGI will come about, can we predict what threat it might pose to us humans? An AGI learns from experience and data, as opposed to being explicitly told what to do. This means that, when faced with a new situation it has not seen before, we may not be able to completely predict how it will react.
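A toy illustration of that unpredictability, using an ordinary curve-fitting model rather than anything like an AGI (all numbers here are invented): a model that looks well-behaved on inputs like its training data can produce wild outputs on inputs unlike anything it has seen.

```python
import numpy as np

# A model that learns from data, rather than from explicit instructions,
# can behave unpredictably on inputs far from its training experience.

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 20)          # training inputs, all in [0, 1]
y_train = np.sin(2 * np.pi * x_train)    # the behaviour to learn

coeffs = np.polyfit(x_train, y_train, deg=7)   # flexible model, fit to data

print(np.polyval(coeffs, 0.5))    # inside the training range: close to sin(pi) = 0
print(np.polyval(coeffs, 10.0))   # far outside it: a huge, meaningless number
```

Nothing about the model's good behaviour on familiar inputs tells you what it will do on unfamiliar ones; that has to be tested separately.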

Dr Roman Yampolskiy, a computer scientist from the University of Louisville, also believes that "no version of human control over AI is achievable", as it is not possible for an AI to be both autonomous and controlled by humans. Not being able to control super-intelligent systems could be disastrous.

Yingxu Wang, a professor of Software and Brain Sciences from the University of Calgary, disagrees, saying that "professionally designed AI systems and products are well constrained by a fundamental layer of operating systems for [safeguarding] users' interest and wellbeing, which may not be accessed or modified by the intelligent machines themselves."

Dr O'Brien adds: "just like with other engineered systems, anything with potentially dangerous consequences would be thoroughly tested and have multiple redundant safety checks."

Could the AI we use today become a threat?

Many of the experts agreed that AI could be a threat in the wrong hands. Dr George Montanez, an AI expert from Harvey Mudd College, highlights that "robots and AI systems do not need to be sentient to be dangerous; they just have to be effective tools in the hands of humans who desire to hurt others. That is a threat that exists today."

Even without malicious intent, today's AI can be threatening. For example, racial biases have been discovered in algorithms that allocate health care to patients in the US. Similar biases have been found in facial recognition software used for law enforcement. These biases have wide-ranging negative impacts despite the 'narrow' ability of the AI.

AI bias comes from the data it is trained on. In the cases of racial bias above, the training data was not representative of the general population. Another example happened in 2016, when an AI-based chatbot was found to be sending highly offensive and racist content. This turned out to be because people were sending the bot offensive messages, which it learnt from.
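As a sketch of the mechanism (with entirely hypothetical groups, labels, and counts), even a trivially simple learned rule inherits whatever skew its training data contains:

```python
from collections import Counter

# Toy illustration of bias inherited from training data. The "model" just
# learns the most common outcome for each group it saw during training.

train = ([("A", "approve")] * 90 +   # group A: plentiful examples
         [("B", "deny")] * 6 +       # group B: barely represented...
         [("B", "approve")] * 4)     # ...and skewed toward denials

by_group = {}
for group, label in train:
    by_group.setdefault(group, Counter())[label] += 1

# The learned rule: predict each group's majority label from training.
model = {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

print(model)  # {'A': 'approve', 'B': 'deny'}
# Group B gets a blanket 'deny' purely because its few training examples
# leaned that way: the bias is in the data, not in any malicious code.
```

No one wrote a biased rule here; the skew in the data became the rule. The same dynamic, at much larger scale, is what produced the biased health care and facial recognition systems described above.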

The takeaway:

The AI that we use today is exceptionally useful for many different tasks.

That doesn't mean it is always positive: it is a tool which, if used maliciously or incorrectly, can have negative consequences. Despite this, it currently seems unlikely to become an existential threat to humanity.

Article based on 11 expert answers to this question: Is AI an existential threat to humanity?

This expert response was published in partnership with independent fact-checking platform Metafact.io.