Microsoft is the latest tech company to try to tackle algorithmic bias — that is, artificial intelligence that was fed subpar data and came to mirror society's prejudices or unfair perspectives.

The company wants to create a tool that will detect and alert people to AI algorithms that may be treating them unfairly based on their race or gender, according to MIT Technology Review.

It's great that Microsoft, which touts itself as a company that creates AI to bring people together, is joining the ranks of Google and Facebook in building tools to catch improperly trained AI.

But Microsoft's new algorithm for finding biased algorithms can only flag existing problems. That means programs that can lead to increased police prejudice, for example, will still be built and used, just perhaps not for as long as they would be if the bias went undetected.

To truly create AI that is fair and benefits everyone, more care needs to be taken on the front end.

One possible way companies can cover their bases is through a third-party audit, where a tech company brings in an outside expert to review their algorithms and look for signs of bias either in the code itself or the data being fed into it.
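One simple check such an audit might run is a comparison of outcome rates across demographic groups, sometimes called a demographic-parity check. The sketch below is illustrative only: the group labels and decision data are invented, and real audits would use richer metrics and real model outputs.

```python
# Illustrative audit check: demographic parity difference.
# All data and group names below are invented for this example.

def selection_rate(outcomes):
    """Fraction of favorable (1) decisions within one group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in favorable-decision rates between groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Toy model decisions (1 = favorable) for two hypothetical groups:
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% favorable
}

gap = demographic_parity_gap(decisions)
print(gap)  # 0.5 — a gap this large is the kind of signal an auditor would flag
```

A single number like this can't prove an algorithm is biased, but a large, persistent gap tells an auditor where to dig into the code and training data.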

The idea of an AI audit, mentioned in the MIT Technology Review article, has gained traction elsewhere, and some AI companies have begun hiring auditors to take a look at their code.

But this also requires that the AI be simple enough for someone to walk in and spot problem areas, or that the auditor be well-versed in the code. For more complicated deep learning algorithms, this may not always be possible.

Another possible answer is better training for the people who actually create the AI, so they can better recognize their own opinions and prejudices and keep the algorithm from treating them as fact.

That doesn't mean coders are setting out to program racist machines. But because everyone holds some implicit biases, the world of technology would benefit from helping people better understand their own worldviews.

These are major solutions, and they would require a shift in attitude toward how technology is developed, even at the companies that want to build these filters and flags for biased algorithms.

In the meantime, it's a good sign that researchers and companies are starting to pay attention to the problem.

This article was originally published by Futurism.