How Businesses Can Help Reduce Bias In AI

Wendy Gonzalez is the CEO of Sama, the provider of accurate data for ambitious AI.

Bias is part of human nature. Everyone has it, consciously or unconsciously. But when it comes to training artificial intelligence software, bias can have substantial consequences.

AI is no longer a futuristic concept but a technology that touches, and frankly powers, much of what we rely on every day. From the software that lets us unlock phones with our faces to the algorithms behind personalized social media feeds, AI is a cornerstone of modern life. But it isn’t always accurate, and it doesn’t yet possess the discernment of the human mind.

Examples Of Bias In AI Applications: Gender And Racial Bias

Data scientist and Weapons of Math Destruction author Cathy O’Neil defined algorithms as “ways of using historical information to make a prediction about the future.” We use data—historical information—to teach machines how to learn and recognize different kinds of patterns. When that data is incomplete or not wholly representative, the results become skewed and biased.
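To make the point concrete, here is a minimal sketch, using scikit-learn on synthetic, hypothetical data, of how an unrepresentative training set skews a model: a classifier trained mostly on one group learns that group’s pattern and performs poorly on the underrepresented group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Group A dominates the training data; group B is barely represented.
X_a = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
X_b = rng.normal(loc=3.0, scale=1.0, size=(50, 2))
y_a = (X_a[:, 0] > 0).astype(int)    # the pattern that holds for group A
y_b = (X_b[:, 1] > 3.0).astype(int)  # a different pattern for group B

model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Scoring each group separately exposes the skew: the model fits the
# majority group well and performs near chance on the minority group.
print("Group A accuracy:", model.score(X_a, y_a))
print("Group B accuracy:", model.score(X_b, y_b))
```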

In 2014, Amazon began using an experimental recruiting tool, built by its own engineers, to screen resumes for top talent. The tool used AI to rate resumes from one to five stars, with the goal of automating the hiring process and quickly identifying high performers.

Unfortunately, the tool was trained primarily on resumes submitted by men. The algorithm soon learned a heavy preference for men’s resumes over women’s: it penalized resumes containing the word “women’s” and downgraded graduates of women’s colleges. Amazon discontinued the tool shortly thereafter.

And in 2018, the groundbreaking Gender Shades project by MIT researcher and digital activist Joy Buolamwini found that facial recognition algorithms from some industry leaders were least accurate when identifying women of color. As law enforcement increasingly adopts facial recognition, this high error rate can have far-reaching consequences for those who are misidentified.

Just this year, The Guardian published an investigative report on how various AI tools label images of women. It found that these tools were far more likely to label images of women as “racy” or inappropriate than similar images of men. As a result, women were shadow-banned on social media platforms, reducing their presence and voice. The tools then learn from that feedback loop, which can further entrench the underrepresentation of women.

So, what can business leaders do to ensure bias doesn’t creep into their use of AI in the future?

Use Large, Representative Data Sets

Machine learning algorithms are only as good as the data they are trained on. Ensuring your data sets are robust and inclusive of real-world situations is key. Synthetic data sets are helping to round out data gaps, but companies should ensure they are actively creating diverse and realistic data sets when building AI models.

For example, building autonomous vehicle models requires the system not only to distinguish people from animals or from signs depicting people, but also to recognize people of different sizes, shapes, skin tones, heights, styles of dress and so on.
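As a starting point, a simple audit of subgroup representation can flag gaps before training begins. The sketch below assumes a pandas DataFrame with a hypothetical "subgroup" column and an illustrative threshold; a real audit would use whichever attributes matter for the model’s deployment population.

```python
import pandas as pd

# Hypothetical labeled dataset with a 'subgroup' column describing
# attributes relevant to the model (e.g., skin tone, gender, age band).
df = pd.read_csv("training_data.csv")

MIN_SHARE = 0.05  # illustrative floor; set from your deployment population

# Compare each subgroup's share of the data against the floor.
for group, share in df["subgroup"].value_counts(normalize=True).items():
    flag = "UNDERREPRESENTED" if share < MIN_SHARE else "ok"
    print(f"{group}: {share:.1%} {flag}")
```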

Validate Your Model

Gather and incorporate feedback on datasets from a diverse team. People from different backgrounds will ask different questions and interact with models in different ways, helping surface problems before a model is in production.

Once a model is in production, ensure your team is gathering feedback from end users and incorporating it regularly into model retraining.
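One way to operationalize this is disaggregated evaluation: rather than a single aggregate score, report metrics per demographic slice, in the spirit of the Gender Shades audit. A minimal sketch, with hypothetical column names:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def validate_by_group(results: pd.DataFrame) -> pd.DataFrame:
    """Per-group accuracy for a frame with hypothetical
    'group', 'label' and 'prediction' columns."""
    rows = []
    for group, slice_ in results.groupby("group"):
        rows.append({
            "group": group,
            "n": len(slice_),
            "accuracy": accuracy_score(slice_["label"],
                                       slice_["prediction"]),
        })
    # The worst-performing slice, not the average, decides readiness:
    # a wide gap is the signal to collect more data and retrain.
    return pd.DataFrame(rows).sort_values("accuracy")
```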

Not every issue can be caught in the testing phase, which is why diverse user feedback is critical to catching bias in real-world use. In 2016, Snapchat’s infamous “doggy” filter exploded in popularity. Unfortunately, darker-skinned users often found the filter failed to recognize their faces, leaving them unable to participate in a popular social media trend. Facial recognition technology has its pitfalls, but had extensive and diverse validation methods been used, it’s possible this instance of bias could have been avoided.

While eliminating bias in AI entirely may be impossible, it’s important for companies at the forefront of the technology to work actively to reduce it. Diversifying data, incorporating feedback from many reviewers and proactively validating models before they go into production are just a few ways we can create more responsible and ethical AI tools.

