Biased AI and ethical AI - Another reason why we need diversity in tech

Everyone knows by now that AI is the greatest thing since sliced bread, and that there is currently a race between companies to be the first on the AI train. If you snooze, you lose, and your competitors will find a way to use AI to simplify internal processes and empower their customers before you know it. There is no question that AI has immense possibilities and that imagination is the limit, but amid all this there is something I feel is not talked about enough at Simployer: AI bias and ethical AI.
Karoline Brynildsen, Team Manager Development | Wednesday, May 3, 2023

Earlier this year, ODA and Abelia held their annual award ceremony where they named “Norway’s 50 top tech women” for 2023 [1]. The theme for this year’s ceremony was AI, and in the introduction of the program Mala Wang-Naveen (communications officer at SINTEF Digital) said, “an example of technology that discriminates is artificial intelligence” (translated to English). This really got me thinking. Yes, diversity in tech has always been important, but now it is more important than ever. How can we make the best use of AI without a diverse group of people working with it? There are several examples of AI gone bad, and many of them are caused by biases in the algorithms and/or in the underlying data.

What is bias in AI?

“Bias in AI occurs when two data sets are not considered equal, possibly due to biased assumptions in the AI algorithm development process or built-in prejudices in the training data.” - ISACA [2]

Examples of this include facial recognition being more accurate for white men than for women of color [5], Amazon’s hiring algorithm that preferred male candidates [3], and Microsoft’s Twitter chatbot that turned nasty very fast [4].

Within health care, AI bias can lead to wrong or missing diagnoses for underrepresented groups, such as analyses of chest X-rays being less accurate for women than for men [6], or systems used to help identify skin cancer being more accurate for light-skinned patients than for dark-skinned patients [7].

Biases are often created by data gaps

Data gaps are the lack of adequate and accurate data, while outdated data is historical data that no longer represents reality. Data gaps occur when there is a lack of diversity and we assume that our own point of view is universal and the ultimate truth, while outdated data occurs when we only think about what has been, not about where we want to be. We need to be conscious of this when working with AI; there is always something here that we don’t know. How can we discover it and continue to improve? One practical starting point is sketched below.
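To make this concrete, here is a minimal sketch in Python of what such a check could look like: first look at how well each group is represented in the data, then look at how accurate a model is per group. It assumes pandas and scikit-learn, and the dataset, column names, and numbers are purely hypothetical illustrations, not Simployer data.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset with a sensitive attribute "group";
    # group B is heavily underrepresented on purpose.
    df = pd.DataFrame({
        "feature": [0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.7, 0.6] * 25,
        "group": (["A"] * 7 + ["B"]) * 25,
        "label": [0, 0, 1, 1, 1, 0, 1, 0] * 25,
    })

    # Step 1: check representation - this alone can reveal a data gap.
    print(df["group"].value_counts(normalize=True))

    # Step 2: train a simple model and check accuracy per group.
    train, test = train_test_split(df, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(train[["feature"]], train["label"])

    for group, part in test.groupby("group"):
        acc = accuracy_score(part["label"], model.predict(part[["feature"]]))
        print(f"group {group}: accuracy {acc:.2f}, n={len(part)}")

A large spread in per-group accuracy, or a group that barely appears in the data at all, is a signal to go looking for a data gap.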

How should Simployer work with AI bias and ethical AI?

Well, in my opinion the first step is to acknowledge that we have biases, both known and unknown, and that AI alone will not solve this issue; it will rather amplify it if we are not paying attention. Secondly, I strongly believe that we need to be conscious of this right from the start of implementing AI in our organization and products. We are creating solutions for a large variety of people, and there is a large variety of people working here. As an HR tech company, we should be at the front when it comes to inclusion and diversity, and work openly with this when it comes to AI as well. And thirdly: you do not know what you do not know. Having diversity in the teams that select the datasets, write the algorithms, and test the outcome will increase the chances of creating features that fit all our customers and users, and decrease the chances of data gaps and biases. It is definitely not a guarantee though, so having routines to continuously check and question the outcome is also an important factor here; a small sketch of such a check follows below.
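As an illustration of what such a routine could look like, here is a small Python sketch that compares selection rates across groups (a demographic parity check). The predictions, group labels, and threshold are hypothetical assumptions made for the example, not a recommendation of one specific metric or cutoff.

    from collections import defaultdict

    def selection_rates(predictions, groups):
        """Share of positive predictions per group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred)
        return {g: positives[g] / totals[g] for g in totals}

    def parity_gap(predictions, groups):
        """Difference between the highest and lowest selection rate."""
        rates = selection_rates(predictions, groups)
        return max(rates.values()) - min(rates.values())

    # Hypothetical output of a hiring model: 1 = invited to interview.
    predictions = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0]
    groups = ["men"] * 5 + ["women"] * 5

    print("selection rates:", selection_rates(predictions, groups))
    gap = parity_gap(predictions, groups)
    print(f"parity gap: {gap:.2f}")
    if gap > 0.2:  # illustrative threshold, not a standard
        print("Large gap between groups - review the data and the model.")

Running a check like this regularly, on real outcomes rather than toy numbers, is one way to keep questioning what the AI produces instead of trusting it blindly.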

Lastly, I want to finish off with a couple of good articles to read if you want to dive deeper into the subject of biased AI and ethical AI, and please hit me up for a discussion whether you agree or disagree.

Karoline Brynildsen

Team Manager Development