Bias in Artificial Intelligence (AI) systems is not limited to how representative datasets are or to computational factors such as how algorithms are designed. A new report highlights that AI bias can also arise from human, institutional and societal factors.
The report, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” from the National Institute of Standards and Technology (NIST), recommends widening the search for sources of bias to improve our ability to identify and manage its harmful effects. In particular, it urges attention to broader societal factors that, the researchers found, are currently overlooked.
The NIST publication suggests that rooting out AI biases, whether introduced purposely or inadvertently, will require a holistic approach: addressing systemic, institutional and human biases as well as the fairness of data and algorithms.
The problem isn’t just what we see; the visible issues are only the tip of the iceberg. The report suggests that while statistical and computational biases may sit on the surface, we need to dig deeper to find human and systemic biases. It also acknowledges that addressing and managing the risks associated with AI bias remains a challenge.
“AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI,” says Reva Schwartz, principal investigator for AI bias and one of the report’s authors.
Bias in AI systems can be dangerous and can reduce public trust in AI applications. “These biases can negatively impact individuals and society by amplifying and reinforcing discrimination at a speed and scale far beyond the traditional discriminatory practices that can result from implicit human or institutional biases such as racism, sexism, ageism or ableism,” reads the NIST report.
Because of this growing realization of the potential harms AI bias can cause, awareness of the risks associated with bias in AI systems, and demand for efforts to reduce them, is growing like never before.