Sunday 29 December 2019

Exploring AI Fairness for People with Disabilities

This year’s International Day of Persons with Disabilities emphasizes participation and leadership. In today’s fast-paced world, more and more decisions affecting participation and opportunities for leadership are automated: selecting candidates for a job interview, approving loan applications, or admitting students to college. There is a growing trend towards using artificial intelligence (AI) methods such as machine learning models to inform, or even make, these decisions. This raises important questions about how such systems can be designed to treat all people fairly, especially people who already face barriers to participation in society.


[Image: AI Fairness silhouette]

A Diverse Abilities Lens on AI Fairness


Machine learning finds patterns in data and compares new inputs against these learned patterns. The potential of these models to encode bias is well known. In response, researchers are beginning to explore what this means in the context of disability and neurodiversity. Mathematical methods for identifying and addressing bias work best when a disadvantaged group can be clearly identified. However, in some contexts it is illegal to gather data relating to disabilities. Adding to the challenge, individuals may choose not to disclose a disability or other difference, yet their data may still reflect their status. This can lead to biased treatment that is difficult to detect, so we need new methods for handling such hidden biases.
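To see why these methods depend on an identifiable group, consider a standard group fairness measure such as the disparate impact ratio. The sketch below is a minimal, hypothetical illustration (the decision data and the 0.8 rule-of-thumb threshold are assumptions for the example, not findings from the workshops): the audit is only computable when a group label is explicitly recorded.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates: disadvantaged group vs. everyone else.

    y_pred: 0/1 model decisions (1 = favorable outcome, e.g. interview offered)
    group:  0/1 flags (1 = member of the disadvantaged group)
    A common rule of thumb treats ratios below 0.8 as a red flag.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    member = np.asarray(group, dtype=bool)
    rate_group = y_pred[member].mean()   # selection rate inside the group
    rate_rest = y_pred[~member].mean()   # selection rate for everyone else
    return rate_group / rate_rest

# Hypothetical hiring decisions. The audit is only possible because the
# group label is recorded -- the very data that is often undisclosed or
# illegal to collect when the group is defined by disability.
decisions = [1, 0, 1, 1, 1, 1, 0, 1]
group     = [0, 1, 0, 0, 1, 0, 1, 1]
print(disparate_impact(decisions, group))  # 0.5 -> well below the 0.8 threshold
```

If the group column is absent or undisclosed, the same disparity is still present in the decisions but invisible to this kind of audit, which is exactly the hidden-bias problem described above.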

Our diversity of abilities, and combinations of abilities, poses a challenge to machine learning solutions that depend on recognizing common patterns. It is important to consider small groups that are not strongly represented in training data. Even more challenging, some individuals are unique: their data does not look like anyone else’s.
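To make this concrete, here is a small, hypothetical sketch (the two-dimensional "interaction features" and the synthetic training data are illustrative assumptions): an individual whose data sits far from every training example produces a high novelty score, and that is precisely where a pattern-matching model has nothing reliable to fall back on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for training data: 2-D "interaction features"
# (say, typing speed and dwell time) drawn from one dominant pattern.
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))

def novelty_score(x, train):
    """Distance to the nearest training example; a large value means
    this person's data looks like no one the model has seen."""
    return np.min(np.linalg.norm(train - x, axis=1))

typical_user = np.array([0.1, -0.2])  # resembles the training population
unique_user  = np.array([6.0, 5.0])   # an interaction style unlike anyone's

print(novelty_score(typical_user, train))  # small: many nearby examples
print(novelty_score(unique_user, train))   # large: no similar pattern exists
```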

First Workshop on AI Fairness


To stimulate progress on this important topic, IBM sponsored two workshops on AI Fairness for People with Disabilities. The first workshop, in 2018, gathered individuals with lived experience of disability, advocates, and researchers. Participants identified important areas of opportunity and risk, such as employment, education, public safety, and healthcare. That workshop resulted in a recently published report outlining practical steps towards accommodating people with diverse abilities throughout the AI development lifecycle: for example, reviewing proposed AI systems for potential impact, and designing in ways to correct errors and raise fairness concerns. Perhaps the most important step is to include diverse communities in both development and testing. This should improve robustness and help develop algorithms that support inclusion.

ASSETS 2019 Workshop on AI Fairness


The second workshop was held at this year’s ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), and brought together thinkers from academia, industry, government, and non-profit groups. The organizing team of accessibility researchers from industry and academia selected seventeen papers and posters representing the latest research on AI methods and the fair treatment of people with disabilities in society. Alexandra Givens of Georgetown University kicked off the program with a keynote talk outlining the legal tools currently available in the United States to address algorithmic fairness for people with disabilities. Speakers then explored topics including fairness in AI models as applied to disability groups, reflections on definitions of fairness and justice, and research directions to pursue. Going forward, key topics in continuing these discussions are:

◉ The complex interplay between diversity, disclosure and bias.

◉ Approaches to gathering datasets that represent people with diverse abilities while protecting privacy.

◉ The intersection of ableism with racism and other forms of discrimination.

◉ Oversight of AI applications.
