Wednesday 6 January 2021


Designing AI Applications to Treat People with Disabilities Fairly


Alan was surprised not to get an interview for a banking management position. He had all the experience and excellent references. The bank’s artificial intelligence (AI) recruitment screening algorithm had not selected him as a potential candidate, but why? Alan is blind and uses specialized software to do his job. Could this have influenced the decision? 

AI solutions must account for everyone. As artificial intelligence becomes pervasive, high-profile cases of racial or gender bias have emerged. Discrimination against people with disabilities is a longstanding problem in society. It could be reduced by technology or exacerbated by it. IBM believes we have a responsibility, as technology creators, to ensure our technologies reflect our values and shape lives and society for the better. We participated in the European Commission's High-Level Expert Group on AI and its Assessment List for Trustworthy AI (ALTAI). The ALTAI provides a checklist of questions for organizations to consider when developing and deploying AI systems, and it emphasizes the importance of access and fair treatment regardless of a person's abilities or ways of interacting.

Often, challenges in fairness for people with disabilities stem from human failure to fully consider diversity when designing, testing and deploying systems. Standardized processes like the recruitment pre-screening system Alan faced may be based on typical employees, but Alan may be unlike most candidates for this position. If this is not taken into account, there is a risk of systematically excluding people like Alan. To address this risk, we offer ways to develop AI-based applications that treat people with disabilities fairly by embedding ethics into AI development from the very beginning, set out below as our 'Six Steps to Fairness'. Finally, we present considerations for policymakers about balancing innovation, inclusion and fairness in the presence of rapidly advancing AI-based technologies.

Six steps to fairness


1. Identify risks. Who might be impacted by the proposed application?

◉ Is this an area where people with disabilities have historically experienced discrimination? If so, can the project improve on the past? Identify what specific outcomes there should be, so these can be checked as the project progresses.

◉ Which groups of people might be unable to provide the expected input data (e.g. clear speech), or have data that looks different (e.g. use sign language, use a wheelchair)? How would they be accommodated?

◉ Consider whether some input data might be proxies for disability (for example, gaps in employment history or the use of assistive technology).
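
To make the proxy check in the last bullet concrete, here is a minimal, hypothetical sketch: it screens candidate features against a synthetic disability indicator and flags any feature that correlates strongly with it. The data, feature names and 0.3 threshold are illustrative assumptions, not part of any IBM toolkit.

```python
# Illustrative sketch only: flag features that may act as proxies for disability.
# The data, feature names, and 0.3 threshold are hypothetical assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
candidates = pd.DataFrame({
    "years_experience": rng.normal(10, 4, n),
    "employment_gap_months": rng.poisson(3, n),
    "uses_assistive_software": rng.integers(0, 2, n),
})
# Synthetic sensitive attribute, available here only for auditing purposes.
disability = rng.integers(0, 2, n)

# Make one feature deliberately correlated with the sensitive attribute,
# so the audit below has something to find.
candidates["employment_gap_months"] += disability * rng.poisson(6, n)

PROXY_THRESHOLD = 0.3  # assumed cut-off for flagging a feature for review
for column in candidates.columns:
    r = np.corrcoef(candidates[column], disability)[0, 1]
    flag = "REVIEW as possible proxy" if abs(r) >= PROXY_THRESHOLD else "ok"
    print(f"{column:>25s}  corr={r:+.2f}  {flag}")
```

Features flagged this way are not necessarily unusable, but they deserve review with the stakeholders identified in the next step.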

2. Involve stakeholders. Having identified potentially impacted groups, involve them in the design process. Approaches to developing ethical AI applications for persons with disabilities include actively seeking the ongoing involvement of a diverse set of stakeholders (Cutler et al., 2019 – Everyday Ethics for AI) and a diversity of data to work with. It may be useful to define a set of 'outlier' individuals and include them in the team, following an inclusive design method. These 'outliers' are people whose data may look very different from the average person's data. What defines an outlier depends on the application. For example, in speech recognition, it could be a person with a stutter or a person with slow, slurred speech. Outliers can also be people who belong to a group but whose data look different. For example, Alan may use different software from his peers because it works better with his screen-reader technology. By defining outlier individuals up front, the design process can consider, at each stage, what their needs are, whether there are potential harms that need to be avoided, and how to achieve this.

3. Define what it means for this application to be ‘fair’. In many jurisdictions, fair treatment means that the process using the application allows individuals with disabilities to compete on their merits, with reasonable accommodations. Decide how fairness will be measured for the application itself, and also for any AI models used in the application. If different ability groups are identified in the data, group fairness tests can be applied to the model. These tests measure fairness by comparing outcomes between groups. If the difference between the groups is below a threshold, the application is considered to be fair. If group membership is not known, individual fairness metrics can be used to test whether ‘similar’ individuals receive similar outcomes. With the key stakeholders, define the metric for the project as a whole, including accommodations, and use diverse individuals for testing.
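
As a rough illustration of the group fairness test described above, the sketch below compares selection rates between two hypothetical ability groups and applies an assumed 0.8 ratio threshold, similar in spirit to the disparate impact metric in IBM's AI Fairness 360. The outcome data and threshold are assumptions for the example only.

```python
# Illustrative sketch of a group fairness test on screening outcomes.
# Group labels, outcomes, and the 0.8 threshold are assumed for the example.
import numpy as np

# 1 = candidate advanced to interview, 0 = screened out
outcomes_group_a = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])  # e.g. no disclosed disability
outcomes_group_b = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1])  # e.g. disclosed disability

selection_rate_a = outcomes_group_a.mean()
selection_rate_b = outcomes_group_b.mean()

# Disparate-impact-style ratio: below the (assumed) 0.8 threshold,
# the screening step should be reviewed and accommodations re-examined.
ratio = selection_rate_b / selection_rate_a
print(f"selection rates: {selection_rate_a:.2f} vs {selection_rate_b:.2f}, ratio {ratio:.2f}")
print("fails group fairness threshold" if ratio < 0.8 else "within threshold")
```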

4. Plan for outliers. People are wonderfully diverse, and there will always be individuals who are outliers, not represented in the AI model’s training or test data. Design solutions that can also address fairness for small groups and individuals, and support reasonable accommodations. One important step is providing explanations and ways to report errors or appeal decisions. IBM’s AI Explainability 360 toolkit includes ‘local explanation’ algorithms that describe factors influencing an individual decision. With an explanation, users like Alan can gain trust that the system is fair, or steps can be taken to address problems.
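
AI Explainability 360 offers several local explanation algorithms; as a library-agnostic illustration of the underlying idea, the sketch below perturbs one hypothetical candidate's features and reports which ones most change a toy model's score. The model, data and feature names are stand-ins, not the toolkit's own API.

```python
# Library-agnostic sketch of a 'local explanation': perturb one candidate's
# inputs and see which features move the model's score the most.
# The toy model and feature names are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["years_experience", "references_score", "assessment_score"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)
model = LogisticRegression().fit(X, y)

candidate = X[0]
baseline = model.predict_proba([candidate])[0, 1]
for i, name in enumerate(feature_names):
    perturbed = candidate.copy()
    perturbed[i] += X[:, i].std()          # nudge one feature by one std dev
    delta = model.predict_proba([perturbed])[0, 1] - baseline
    print(f"{name:>18s}: score change {delta:+.3f}")
```

An explanation like this, phrased in plain language, gives a candidate such as Alan something concrete to question or appeal.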

5. Test for model bias and mitigate.

◉ Develop a plan for tackling bias in source data in order to avoid perpetuating previous discriminatory treatment. This could include boosting representation of people with disabilities, adjusting for bias against specific disability groups (a minimal reweighting sketch follows this list), or flagging gaps in data coverage so the limits of the resulting model are explicit.

◉ Bias can come in at any stage of the machine learning pipeline. Where possible, use tools for detecting and mitigating bias during development. IBM’s AI Fairness 360 Toolkit offers many different statistical methods for assessing fairness. These methods require protected attributes, such as disability status, to be well defined in the data. This could be applied within large organizations when scrutinizing promotion practices for fairness, for example.

◉ Test with outliers, using input from key stakeholders. In recruitment and other contexts where candidate disability information is not available, statistical approaches to fairness are less applicable. Testing is essential to understand how robust the solution is for outlier individuals. Measure against the fairness metrics defined previously to ensure the overall solution is acceptable.
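
As a minimal illustration of the 'adjusting for bias' mitigation mentioned in the first bullet above, the sketch below computes per-example weights so that each combination of (assumed) disability group and outcome carries its expected share of the training signal. This mirrors the idea behind the Reweighing pre-processing algorithm in AI Fairness 360; the data and group labels here are assumptions for the example.

```python
# Sketch of reweighting training data so under-represented disability groups
# are not swamped during training. Data and group labels are hypothetical.
import numpy as np

# group: 0 = no disclosed disability, 1 = disclosed disability (assumed labels)
group = np.array([0] * 90 + [1] * 10)
label = np.random.default_rng(2).integers(0, 2, size=100)  # 1 = positive outcome

weights = np.ones_like(group, dtype=float)
for g in np.unique(group):
    for y in np.unique(label):
        mask = (group == g) & (label == y)
        # expected proportion if group and outcome were independent,
        # divided by the observed proportion (the same idea as Reweighing)
        expected = (group == g).mean() * (label == y).mean()
        observed = mask.mean()
        if observed > 0:
            weights[mask] = expected / observed

# These weights can be passed to most scikit-learn estimators via sample_weight.
print(np.round(weights[:5], 2), np.round(weights[-5:], 2))
```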

6. Build accessible solutions. Design, build and test the solution to be usable by people with diverse abilities, and to support accommodations for individuals. IBM’s Equal Access Toolkit provides detailed guidance and open-source tools to help teams understand and meet accessibility standards, such as the W3C Web Content Accessibility Guidelines (WCAG) 2.1.

By sharing these ‘six steps to fairness’, IBM aims to improve the fairness, accountability and trustworthiness of AI-based applications. Given the diversity of people’s abilities, these steps must be an integral part of every AI solution lifecycle.

Considerations for policymakers


IBM believes that the purpose of AI is to augment – not replace – human intelligence and human decision-making. We also believe that AI systems must be transparent, robust and explainable. Although the development and deployment of AI are still in their early stages, AI is a critical tool whose utility will continue to flourish over time.


This is why we call for a risk-based, use-case focused approach to AI regulation. Applying the same rules to all AI applications would not make sense, given AI’s many uses and the wide range of outcomes they produce. Thus, we believe that governments and industry must work together to strike an appropriate balance between effective rules that protect the public interest and the need to promote ongoing innovation and experimentation. With such a ‘precision regulation’ approach, we can answer expectations of fairness, accountability and transparency according to the role of the organization and the risk associated with each use of AI.

We also strongly support the use of processes, when employing AI, that allow for informed and empowered human oversight and intervention. Thus, to the extent that high-risk AI is regulated, we suggest that auditing and enforcement mechanisms focus on evidence that informed human oversight is appropriately established and maintained.

For more than 100 years, diversity, inclusion and equality have been critical to IBM’s culture and values. That legacy, and our continued commitment to advance equity in a global society, have made us leaders in diversity and inclusion. Guided by our values and beliefs, we are proud to foster an environment where every IBMer is able to thrive because of their differences and diverse abilities, not in spite of them. This does not – and should not – change with the introduction and use of AI-based tools and processes. Getting the balance right between fairness, precision regulation, innovation, diversity and inclusion will be an ongoing challenge for policymakers worldwide.

Source: ibm.com
