Many organizations have developed AI ethics principles meant to guide developers of AI systems, but abstract principles can be difficult to operationalize. In response, some organizations have moved to AI ethics checklists. New research from Microsoft suggests that these checklists are easily misused, and that companies should instead focus on the specific cultural factors at play inside their organizations.
AI ethics principles encompass concepts such as bias and fairness, accountability, transparency, and privacy. Principles are good to have, but they mask the complexity of ethical decisions: differing assumptions, interpretations, personal experiences, and biases make it extraordinarily difficult for people to apply them consistently across the countless small decisions made every day in the software development process.
In the case of bias and fairness, there is now a range of tools to help AI developers with technical solutions, but on the whole AI ethics remains a complex sociocultural concept. Pursuing purely technical solutions therefore risks AI ethics considerations becoming too narrow or ineffective, a practice sometimes called "ethics washing." Checklists can offer a middle ground, providing a scaffold between high-level principles and granular technical tools.
Checklists are used to accomplish three goals:
- supporting task completion
- guiding decision making
- prompting critical conversations
The researchers found that the most important role of a checklist in AI is to prompt critical conversations. AI ethics efforts are often the result of ad-hoc processes driven by passionate individual advocates. In companies where the priority is fast-paced development and deployment, there can be a significant social cost to slowing things down to talk about fairness: it takes time and effort to involve more people in decisions and to wrestle with competing or ambiguous concerns. An AI ethics checklist can act as a "value lever," making it acceptable to reflect on risks, raise red flags, add extra work, and escalate decisions.
The research highlights some important dos and don'ts for AI ethics checklists:
- Introduce an AI ethics checklist in tandem with other organizational culture or processes, preferably at the start of an AI strategic change initiative.
- Use an ethics checklist as a way to prompt discussion and empower people who would otherwise not feel able to contribute to important conversations.
- Design a checklist in line with the way the software development process works in practice in the organization – introduce important checks and decisions at the points at which people can most easily deal with them. Use "pause points" so that teams apply the checklist at the right time in the lifecycle.
- Always have the practitioners involved in writing or customizing the checklist.
- If using someone else's checklist, check whether practitioners bundle steps or skip items they feel are redundant. While this is not inherently bad, it can result in people missing critical actions.
- Don't use binary "yes/no" questions when the potential risks are complex; this can turn an ethical decision into a deceptively simple compliance exercise.
- Don't expect individuals to make decisions alone. This is particularly important for technical evaluations, where it can result in ethics outcomes being reduced to purely technical solutions.
- Don't allow AI ethics to develop in an ad-hoc fashion: corridor conversations in which individual advocates raise concerns that are then fixed informally don't work over the long term.
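To make the guidance above concrete, the "pause points" and multi-reviewer recommendations could be encoded in a lightweight data structure that teams customize themselves. This is a minimal, hypothetical sketch, not anything prescribed by the research; all field names, prompts, and roles are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a checklist item tied to a lifecycle "pause point",
# with an open-ended prompt (not a yes/no question) and a group of reviewers,
# so no individual is expected to decide alone. All names are illustrative.

@dataclass
class ChecklistItem:
    pause_point: str   # lifecycle stage where the team pauses to use the item
    prompt: str        # open-ended question intended to spark discussion
    reviewers: list = field(default_factory=list)

    def needs_escalation(self) -> bool:
        # Fewer than two reviewers means a decision is being made alone,
        # which the research flags as a pattern to avoid.
        return len(self.reviewers) < 2

item = ChecklistItem(
    pause_point="pre-deployment",
    prompt="Which groups could be harmed by errors in this model, and how?",
    reviewers=["ml-engineer", "product-owner", "domain-expert"],
)
print(item.needs_escalation())  # False: multiple reviewers are involved
```

A structure like this keeps the checklist editable by the practitioners who use it, in line with the recommendation that teams write or customize their own items.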
People are increasingly concerned about the impact of AI, and it is a positive sign that so many companies and organizations are developing AI ethics principles and checklists. However, we need to learn from other fields, and from the people on the front line of AI development, to understand how human factors come into play. AI ethics checklists should be used as a way to engage in meaningful, sometimes difficult, conversations, not as a way to rush AI into deployment while complying with a simple set of rules.