To make AI fair, here’s what we need to learn to do

Starting in 2013, the Dutch government used an algorithm to wreak havoc on the lives of 25,000 parents. The software was supposed to predict which people were most likely to commit childcare benefit fraud, but the government didn’t wait for proof before penalizing families and demanding that they repay years of allowances. Families were flagged on the basis of “risk factors” such as having a low income or dual nationality. As a result, tens of thousands of people were needlessly impoverished, and more than 1,000 children were placed in foster care.

From New York to California and the European Union, many regulations on artificial intelligence (AI) are in the works. The intention is to promote fairness, accountability and transparency, and to avoid tragedies similar to the Dutch child benefit scandal.

But that won’t be enough to make AI fair. There must be practical know-how on how to build AI so that it does not exacerbate social inequalities. In my view, this means defining clear ways for social scientists, affected communities and developers to work together.

Currently, developers who design AI work in different fields from the social scientists who can anticipate what might go wrong. As a sociologist focusing on inequality and technology, I rarely get the opportunity to have a productive conversation with a technologist, or with my fellow social scientists, that goes beyond flagging problems. When I look at conference proceedings, I see the same thing: very few projects integrate social needs with engineering innovation.

To stimulate fruitful collaborations, mandates and approaches must be designed more effectively. Here are three principles that technologists, social scientists, and affected communities can apply together to produce AI applications that are less likely to distort society.

Include lived experience. Vague calls for broader participation in building AI systems miss the point. Nearly everyone who interacts online – joining Zoom calls or clicking reCAPTCHA boxes – is already feeding AI training data. The goal should be to obtain the most relevant input from the most relevant participants.

Otherwise, we risk participation-washing: superficial engagement that perpetuates inequality and exclusion. One example is the EU AI Alliance: an online forum, open to everyone, designed to provide democratic feedback to the AI expert group appointed by the European Commission. When I joined in 2018, it was an unmoderated echo chamber of mostly men exchanging opinions, not representative of the EU population, the AI industry or the relevant experts.

In contrast, social work researcher Desmond Patton of Columbia University in New York has built a machine-learning algorithm to help identify Twitter posts related to gang violence that draws on the expertise of Black people who have experience with gangs in Chicago, Illinois. These experts review and correct the labels that underlie the algorithm. Patton calls his approach contextual social media analysis (see go.nature.com/3vnkdq7).

Shift power. AI technologies are typically designed at the behest of those in power – employers, governments, trade brokers – which makes job applicants, parole applicants, customers and other users vulnerable. To fix this, power must shift. Those affected by AI should not merely be consulted from the start; they should select which problems to solve and guide the process.

Disability activists have already pioneered this kind of equitable innovation. Their mantra “Nothing about us without us” means that those who are affected play a leading role in the development of technology, its regulation and its implementation. For example, activist Liz Jackson developed the transcription app Thisten when she saw her community needed real-time captions at the SXSW film festival in Austin, Texas.

Check AI assumptions. Regulations, such as the December 2021 New York City law that regulates the sale of AI used in hiring, increasingly require AI to pass audits intended to flag bias. But some of the guidelines are so broad that audits could end up validating oppression.

For example, pymetrics, a company in New York, uses neuroscience-based games to assess job applicants by measuring their “cognitive, social, and behavioral attributes.” An audit found that the company had not violated US anti-discrimination law. But it did not consider whether the games are a reasonable way to assess suitability for a job, or what other dynamics of inequality might be introduced. This is not the kind of audit we need to make AI fairer.

We need AI audits to weed out harmful technologies. For example, two colleagues and I developed a framework in which qualitative work examines the assumptions on which an AI is built, and those assumptions then form the basis of the technical part of an AI audit. This informed an audit of Humantic AI and Crystal, two AI-based personality tools used in hiring.
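To make this concrete, here is a minimal sketch in Python of what the technical part of such an audit might look like once qualitative work has surfaced an assumption worth testing – for instance, the assumption that a personality score describes the candidate rather than the file format of their CV. This is an illustration only, not the published framework: the scoring function, trait names and threshold are hypothetical placeholders, and no real Humantic AI or Crystal interface is being called.

```python
# Hypothetical sketch only: `score_fn` stands in for whatever scoring interface
# an audited tool exposes; it is not a real Humantic AI or Crystal API.
from typing import Callable, Dict, Tuple

# (resume_text, file_format) -> per-trait personality scores
ScoreFn = Callable[[str, str], Dict[str, float]]


def check_format_stability(
    score_fn: ScoreFn,
    resume_text: str,
    formats: Tuple[str, ...] = ("pdf", "docx", "txt"),
) -> Dict[str, float]:
    """Score identical resume content submitted in different file formats and
    report, per trait, the spread between the highest and lowest score.
    A large spread suggests the tool is measuring the format, not the person,
    violating the assumption identified in the qualitative step."""
    scores_by_format = [score_fn(resume_text, fmt) for fmt in formats]
    spreads = {}
    for trait in scores_by_format[0]:
        values = [scores[trait] for scores in scores_by_format]
        spreads[trait] = max(values) - min(values)
    return spreads


if __name__ == "__main__":
    # Toy stand-in scorer, just to show the check running end to end.
    def fake_scorer(text: str, fmt: str) -> Dict[str, float]:
        base = (len(text) % 10) / 10
        return {
            "openness": base,
            "conscientiousness": base + (0.1 if fmt == "pdf" else 0.0),
        }

    resume = "Jane Doe, data analyst, 5 years of experience"
    for trait, spread in check_format_stability(fake_scorer, resume).items():
        verdict = "UNSTABLE" if spread > 0.05 else "ok"
        print(f"{trait}: spread = {spread:.2f} [{verdict}]")
```

The point of the framework is that the qualitative step decides which assumptions deserve checks like this one; the code merely automates the comparison once an assumption has been made explicit.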

Each of these principles can be applied intuitively, and they will reinforce one another as technologists, social scientists and members of the public learn to implement them. Vague mandates won’t work, but with clear frameworks we can eliminate AI that perpetuates discrimination against the most vulnerable and focus on building AI that makes society better.

Competing interests

M.S. is a member of the advisory board of the Carnegie Council Artificial Intelligence & Equality Initiative and of the faculty of the Fellowships at Auschwitz for the Study of Professional Ethics.
