Despite the important and growing role of artificial intelligence in many parts of modern society, there are very few policies or regulations governing the development and use of artificial intelligence systems in the United States, which has led to decisions and situations that have drawn criticism.
Google fired an employee who publicly raised concerns about how a certain type of artificial intelligence could contribute to environmental and social problems. Other AI companies have developed products used by organizations such as the Los Angeles Police Department, where they have been shown to reinforce existing racially biased policies.
There are some government recommendations and guidelines regarding the use of artificial intelligence. But in early October 2022, the White House Office of Science and Technology Policy added substantially to the federal guidance by issuing the Blueprint for an AI Bill of Rights.
The Office of Science and Technology Policy says the safeguards described in the document should be applied to all automated systems. The blueprint sets out “five principles that should guide the design, use and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can serve as a guide to help prevent AI systems from restricting the rights of US residents.
As a computer scientist who studies the ways in which humans interact with artificial intelligence systems – and in particular how anti-Blackness mediates those interactions – I find this guide a step in the right direction, even if it has some holes and is not enforceable.
Improving systems for all
The first two principles address the safety and effectiveness of AI systems and the significant risk that AI will promote discrimination.
To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts, but also with the direct input of the people and communities who will use and be affected by the systems. Exploited and marginalized communities are often left to deal with the consequences of AI systems without having much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting impact on those communities.
The second principle focuses on the known problem of algorithmic discrimination within AI systems. A well-known example of this problem is how mortgage approval algorithms discriminate against minorities. The document calls on companies to develop AI systems that do not treat people differently based on their race, gender or other protected class status. It suggests that companies use tools such as equity assessments that can help evaluate how an AI system may impact members of exploited and marginalized communities.
These first two principles address the big issues of bias and fairness in the development and use of artificial intelligence.
Privacy, transparency and control
The last three principles describe ways to give people more control when interacting with artificial intelligence systems.
The third principle concerns data privacy. It aims to ensure that people have more say in how their data is used and are protected from abusive data practices. This section is meant to address situations where, for example, companies use deceptive design to manipulate users into giving up their information. The blueprint calls for practices such as collecting personal information only with a person’s consent, and asking for that consent in a way the person can understand.
The next principle focuses on “notice and explanation.” It emphasizes the importance of transparency – people should know when an AI system is being used, as well as the ways in which AI contributes to outcomes that could affect them. Take the New York City Department of Children’s Services, for example. Research has shown that the agency uses external AI systems to predict child abuse, systems that most people don’t know are being used, even when they themselves are being investigated.
The AI Bill of Rights provides a guideline that people in New York who are affected by the AI systems used in this case must be informed that AI was involved and have access to an explanation of what the AI did. Research has shown that building transparency into AI systems can reduce the risk of error or abuse.
The final principle of the AI Bill of Rights describes a framework for human alternatives, consideration and fallback. The section stipulates that people should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable.
As an example of how the last two principles could work together, consider someone applying for a mortgage. They would be informed if an AI algorithm was used to evaluate their application, and they would have the option of opting out of that use of AI in favor of review by an actual person.
Smart guidelines, no enforceability
The five principles set out in the AI Bill of Rights address many of the issues experts have raised about the design and use of AI. However, it is a non-binding document and is not currently enforceable.
It may be too much to hope that industry and government agencies will apply these ideas in exactly the ways the White House is calling for. If the ongoing regulatory battle over data privacy is any guide, tech companies will continue to push for self-regulation.
One other issue I see with the AI Bill of Rights is that it doesn’t directly address systems of oppression – such as racism or sexism – and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in health care have led to worse care for Black patients. I have argued that anti-Black racism must be directly addressed in the development of AI systems. While the AI Bill of Rights addresses ideas of bias and fairness, the lack of focus on systems of oppression is a notable gap and a known problem in AI development.
Despite these shortcomings, this blueprint could be a positive step toward better AI systems, and perhaps a first step toward regulation. A document like this, even if it isn’t policy, can be a powerful reference for people advocating for changes in the way an organization develops and uses AI systems.
Christopher Dancy, Associate Professor of Industrial and Manufacturing Engineering and Computer Science and Engineering, Penn State
This article is republished from The Conversation under a Creative Commons license. Read the original article.