MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE – AI Bill of Rights Blueprint

In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.


Controlling these potential harms of artificial intelligence (AI) is “both necessary and achievable”, the US government has declared in launching its Blueprint for an ‘AI Bill of Rights’, which urges organizations adopting AI to consider its implications across five key areas.


🟩 Safe and Effective Systems: To ensure that an automated system is safe and effective, it should include safeguards to protect the public from harm in a proactive and ongoing manner; avoid the use of data inappropriate for or irrelevant to the task at hand, including reuse that could cause compounded harm; and demonstrate its safety and effectiveness.
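
As a rough illustration of what “proactive and ongoing” safeguards might look like in code, the sketch below checks incoming data against a task-specific allow-list and keeps an audit trail of outcomes. The field names, approved purpose, and record format are hypothetical and not part of the Blueprint; this is a minimal sketch, not a compliance recipe.

```python
from datetime import datetime, timezone

# Hypothetical allow-list of fields relevant to this task, and the purpose the
# data was originally approved for; both are illustrative only.
APPROVED_FIELDS = {"age", "diagnosis_code", "medication_history"}
APPROVED_PURPOSE = "treatment_recommendation"

def validate_input(record: dict, declared_purpose: str) -> dict:
    """Block data reuse for unapproved purposes and fields irrelevant to the task."""
    if declared_purpose != APPROVED_PURPOSE:
        raise ValueError(f"Data reuse blocked: purpose {declared_purpose!r} is not approved")
    unexpected = set(record) - APPROVED_FIELDS
    if unexpected:
        raise ValueError(f"Fields not relevant to this task: {sorted(unexpected)}")
    return record

def log_outcome(prediction, outcome, audit_log: list) -> None:
    """Ongoing monitoring: record predictions and observed outcomes for later review."""
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "prediction": prediction,
        "outcome": outcome,
    })
```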


🟩 Algorithmic Discrimination Protections: Any automated system should be tested to help ensure it is free from algorithmic discrimination before it can be sold or used. Protection against algorithmic discrimination should include proactive design to ensure equity, broadly construed. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. Proactive technical and policy steps need to be taken not only to reinforce those legal protections but also to extend beyond them, ensuring equity for underserved communities even in circumstances where a specific legal protection may not be clearly established. These protections should be instituted throughout the design, development, and deployment process.
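
One deliberately simplified form such pre-deployment testing could take is a selection-rate disparity check across groups, in the spirit of the long-standing four-fifths rule. The sketch below assumes binary predictions and a single protected attribute; real-world equity audits are far broader than this.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of positive outcomes (e.g. hired, approved) for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths_rule(predictions, groups, threshold=0.8):
    """Flag potential disparate impact if any group's selection rate falls
    below 80% of the highest group's rate."""
    rates = selection_rates(predictions, groups)
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values())

# Toy example: group B is selected far less often than group A, so the check fails.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(passes_four_fifths_rule(preds, groups))  # False
```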


🟩 Data Privacy: Traditional terms of service—the block of text that the public is accustomed to clicking through when using a website or digital app—are not an adequate mechanism for protecting privacy. The American public should be protected via built-in privacy protections, data minimization, use and collection limitations, and transparency, in addition to being entitled to clear mechanisms to control access to and use of their data—including their metadata—in a proactive, informed, and ongoing way. Any automated system collecting, using, sharing, or storing personal data should meet these expectations.
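
A minimal sketch of data minimization, assuming a hypothetical per-purpose allow-list of fields: anything not needed for the declared purpose is dropped before it is processed or stored. The purposes and field names are invented for illustration.

```python
# Hypothetical mapping from a declared purpose to the only fields it may use.
ALLOWED_FIELDS = {
    "order_fulfilment": {"name", "shipping_address"},
    "fraud_check": {"payment_token", "billing_postcode"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields needed for the declared purpose; drop everything else."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No collection basis defined for purpose {purpose!r}")
    return {key: value for key, value in record.items() if key in allowed}

raw = {
    "name": "A. Customer",
    "shipping_address": "1 Example Street",
    "browsing_history": ["page1", "page2"],   # not needed, so never retained
    "payment_token": "tok_123",
}
print(minimize(raw, "order_fulfilment"))  # only name and shipping_address survive
```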


🟩 Notice and Explanation: An automated system should provide demonstrably clear, timely, understandable, and accessible notice of use, as well as explanations of how and why a decision was made or an action was taken by the system.
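
To make the idea of a decision-level explanation concrete, the sketch below generates a plain-language notice from a hypothetical linear scoring model; the feature names, weights, and threshold are made up for illustration and do not come from the Blueprint.

```python
# Hypothetical feature weights for a toy linear credit-scoring model.
WEIGHTS = {
    "years_of_credit_history": 0.6,
    "missed_payments": -1.2,
    "income_to_debt_ratio": 0.9,
}

def explain_decision(features: dict, threshold: float = 1.0) -> str:
    """Return a plain-language notice naming the factors that drove the decision."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "declined"
    ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    reasons = ", ".join(f"{name} (impact {impact:+.2f})" for name, impact in ranked[:2])
    return (f"An automated system {decision} this application. "
            f"The factors that most influenced the decision were: {reasons}.")

print(explain_decision({
    "years_of_credit_history": 2,
    "missed_payments": 1,
    "income_to_debt_ratio": 0.5,
}))
```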


🟩 Human Alternatives, Consideration, and Fallback: An automated system should provide demonstrably effective mechanisms to opt out in favor of a human alternative, where appropriate, as well as timely human consideration and remedy through a fallback process. Systems used in sensitive domains call for additional human oversight and safeguards, and any human-based portions of the system should be trained and assessed to ensure their effectiveness.
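
As one illustration of how opt-out and fallback routing might be wired into a system, the sketch below sends a case to human review when the person opts out, when the domain is sensitive, or when the model's confidence is low. The Case fields, domain list, and confidence threshold are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical list of sensitive domains that always receive human oversight.
SENSITIVE_DOMAINS = {"criminal_justice", "health", "employment"}

@dataclass
class Case:
    domain: str
    model_confidence: float
    user_opted_out: bool = False

def route(case: Case) -> str:
    """Decide whether the automated decision stands or goes to a trained reviewer."""
    if case.user_opted_out:
        return "human_review"        # opt-out in favor of a human alternative
    if case.domain in SENSITIVE_DOMAINS:
        return "human_review"        # additional safeguards in sensitive domains
    if case.model_confidence < 0.9:  # hypothetical confidence threshold
        return "human_review"        # timely fallback when the system is unsure
    return "automated_decision"

print(route(Case(domain="retail", model_confidence=0.95)))  # automated_decision
print(route(Case(domain="health", model_confidence=0.99)))  # human_review
```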


Source:

Blueprint for an AI Bill of Rights | The White House
