Biden Proclaims AI Bill of Rights (Kinda/Sorta)

Oct 14, 2022 | Techonomy

You’re riding down the road in your future autonomous car when it hits someone. Is it your fault? A faulty car part? A bad program developer? Or a faulty AI algorithm? Or say a physician assisted by AI makes a fatal diagnostic mistake because of errors in an AI-generated database. Perhaps an AI-generated message hurts a vulnerable teen so deeply it leads to suicide. These are not purely hypothetical examples; risks like these come closer to reality by the day. AI is already everywhere, quietly working to see if you qualify for a home loan, if you’ve cheated on an exam, or if you’re the right candidate for a job. It’s even helping lawmakers, drafting contracts, and suggesting sentences to judges based on massive databases of case law.

As AI assumes more and more previously human tasks, ethicists, futurists, and legal minds are getting apoplectic, scurrying to get out ahead of the issues. In real life, the Bill of Rights, the first ten amendments to the U.S. Constitution, ratified in 1791, was created to broadly guarantee such rights as the freedoms of speech, assembly, and worship.

In the U.S., we now have the AI equivalent, kinda/sorta. The Biden Administration recently released what it’s calling a “Blueprint for an AI Bill of Rights.” Think of it as a framework that points out areas where we’re most likely to get ourselves into AI pickles. It identifies where we may experience algorithmic harms: in financial services, health care, hiring, and more.

In contrast to the good old Bill of Rights, created 200+ years ago by lawmakers, this AI Bill of Rights was developed by the White House Office of Science and Technology Policy (OSTP) with input from companies including Microsoft and Palantir, as well as AI auditing startups, human rights groups, and the general public.

Critics say “there is no bite behind the bark.” But this “Bill of Rights” is simply a framework of principles to guide our fast-moving transformation into an AI-assisted society. Most critics focused on what was missing, saying the blueprint gives short shrift to important issues like social media, communications and personal identification, and critical infrastructure. It doesn’t mention, for example, law enforcement, one of the more contentious areas where the use of AI is fraught. Facial recognition, to name one technology, has already led to wrongful arrests.

The harshest critics have called the AI Bill of Rights “toothless,” saying it’s little more than a broad white paper. Some say Europe has taken a more precise and detailed approach to identifying areas where AI may cause harm, and that the U.S. is late to the party. “Go easy,” OSTP Deputy Director Dr. Alondra Nelson told Wired in response to the criticism. “We too understand that principles aren’t sufficient. This is really just a down payment. It’s just the beginning and the start.” And it’s true: this is way more than we had before. The Biden Administration is making a good-faith effort to set necessary processes in motion.

Like the Bill of Rights we know, the framework for this AI version is simple yet profound. It provides AI companies with a set of instructive principles, even though they are non-binding and unenforceable. The OSTP is not a lawmaking body and is powerless, for now, to oversee how tech companies abide by the framework. The principles will likely be applied to government agencies first, and the tech industry, we hope, will feel some pressure to pay attention. Tech leaders like Microsoft President Brad Smith have already acknowledged the need to address such matters. (His 2019 book Tools and Weapons: The Promise and the Peril of the Digital Age is a welcome contrast to much of the tech industry’s insouciance.)

The AI Bill of Rights outlines five basic principles:

  1. Keep it Safe: This is more than an obligatory “do no harm” clause: it says that AI algorithms should be designed, tested, and approved by diverse communities of stakeholders.
  2. Notification and Explanation: You have the right to know whether you’re talking to an AI or a human, just as you have the right to know when you’re watching sponsored content.
  3. Data Privacy: You should be protected from abusive data practices via built-in protections, and you should have some control over your data and the metadata about how your data is used. The framework calls for simplicity and clarity in informing people.
  4. Discrimination: It’s well documented that bias is often built into algorithms, and just about any group working on AI ethics calls addressing that risk essential.
  5. Human Alternatives: The last principle caused us a bit of a chuckle. It says a person has the right to talk to a human if they disagree with an AI’s decision. As we’ve learned from years of tech support frustration: good luck with that one.

Lawfare’s blog reiterates that these principles will most likely be addressed by government agencies first. But it agrees this is a clarion call to the burgeoning AI economy that regulation and oversight are clearly on the way.

Artificial intelligence and machine learning are like chef’s knives. They are morally neutral. In skilled and well-intentioned hands, society could use them for all kinds of valuable purposes. The Biden Administration and OSTP deserve credit for starting a necessary process to improve the odds that AI does not lead us into a dystopia of machine control and manipulation.

Source: https://techonomy.com/welcome-to-bidens-new-kinda-sorta-ai-bill-of-rights/
