Here's What's In President Biden's AI Bill Of Rights
Artificial intelligence (AI) might be something that makes you think of the far-distant future, but it's actually already all around us. You can find it in your car, your smart home, your phone, on the other end of the customer support line you call when those things break, on your bank's website, at your doctor's office, and possibly behind the security cameras that are watching you go about your day. AI might be inescapable, but the law has a lot of catching up to do. In an attempt to fix this, the White House has released what it calls a "Blueprint for an AI Bill of Rights," aimed at protecting the public from the actions of artificial intelligence.
While the Biden Administration's blueprint is a start, the reception to it has been mixed, and many don't believe it goes far enough. MIT Technology Review has quoted a number of industry professionals highlighting the lack of solid legislation, including Russell Wald, director of policy for the Stanford Institute for Human-Centered AI, who laments the "lack of coherent federal policy" on the subject.
The blueprint directly accuses "technology, data, and automated systems" of limiting "our opportunities and preventing our access to critical resources or services." The suggestions outlined in the document aim to combat that and protect American citizens from the negative traits some AI models have been known to exhibit. It is worth noting that the White House's "AI Bill of Rights" is not an executive order issued by the president, nor is it in any way law. The document is essentially a set of recommendations the Biden Administration has made that lawmakers could use as a framework for AI-related legislation they may draft in the future.
The bill has nothing to do with sentient AI
If you've seen a film from the "Terminator" franchise, "The Matrix," or "2001: A Space Odyssey," you might be aware of what a highly advanced rogue AI could potentially do — and you wouldn't be alone. Key industry figures like Elon Musk have frequently warned about the dangers of AI, with the Tesla CEO petitioning the UN to ban its use in war and helping fund OpenAI, a non-profit dedicated to the study of the concept.
There is also concern among non-billionaires, with 43% of SlashGear's readers claiming that they find the prospect of a sentient AI worrying. The closest the document comes to saving us all from Skynet is a section on "safe and effective systems," which states: "Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community."
On the flip side, there's also the question of rights for the AI itself, and the bill doesn't address that either. In recent months, we've had an AI that has developed its own language, and another that was so convincingly sentient that a Google engineer got it a lawyer. Still, this blueprint isn't law, and even if it were, sentient AI still wouldn't have any legal rights.
Potential discrimination is a concern
As AI becomes a bigger part of day-to-day life, discrimination is a real worry, and that concern isn't unfounded. There have been numerous examples over the years of AI exhibiting racism, sexism, and other discriminatory traits. One recent example involves a study conducted by Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington. The AI used in the study was trained using a neural network model, which pulls information from a large data source like the internet to teach an AI how to react to and navigate through situations. What the researchers conducting the study ended up with was an AI that identified women as "homemakers," Black men as "criminals," and Latino men as "janitors."
The blueprint outlines what it calls "algorithmic discrimination," describing it as an individual or group being unfairly singled out and discriminated against because of one of the many "classifications protected by law." These classifications include things like race, nationality, veteran status, sexuality, and disability. It goes on to suggest: "Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way."
There's an attempt to protect data and keep people informed
A whole section of the blueprint relates to how data is gathered and used. A large part of the data protection section revolves around limiting what data companies can collect, with the AI Bill of Rights suggesting collection be restricted to the minimum amount required to perform necessary functions. There is also a section referencing surveillance, which should be interesting for anyone who has a home assistant blinking away on their bedside table. It says: "Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access."
There is also a requirement for companies to provide notice on why an AI is being used and how it could impact you. The notice section seems to suggest an AI making a decision should also explain how and why that decision was reached. This part makes sense, as it makes the rest of the bill far more workable. As an extreme example, if an AI was racist, it would be far easier to spot if it justified its decision by saying "because you are X skin color." The issue could then be flagged. A blanket "no" may leave people assuming other factors led to an outcome and allow problems to go unchecked for far longer.
AI might not take all of our jobs
Two groups of people may be happy with one section of the framework: those who worry AI is going to take everyone's jobs, and those who can't stand talking to automated systems on the phone. Both could have some cause for celebration if the AI Bill of Rights ever makes it into law. One of the sections states that a "human alternative" should be available "where appropriate." At best, this could mean you can tell an AI to patch you through to a living human if things get frustrating. A more conservative reading of this section would suggest it just means an AI's decision can never be final, and you should have the option of asking a person to review the situation and override the AI's choice if necessary. It also states that the humans involved should receive appropriate training for their respective roles.
For truly important decisions, you may not even have to ask, as the bill says, "Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions."
Whether the AI Bill of Rights — or elements of it — makes it into law remains to be seen. But AI is already a huge part of the modern world, and it's only going to grow from here, so you would expect legislation to follow closely behind.