We need legal protections from AI’s risks
The first mortgage-backed security trader I met in the summer of 2007 wore a rumpled T-shirt and a look of consternation while telling me that he’d just quit. I’d recently been working at a housing clinic, so I knew something strange was going on with risky mortgages, but he was adamant that I didn’t have a clue about the scale of the mess.
Surely, I asked, banking regulations would prevent the worst of it? He laughed.
We know what happened next. The financial markets collapsed, devastating individual lives and the global economy. We learned that without consequences for reckless behavior, the powerful have every incentive to chase massive profits, knowing that others will pay the price if things go wrong.
Now we’re embarrassingly on track to repeat the same mistakes with artificial intelligence. As in the run-up to 2008, we’ve let powerful systems shape our daily lives with little understanding of their workings and almost no say in how they’re used.
AI can now decide whether you get a mortgage, how long you go to prison and whether you’re evicted from public housing for minor rule breaches. These systems scan the transactions you make online, influence the products you buy and mediate the information you consume.
But this is just the start. AI chatbots weren’t widely used 18 months ago; now researchers can produce long-form videos from a prompt. AI agents, which work without needing constant human oversight, already exist (for example, your social media feed), but the next frontier, their mass proliferation, is almost here.
At my company, we’re enthusiastic about what agentic AI can do, but we also understand firsthand how it can be misused and exacerbate the harms that we already see from less powerful AI systems.
A just society prepares for this. It doesn’t allow the powerful to take risks at our expense or exploit gaps in the law, the way banks and lenders did when they raked in wild profits while undermining financial markets.
But we’re falling shamefully behind on meaningful accountability. Elon Musk’s Tesla can sell cars with a feature called “full self-driving” and yet avoid responsibility when the feature causes a crash. Imagine if airlines or aircraft manufacturers could deny liability for crashing planes. This failure also explains why courts can still use AI to decide prison sentences, despite the demonstrated unreliability of such systems, and why law enforcement agencies use AI to predict crime, incorrectly and with racial bias, despite congressional scrutiny.
Most proposed AI laws ignore oversight and liability, instead trying to make the AI systems themselves safe. But this doesn’t make sense — you can’t make AI inherently safe, just as you can’t make power drills or cars or computers inherently safe. We need to use our laws and regulations to minimize long-term risks by addressing near-term harms.
To do this we need to make much better use of our existing institutions to regulate AI. I see three main priorities.
First, ban harmful actions outright. Governments and agencies should not surveil citizens without explicit justification, just as police cannot invade your home without a warrant.
Second, enshrine a right of explanation. The 1970 Supreme Court ruling Goldberg v. Kelly held that the government can’t arbitrarily withhold benefits without a right of explanation and appeal. As AI decision-making becomes more pervasive, we need to enshrine a similar right for the judgments that govern the most important areas of our lives.
Third, bolster our liability doctrines. The legal principle that if you hurt someone you must remedy the harm is centuries old — but we seem strangely reluctant to apply it to AI companies. This is a mistake.
A simple but powerful idea is to make AI developers above a certain threshold strictly liable for the misuse of their products, just as we do for injuries caused by product defects. We can soften this with a safe harbor that lets companies register ambiguous uses, conditioned on accepting government oversight and guidelines. Combined with an outright ban on egregious applications, putting the cost of AI’s harms on the people and companies causing them can shield us from the bulk of what can go wrong.
As builders of powerful AI systems, we reject the argument that laws governing AI will hold us back. It’s the opposite. Good rules level the playing field. They take the burden off individual entities to fight for the public good, instead letting us focus on building things that people find valuable in their lives within clear parameters mandated by a democratic process.
The reason to chase the wild dream of AI is to create a world worth celebrating. Better laws will help ensure that future includes everyone — not just the handful of billionaires who control it today.
Matt Boulos is head of policy and safety at the AI research company Imbue, a member of NIST’s US Artificial Intelligence Safety Institute Consortium.