Move fast and break things? Not again, and not with AI.
It was only 12 years ago that Mark Zuckerberg, CEO of Facebook, declared that the company’s culture was to “move fast and break things.” Perhaps he was thinking narrowly about software, and telling engineers that they shouldn’t be afraid to try new things just because it might break some old lines of code or program functions.
But in practice, we now know it’s not just software. What this motto gave birth to is an industry culture that systematically privatizes the upside benefits of technology (more revenues and higher stock price) while socializing the downside human risks (to privacy, mental health, civil discourse and culture).
The problem with “move fast and break things” is that the company now named Meta, along with the other tech giants that adopted this worldview, keeps the money and power for itself while letting everyone else — users, societies, communities and cultures — bear the costs of what gets broken along the way.
It’s painful for me to see Meta trying to do the same thing today with a new generation of technology in artificial intelligence, specifically large language models. And in a new twist of cynical self-interest, this time Meta is trying to position itself as the champion of open source, a community that has done so much to advance and spread the benefits of digital technology in a more equitable manner.
Don’t buy what Zuckerberg is selling. When you hear, as we all do almost every day now, that artificial intelligence has enormous promise but also significant risks, ask yourself this question: Who is going to benefit from the promise, and who is going to suffer and pay for the risks?
Consider Zuckerberg’s eloquent explanation for why Meta is releasing its Llama AI models as “open source.” He claims that open-source AI is good for developers because it gives them more control and freedom; good for Meta because it will allow Llama to develop more quickly into a diverse ecosystem of tools; good for America because it will support competition with China; and good for the world because “open source is safer than the alternatives.”
But the most important parts of this story are false. The societal risks of open-source AI models are higher and the benefits smaller than those of standard open-source software code. Giving full access to AI model weights significantly lowers the barriers for bad actors to remove safeguards and “jailbreak” a model. Once the model weights are public, there’s no way to rescind or control what a criminal or hostile nation tries to do with them. Meta gives up even visibility into what end-users are doing with the models it releases.
For Meta, open-source AI means taking no responsibility for what goes wrong. If that sounds like a familiar pattern for this company, it should.
Who is this good for? The answer, of course, is Meta. Who bears the risk of misuse? All of us. That is why I think Meta’s concerted PR push around open source as the path forward for AI is a cynical mask. It doesn’t serve the public interest. What Meta wants is a “corporate capture” of the open-source ethos, to once again benefit its own business model and bottom line.
Governments shouldn’t be fooled. Thankfully, the California state legislature isn’t. That body is considering SB 1047, a first-in-the-nation AI safety bill that would establish a light-touch regulatory regime to rebalance the scales of benefit and risk between companies and society.
The legislation protects the public interest in a way that equally protects the scientific and commercial upsides of AI technology, and particularly open-source development. The bill makes sense because the public interest should matter as much as Meta and other AI companies’ stock price when it comes to AI.
But Meta, along with several other tech giants, opposes the bill. Big Tech would prefer the old way of doing business: privatized benefits and socialized risks. Many top AI labs acknowledge the possibility of catastrophic risk from this technology and have committed to voluntary safety testing to reduce those risks. Yet many oppose even light-touch regulation that would make reasonable safety testing mandatory. That’s not a defensible position, particularly for Meta, given its history of systematically shirking responsibility for the harms its products have caused.
Let’s not make the same mistake with generative AI that we did with social media. The public interest in technology shouldn’t be an afterthought — it should be our first thought. If tech titans are able to successfully leverage their money and power to defeat SB 1047, we’re headed back to a world where they get to define “innovation” as a blank check for tech companies to keep the winnings — and make the rest of us pay for what’s broken.
Jonathan Taplin is a writer, film producer and scholar. He is the director emeritus of the Annenberg Innovation Lab at the University of Southern California. His recent books on technology include “The End of Reality” and “Move Fast and Break Things.”