
Commentary: California's AI safety bill is under fire. Making it law is the best way to improve it

Herbert Lin, Los Angeles Times

Published in Op Eds

On Aug. 29, the California Legislature passed Senate Bill 1047 — the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — and sent it to Gov. Gavin Newsom for signature. Newsom’s choice, due by Sept. 30, is binary: Kill it or make it law.

Acknowledging the possible harm that could come from advanced AI, SB 1047 requires technology developers to integrate safeguards as they develop and deploy what the bill calls “covered models.” The California attorney general can enforce these requirements by pursuing civil actions against parties that aren’t taking “reasonable care” that 1) their models won’t cause catastrophic harms, or 2) their models can be shut down in case of emergency.

Many prominent AI companies oppose the bill either individually or through trade associations. Their objections include concerns that the definition of covered models is too inflexible to account for technological progress, that it’s unreasonable to hold them responsible for harmful applications that others develop, and that the bill overall will stifle innovation and hamstring small startup companies without the resources to devote to compliance.

These objections are not frivolous; they merit consideration and very likely some further amendment to the bill. But the governor should sign it regardless, because a veto would signal that no regulation of AI is acceptable now, and probably none until catastrophic harm occurs. Such a position is not the right one for governments to take on such technology.

The bill’s author, Sen. Scott Wiener (D-San Francisco), engaged with the AI industry on a number of iterations of the bill before its final legislative passage. At least one major AI firm — Anthropic — asked for specific and significant changes to the text, many of which were incorporated in the final bill. Since the Legislature passed it, the CEO of Anthropic has said that its “benefits likely outweigh its costs … [although] some aspects of the bill [still] seem concerning or ambiguous.” Public evidence to date suggests that most other AI companies chose simply to oppose the bill on principle, rather than engage with specific efforts to modify it.

What should we make of such opposition, especially since the leaders of some of these companies have publicly expressed concerns about the potential dangers of advanced AI? In 2023, the CEOs of OpenAI and Google’s DeepMind, for example, signed an open letter that compared AI’s risks to those of pandemics and nuclear war.

A reasonable conclusion is that they, unlike Anthropic, oppose any kind of mandatory regulation. They want to reserve for themselves the right to decide when the risks of an activity, a research effort or a deployed model outweigh its benefits. More importantly, they want those who develop applications based on their covered models to be fully responsible for risk mitigation. Recent court cases have suggested that parents who put guns in the hands of their children bear some legal responsibility for the outcome. Why should the AI companies be treated any differently?

The AI companies want the public to give them a free hand despite an obvious conflict of interest — profit-making companies should not be trusted to make decisions that might impede their profit-making prospects.


We’ve been here before. In November 2023, the board of OpenAI fired its CEO because it determined that, under his direction, the company was heading down a dangerous technological path. Within several days, various stakeholders in OpenAI were able to reverse that decision, reinstating him and pushing out the board members who had advocated for his firing. Ironically, OpenAI had been specifically structured to allow the board to act as it did — despite the company’s profit-making potential, the board was supposed to ensure that the public interest came first.

If SB 1047 is vetoed, anti-regulation forces will proclaim a victory that demonstrates the wisdom of their position, and they will have little incentive to work on alternative legislation. Having no significant regulation works to their advantage, and they will build on a veto to sustain that status quo.

Alternatively, the governor could make SB 1047 law, adding an open invitation to its opponents to help correct its specific defects. With what they see as an imperfect law in place, the bill’s opponents would have considerable incentive to work — and to work in good faith — to fix it. But the basic approach would be that industry, not the government, puts forward its view of what constitutes appropriate reasonable care about the safety properties of its advanced models. Government’s role would be to make sure that industry does what industry itself says it should be doing.

The consequences of killing SB 1047 and preserving the status quo are substantial: Companies could advance their technologies without restraint. The consequences of accepting an imperfect bill would be a meaningful step toward a better regulatory environment for all concerned. It would be the beginning rather than the end of the AI regulatory game. This first move sets the tone for what’s to come and establishes the legitimacy of AI regulation. The governor should sign SB 1047.

____

Herbert Lin is senior research scholar at the Center for International Security and Cooperation at Stanford University, and a fellow at the Hoover Institution. He is the author of “Cyber Threats and Nuclear Weapons.”


©2024 Los Angeles Times. Visit at latimes.com. Distributed by Tribune Content Agency, LLC.
