California Passes Act to Curb AI's Destructive Potential
Date: 08/29/2024
Tag: #ai #california #gavinnewsom #psd #powerelectronics

California is taking major steps to make AI a bit easier to control, and possibly to defang. According to Reuters, the California State Assembly recently passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), the first of its kind in the U.S.

We've all seen the killer robot movies, and we all know about AI's destructive potential – at June's Yale CEO Summit, 42% of surveyed CEOs said AI could destroy humanity in 5-10 years (and we're already about 27 years overdue for Terminator's Judgment Day). There's also legitimate concern that AI might be impossible to fully control (or that we've already passed that point). After all, AI has long since passed the Turing Test and is skilled at manipulating humans.

In response to all this agita, California introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act earlier this summer, attempting to corral AI's more destructive tendencies (and creating a myriad of specific obligations for an industry that was none too pleased with the act). California has now passed the act, and after the Senate votes on (and likely approves) the amended version, it will head to the desk of Governor Gavin Newsom for final approval.

So what's in the act? For one, it requires developers to include what amounts to an easy kill switch, to protect the model against "unsafe post-training modifications," and to enact safety procedures that check whether the model or its derivatives are capable of "causing or enabling a critical harm." While industry has attacked the bill as overly burdensome and a roadblock to innovation, Senator Scott Wiener, the bill's main author, is undeterred. "We've worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill," he said.
"SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted." We should find out the act's ultimate fate by early September, possibly by the time you read this.