Technology, Government, Constitutional Law
Jul. 16, 2025
Lawmakers advance AI, online safety bills amid free speech concerns
As industry groups warn of overreach and First Amendment violations, California lawmakers voted to advance sweeping measures targeting AI training, chatbot use and content labeling.
California lawmakers on Tuesday advanced several bills aimed at regulating online marketplaces and limiting artificial intelligence, even as some of the measures' authors acknowledged they could face tough challenges in federal court.
The debate around SB 243, which would impose limits on so-called "companion chatbots," was typical. The bill passed the Assembly Judiciary Committee 7-0, but its author, Sen. Steve Padilla, D-San Diego, acknowledged constitutional pitfalls and federal preemption issues that could derail the measure if it is signed into law. Technology industry lobbyists said they supported its intent but criticized vague language and other flaws.
"We are willing to continue to work on addressing the First Amendment concerns and issues here," Padilla told the committee during a Tuesday morning hearing. "We've tried to tailor that language very narrowly."
First Amendment challenges have derailed several recent laws in California and elsewhere, particularly measures addressing election misinformation. On Friday, a federal judge declined to rule on First Amendment arguments after finding that a California law aimed at curbing online sales of stolen goods was likely preempted by federal law.
Padilla also explained his reasoning for including a private right of action in the bill. He acknowledged the state court system is "on the verge of collapse" but said there needed "to be a remedy here that is specific to this harm."
"Children cannot be used as Guinea pigs to test the safety of evolving new technology and products," Padilla said. "We've seen the consequences of our inaction towards the dangers posed by social media, and the stakes are too high to make the same mistakes again.
"We completely agree with the intent of the bill to create strong, sensible guardrails," Robert Boykin, TechNet's executive director for California and the Southwest, told the committee.
He added, "However, we have serious concerns that the definitions in the bill are far too broad and we're sweeping in a wide array of general-purpose systems."
Boykin said SB 243's definition of "companion chatbot" was broad enough to rope in general-purpose tools like Google Gemini "because they are capable of human-like conversations." He said the bill could also sweep in education software, and that its remedies, including potential damages, were "overly punitive."
TechNet and other organizations representing the industry have filed, and often won, a series of cases in California and elsewhere. A committee analysis prepared for the hearing even gamed out potential legal challenges by looking at a recent California law that sought to force social media companies to report on their content moderation policies.
"The Ninth Circuit's recent decision in X Corp. v. Bonta (2024) 116 F.4th 888, which enjoined key portions of AB 587 (Gabriel, 2022), is particularly instructive... The court held that the reporting requirements compelled non-commercial, content-based speech and were therefore subject to strict scrutiny," it noted.
Two other artificial intelligence bills widely opposed by industry groups were heard in the Senate Judiciary Committee Tuesday. AB 1064 would bar companies from training AI systems on data from children under 13 without parental consent. AB 853 would require companies to label content created by artificial intelligence. The Computer & Communications Industry Association, which has filed several recent lawsuits, has been urging lawmakers to vote no on both.
AB 853's author, Assemblymember Buffy Wicks, D-North Oakland, told the committee her bill would help people distinguish between increasingly convincing deepfake videos and other content. But Aodhan Downey, the CCIA's state policy manager, asked lawmakers to reject AB 853 and give a bill passed last year more time to go into effect. SB 942, signed in 2024, requires developers to create AI detection tools and includes non-binding guidelines for labeling content.
"We're concerned that AB 853 imposes premature and overly rigid requirements on emerging technologies, particularly watermarking and provenance tools that are still under development," Downey said. "This legislature took a major step forward last year with the passage of Senate bill 942 which is still in the early stages of implementation."
Neither bill had received a vote by press deadline. Wicks also defended another measure, AB 1043. It would set up a framework for online age verification, including systems that would let parents bar children from looking at age-inappropriate content.
Wicks told the committee a recent U.S. Supreme Court decision gave lawmakers a green light by upholding a Texas age verification law designed to block children from accessing sexually explicit content. She added that she had accepted a series of amendments designed to narrow the bill's scope and make it more likely to stand up in court.
Malcolm Maclachlan
malcolm_maclachlan@dailyjournal.com