
Technology

Jul. 24, 2024

California's Bolstering Online Transparency Act targets bot disclosures in elections and transactions


California passed the Bolstering Online Transparency Act (Senate Bill 1001), which became effective July 1, 2019, and requires clear and conspicuous disclosures of bots. By Tracy Rubin, Vince Sampson and Patrick Van Eecke

Tracy Rubin

Partner Cooley

Tracy Rubin is a member of Cooley's Technology Transactions Group.

Vince Sampson

Special Counsel Cooley

Vince Sampson leads Cooley's Government Analytics practice.

Patrick Van Eecke

Partner Cooley

Patrick Van Eecke co-chairs Cooley's Global Cyber/Data/Privacy Practice.


In September 2018, California passed the Bolstering Online Transparency Act (Senate Bill 1001), which became effective July 1, 2019. The law requires clear and conspicuous disclosure of bots ("automated online account[s] where all or substantially all of the actions or posts of that account are not the result of a person") used to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. Notably, the commercial prong has generally been interpreted to include any customer service-related bot, making the law far-reaching. With the explosion of generative artificial intelligence technology, this transparency concern has become a common theme, stretching across US states and the federal government, as well as internationally.
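For illustration only, here is a minimal sketch of how a customer service chatbot might surface such a disclosure at the start of a session. The class name, disclosure text and session logic below are hypothetical assumptions, not language drawn from SB 1001:

```python
# Hypothetical sketch: prepending a bot disclosure to the first message
# of a customer service chat session. Names and wording are illustrative
# assumptions, not statutory text.

BOT_DISCLOSURE = "You are chatting with an automated bot, not a human agent."

class SupportChatSession:
    def __init__(self):
        self.disclosed = False  # track whether the disclosure has been shown

    def reply(self, user_message: str) -> str:
        answer = self._generate_answer(user_message)
        if not self.disclosed:
            # Surface the disclosure before any substantive content,
            # so it is conspicuous at the first interaction.
            self.disclosed = True
            return f"{BOT_DISCLOSURE}\n\n{answer}"
        return answer

    def _generate_answer(self, user_message: str) -> str:
        # Placeholder for the actual bot logic.
        return "Thanks for your question! Here is what I found..."

if __name__ == "__main__":
    session = SupportChatSession()
    print(session.reply("Where is my order?"))   # includes the disclosure
    print(session.reply("Can I get a refund?"))  # disclosure already shown
```

The design point is that the disclosure precedes any substantive content, which tracks the statute's "clear and conspicuous" standard.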

Within the US, individual states are taking varied approaches to regulating AI, but a number include transparency as a core tenet. For example:

• Utah's Artificial Intelligence Policy Act (Senate Bill 149), signed into law March 13, 2024, provides that companies using generative AI to provide the services of "regulated occupations" (e.g., medical professions) must always disclose that an individual is interacting with generative AI, while companies using generative AI to interact with individuals in other commercial activities must disclose that the person is interacting with generative AI if that person asks.

• The Colorado Artificial Intelligence Act (Senate Bill 24-205), signed into law May 17, 2024, requires that companies deploying AI systems to consumers ensure disclosure to each consumer that they are interacting with an AI system, unless it would be obvious to a reasonable person.

• New York City Local Law 144, which became effective July 5, 2023, requires specified disclosures to be provided at least 10 days in advance to certain employees and job candidates when automated employment decision tools are used.

• California's wealth of proposed legislation directed to AI and transparency includes:

• Assembly Bill 3211, which would require AI providers to place and test watermarks on AI-generated content and provide tools to identify content created by the provider's generative AI system, and would require certain large online platforms to make disclosures regarding AI-generated content.

• Assembly Bill 2013, which would require a high-level summary of the datasets used in the development of an AI system or service to be posted to the developer's website.

• Senate Bill 942 (the California AI Transparency Act), which would require providers of generative AI systems with an average of one million or more monthly users to provide tools allowing consumers to identify content created by the provider's generative AI system (a minimal sketch of one such identification mechanism follows this list).
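As an illustration of the kind of consumer-facing identification tool these bills contemplate, the sketch below attaches a simple provenance manifest to generated content and verifies it later. The manifest format and function names are hypothetical assumptions; neither AB 3211 nor SB 942 prescribes an implementation, and production systems would more likely rely on standards such as C2PA content credentials or statistical watermarks:

```python
import hashlib
import json

# Hypothetical sketch of a provenance "manifest" attached to AI-generated
# text. The schema below is an assumption for illustration only.

def attach_provenance(content: str, system_name: str) -> dict:
    """Bundle generated content with a disclosure manifest."""
    return {
        "content": content,
        "provenance": {
            "generator": system_name,
            "ai_generated": True,
            # The hash ties the manifest to this exact content.
            "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }

def is_ai_generated(bundle: dict) -> bool:
    """Consumer-facing check: does the manifest match the content?"""
    prov = bundle.get("provenance", {})
    digest = hashlib.sha256(bundle["content"].encode()).hexdigest()
    return prov.get("ai_generated", False) and prov.get("content_sha256") == digest

bundle = attach_provenance("A generated product description.", "ExampleGenAI")
print(json.dumps(bundle, indent=2))
print("AI-generated:", is_ai_generated(bundle))
```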

At the federal level, comprehensive AI legislation has yet to surface, but the concept of transparency has emerged as a central issue for policymakers. Three examples of legislation introduced to address transparency are:

• The Protecting Consumers from Deceptive AI Act, which would ensure that audio or visual content created or substantially modified by generative AI includes a disclosure acknowledging its generative AI origin.

• The REAL Political Advertisements Act, which would require a disclaimer on political ads that use images or video generated by AI.

• The Stop Spying Bosses Act, which would impose disclosure requirements and prohibitions on employers that surveil their workers, with the aim of empowering and protecting those workers.

In addition to these bills, the executive branch is working toward creating a regulatory framework that will include AI transparency. In April 2024, the Department of Commerce released guidance on training and use of AI systems and "understanding the provenance and detection of synthetic content." This action followed a March 2024 Office of Management and Budget memorandum to the heads of all federal agencies to increase the transparency of AI used in the federal government. Under this memorandum, agencies will be required to "improve public transparency in their use of AI," and "publicly ... release expanded annual inventories of their AI use cases, including identifying use cases that impact rights or safety and how the agency is addressing the relevant risk."

On the other side of the ocean, the European Union just adopted its far-reaching Artificial Intelligence Act, which will enter into force later this summer. The EU AI Act lays down rules that AI developers and users should comply with to guarantee a "human centric, secure, trustworthy and ethical AI."

Alongside its rules on prohibited and high-risk AI, the EU AI Act introduces transparency obligations for AI systems that interact with people, such as chatbots that may "pose risks of impersonation or deception":

• Providers of AI systems shall ensure that systems intended to interact directly with people are designed and developed in such a way that the people concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.

• Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake shall disclose that the content has been artificially generated or manipulated.

• People should be notified when they are exposed to AI systems that, by processing their biometric data, can identify or infer the emotions or intentions of those persons or assign them to specific categories. Such categories can relate to aspects including gender, age, hair color, eye color, tattoos, personal traits, ethnic origin, personal preferences, and interests.

This information shall be provided to the end user in a "clear and distinguishable manner" no later than the time of the first interaction with, or exposure to, the chatbot.

The EU AI Act will reach a broad range of companies, even those not established in the EU: it applies as soon as a company offers or operates an AI system on the EU market, or even when only the output produced by such a system is intended to be used in the EU.

While there is still much uncertainty concerning the future of AI regulation, transparency is one area with growing consensus around what is required of companies and what consumers can expect.

Tracy Rubin is a Cooley partner and member of the Technology Transactions Group, Vince Sampson is special counsel with the firm and leads the Government Analytics practice, and Patrick Van Eecke is a partner and co-chairs Cooley's Global Cyber/Data/Privacy Practice.
