Securities
Jan. 26, 2026
The rise of 'AI-washing' claims in securities class actions
As AI hype accelerates, a wave of "AI washing" securities lawsuits--alleging companies overstated or misrepresented their AI capabilities--highlights the need for careful, fact-based disclosures, forward-looking safe-harbor language and proactive risk management to mitigate litigation exposure.
In the race to tout "cutting-edge" AI capabilities,
corporate optimism can outpace reality--and investors have taken note. In 2024
and 2025, shareholder plaintiffs brought approximately 29 federal securities
class actions alleging "AI washing"--the overstatement or misrepresentation of a
company's AI use, novelty or performance.
Several recent cases highlight the importance of
exercising caution in making public disclosures. For example, in Black v.
Snap Inc., plaintiffs alleged that Snap misled investors about the adoption
of an alternative advertiser tracking tool and its effect on advertising
revenue. No. 2:21-cv-08892 (C.D. Cal. filed Nov. 11, 2021). The parties have
since agreed to a $65 million settlement, currently pending approval.
Similarly, in Hoare v. Oddity Tech Ltd., plaintiffs alleged that the IPO
offering documents overstated the company's AI capabilities and AI's
contribution to sales. No. 1:24-cv-05037 (E.D.N.Y. filed July 19, 2024), transferred
to 1:24-cv-06571 (S.D.N.Y. 2024).
Companies must also disclose risks posed by third-party AI
tools. In Tamraz v. Reddit, Inc., investors alleged Reddit did
not adequately disclose the risk Google's AI capabilities posed to Reddit's
revenue. No. 3:25-cv-05144 (N.D. Cal. filed June 18, 2025). The complaint
alleges that a significant portion of Reddit's user traffic originates with Google
searches and that Google's AI Overviews feature reduces the need to click through to
Reddit for an answer; as a result, the complaint claims, the feature diverted user
traffic--and with it, advertising revenue--away from Reddit.
While several AI-washing cases remain in early stages,
others offer instructive lessons at the pleading stage. In Lamontagne v.
Tesla, plaintiffs alleged that Tesla made materially misleading statements
about the timeline, safety and capability of its autonomous driving technology.
No. 23-cv-00869, 2024 WL 4353010 (N.D. Cal. Mar. 28, 2025). Ultimately, the
court granted Tesla's motion to dismiss, finding that many of the challenged
statements were protected by the safe harbor because they were forward-looking,
constituted mere puffery or contained adequate cautionary language. For
example, the court found that statements regarding projected launch dates were
plainly forward-looking statements, and statements that the technology is
"super," "superhuman," and that the company wants to "get as close to
perfection as possible" were held to be vague statements of corporate optimism
that are not actionable. Although the case was dismissed, statements that
safety "bears out in the statistics" and references to "objective numbers"
regarding the low likelihood of injury were nonetheless found to be actionable.
A similar lesson, albeit with a different outcome, is
highlighted by In re GigaCloud Technology
Inc. Securities Litigation. No. 23-cv-10645, 2025 WL 307378 (S.D.N.Y. Jan.
27, 2025). Plaintiffs' claims that GigaCloud misrepresented the sophistication
of its technology were held to be insufficient at the motion to dismiss stage
because "vague corporate-speak" describing the strength of technology is mere
puffery. However, plaintiffs' specific allegations that GigaCloud's claimed use
of AI in its logistics operations was false were adequately pled. Accordingly,
while the alleged misstatements regarding GigaCloud's marketplace activities
were dismissed, the alleged misstatements regarding GigaCloud's use of complex AI
software to optimize logistics survived the motion to dismiss. In support of
their argument that GigaCloud did not use AI in its logistics, plaintiffs
alleged that GigaCloud's logistics system relied on manual computations and at
least 100 IT employees. Those allegations were supported by information from
nine former employees who worked directly on the company's logistics systems.
The court concluded that plaintiffs' claims rested on
specific factual statements about GigaCloud's use of AI--alleged to be false and
supported by specific contradictory information--and that those allegations met
the motion to dismiss standard.
The trend of AI-related securities suits will continue to
test the balance between rapid innovation and cautious disclosure. Companies
can reduce their exposure by:
• Framing descriptions of in-development AI as forward-looking,
employing safe-harbor disclaimers, and distinguishing between exploratory and
commercial applications.
• Limiting statements about current technology use to
those well-supported by current data--vague or grandiose claims can become the
basis for fraud allegations if outcomes disappoint.
• Proactively assessing competitive technology and
disclosing material uncertainties.
• Acting swiftly when litigation arises by marshaling
data, engaging experts and pressure-testing the complaint while maintaining
consistent and measured communications.
By staying proactive, companies can effectively navigate
the challenges of securities litigation in this dynamic landscape.