Rezan Bilge Nisli, L.L.M., Class of 2026
Venture capital has always rewarded compelling narratives, and, in today’s market, no narrative commands more investor attention than artificial intelligence. But as the AI boom dominates the market, how much of that narrative reflects reality? At the center of this question lies a practice known as “AI-washing,” which has attracted growing scrutiny: a startup claims that AI powers its core product and raises capital on that basis, yet, behind the curtain, the technology may not exist at all.
The AI label has become a pricing mechanism, and the more a startup exaggerates its capabilities, the higher its valuation climbs. But when that label is fabricated, founder optimism crosses into securities fraud. The U.S. Securities and Exchange Commission (SEC) and the Department of Justice (DOJ) have made clear that they intend to enforce the securities laws aggressively in the AI context. In February 2025, the SEC established a dedicated Cyber and Emerging Technologies Unit (CETU) and placed fraud committed using emerging technologies, including artificial intelligence, among its top enforcement priorities.
The Gap Between Marketing and Reality
On April 9, 2025, the SEC and the DOJ filed parallel civil and criminal actions against Albert Saniger, the founder and former CEO of the shopping app Nate, Inc. (“Nate”). According to the SEC’s complaint, Saniger falsely represented to seed and Series A investors that the app autonomously completed online purchases using AI, machine learning, and neural networks, boasting automation success rates between 93% and 97%. In reality, contract workers based in the Philippines and elsewhere handled the vast majority of orders. The complaint further contends that Saniger directed employees to conceal this reliance on human labor and had engineers manually process orders behind the scenes during investor demonstrations. Between 2019 and 2022, he raised over $42 million through these representations. Once a news report exposed these claims in June 2022, Nate failed to close its next funding round, ceased operations, and eventually dissolved in January 2023, leaving investors with tens of millions of dollars in losses. In announcing the parallel enforcement actions, Acting U.S. Attorney Matthew Podolsky stated that such fraud not only harms unsuspecting investors but also redirects funding away from genuine startups, fosters investor distrust toward authentic innovations, and ultimately hinders the advancement of AI technology.
On April 22, 2025, the SEC and DOJ charged Ramil Palafox, the founder of Praetorian Group International, commercially known as PGI Global, with orchestrating a scheme built on a similar foundation of fabricated AI claims. PGI Global marketed an AI-powered “auto-trading” platform for crypto assets and foreign exchange, but no such platform existed, and the company conducted little to no actual trading. Between 2020 and 2021, Palafox sold “membership” packages promising guaranteed returns, raised approximately $198 million worldwide, and misappropriated more than $57 million for personal expenses while paying earlier investors with funds from new investors. A federal court sentenced him to 20 years in prison in February 2026.
Applying Existing Securities Fraud Doctrine to AI Misrepresentation
Prosecuting these cases has not required the SEC to invoke novel legal theories. The Saniger complaint charged violations of Section 17(a) of the Securities Act of 1933 and Section 10(b) of the Securities Exchange Act of 1934 and the corresponding Rule 10b-5. These are the SEC’s core anti-fraud provisions, prohibiting misstatements and omissions in connection with the offer or sale of securities. To be actionable, the misstatement must be material, meaning there is a substantial likelihood that a reasonable investor would consider the information important. These provisions draw no distinction between public and private issuers. Whether a founder raises capital from three venture funds in a conference room or from tens of thousands of investors on the New York Stock Exchange, the same core anti-fraud prohibitions can apply.
However, the SEC’s ability to detect violations in private markets is structurally limited. Under Regulation D, which governs most private fundraising, companies raising capital from accredited investors are exempt from the registration regime and from the public disclosure framework. Although issuers relying on Regulation D generally must file a Form D notice, their fundraising materials are not subject to routine ex ante SEC review. If no investor or whistleblower comes forward, the fraud may simply go undetected for some time.
As for materiality, while the AI claims in Saniger were specific and quantified, not every case will be this clear. When the overstatement is one of degree rather than outright fabrication, the boundary between permissible marketing and material misstatement blurs. The SEC’s 2026 Examination Priorities attempt to address this by scrutinizing registrants’ representations regarding AI capabilities and whether their operations and controls are consistent with the disclosures made to investors. But that examination program primarily reaches SEC registrants, not unregistered early-stage startups conducting private fundraising. As a result, these companies operate in a regulatory blind spot.
Despite these structural limitations, federal authorities appear determined to police this emerging frontier. In March 2026, at the Financial Stability Oversight Council Roundtable on Strategy and Governance Principles, SEC Chairman Paul S. Atkins reiterated the agency’s commitment to protecting investors, including from AI-related fraud and misconduct. Yet the question remains whether the current enforcement infrastructure, reliant as it is on public disclosures, can muster the investigative reach and technical agility to uncover sophisticated fabrications in early-stage venture capital.