Gloss Key Takeaways
  1. “AI-powered” has become a common marketing label, and many companies are overstating or fabricating AI capabilities (“AI washing”) to attract investors and customers.
  2. The SEC is actively cracking down, creating the Cybersecurity and Emerging Technologies Unit (CETU) in Feb 2025 and naming AI a formal exam focus for FY2026.
  3. Early enforcement actions show consequences: Delphia and Global Predictions paid penalties for unsubstantiated AI claims, signaling the SEC will police misleading AI marketing.
  4. Cases like Presto Automation and Nate Inc. show the risk escalates when claims diverge sharply from reality, including potential DOJ involvement and criminal exposure.
  5. Even legitimate AI users face compliance risk because the line between enthusiastic marketing and material misrepresentation can be thin and fact-dependent.

The SEC Is Coming for Your AI Claims

AI powered label being peeled back

Somewhere in the last two years, "AI-powered" became the new "organic." Slap it on the label, watch the valuation climb, hope nobody checks what's inside the box.

The Securities and Exchange Commission checked.

The term is AI washing

AI washing works exactly like greenwashing. A company makes bold claims about artificial intelligence capabilities it doesn't actually have, or dramatically overstates what its AI can do. The goal: attract investors, win customers, juice stock prices by riding the hottest trend in tech.

The scale of the problem is hard to overstate. A recent survey found that 88% of companies report using AI in some form. Only 39% see significant bottom-line impact. That gap between what companies say AI is doing and what it's actually doing has become one of the defining features of this era.

Regulators noticed.

The crackdown has teeth

In February 2025, the SEC established its Cybersecurity and Emerging Technologies Unit (CETU) specifically to police emerging tech claims. This wasn't a press release about "monitoring trends." It was the creation of an enforcement body with investigative power and the ability to bring charges.

The SEC also identified artificial intelligence as a formal examination focus for fiscal year 2026. Every public company making AI claims should expect scrutiny. Cases are already landing.

Judge's gavel next to laptop showing market data

First blood

March 2024: the SEC settled with two investment advisers, Delphia and Global Predictions, for false claims about their use of AI. Delphia told investors its AI could "predict which companies and trends are about to make it big." Global Predictions marketed itself as the "first regulated AI financial advisor."

Neither company could back up these claims. Delphia paid $225,000. Global Predictions paid $175,000. Small penalties, but the SEC Chair's comment was pointed: investment advisers should not mislead the public by claiming they're using AI when they're not.

That was the warning shot.

Presto Automation, a restaurant tech company, faced charges for misrepresenting its "Presto Voice" product, marketed as an AI-driven solution for drive-through ordering. The reality didn't match what investors were told.

Then came Albert Saniger, CEO of Nate Inc., who raised $42 million claiming his shopping app was powered by artificial intelligence. Both the SEC and DOJ brought charges. When the Department of Justice gets involved, you've crossed from regulatory action into potential criminal territory.

Why this is happening now

The AI gold rush created perverse incentives. Venture firms poured money into anything with "AI" in the pitch deck. Public markets rewarded companies that announced AI initiatives. Customers prioritized vendors who claimed AI capabilities.

When saying "we use AI" can add billions to your market cap, the temptation to stretch the truth is enormous. Some companies stretched it past breaking.

The gap between claim and reality shows up the same way every time. A company announces an "AI-powered" feature that turns out to be a rules-based system with an if/else tree. A startup raises a Series B on the strength of its "proprietary AI engine" that's actually a thin wrapper around a commercial API. An enterprise vendor markets "AI-driven insights" that are really pre-programmed dashboards with new labels.
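The first pattern above is worth making concrete. Here is a minimal, entirely hypothetical sketch (all names and thresholds are invented for illustration) of a feature that a pitch deck might call an "AI-powered recommendation engine" but that is, under the hood, a hard-coded rules tree:

```python
# Hypothetical example: a feature marketed as "AI-driven plan
# matching." There is no model here, no training, no data pipeline.
# It is an if/else tree. All names and thresholds are invented.

def recommend_plan(monthly_spend: float, team_size: int) -> str:
    """Returns a pricing tier from two inputs via fixed rules."""
    if team_size > 50:
        return "enterprise"
    elif monthly_spend > 500:
        return "pro"
    elif monthly_spend > 0:
        return "starter"
    else:
        return "free"
```

Nothing is wrong with shipping rules like these. The problem is describing them to investors as a proprietary AI engine, which is exactly the gap the SEC's cases target.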

None of this is new in the history of tech hype. What's new is the speed. AI washing exploded faster than greenwashing because the financial incentives are larger and the technology is harder for non-experts to evaluate. You can visit a factory to verify "green" claims. You can't visit a model to verify "AI" claims, not easily.

Pitch deck with AI claims marked with skeptical red pen

The compliance problem for legitimate companies

The crackdown creates real risk even for companies that genuinely use AI. The line between marketing enthusiasm and material misrepresentation isn't always obvious.

If your company tells investors that AI is "core to your product," you need to demonstrate what that means concretely. If your earnings calls reference AI-driven revenue growth, the AI needs to be actually driving that growth. If your S-1 describes AI capabilities, those capabilities need to exist in production, not in a research prototype.

The standard is simple: can you substantiate what you're claiming?

Companies that genuinely deploy AI should welcome this. AI washing hurts legitimate AI companies by flooding the market with noise. When every company claims to be AI-powered, the term loses meaning.

What comes next

The SEC's CETU is not going away. The examination focus on AI for 2026 means more investigations, larger penalties, and potentially more criminal referrals to the DOJ.

For investors: develop better frameworks for evaluating AI claims. Ask what specific models a company uses. Ask where in the product pipeline AI is deployed. Ask what percentage of revenue comes from AI-driven features. Companies that answer precisely are probably telling the truth. Companies that respond with vague language about "leveraging the power of AI" are waving a red flag.

For companies: the short-term benefit of exaggerating AI capabilities now carries legal and financial risk. A $175,000 settlement might seem manageable. A DOJ investigation is not.

The correction

AI washing was always going to hit a wall. You can't sustain a gap between claims and reality forever, especially not when public markets and investor capital are involved.

The companies that survive this scrutiny will be the ones that show their work. Not companies that sprinkle "AI" into press releases, but companies that point to specific models, specific data pipelines, specific measurable outcomes.

That's a higher bar than most of the market has been clearing. That's the point.


Marco Kotrotsos writes about practical AI implementation at gloss.run and acdigest.substack.com.

Gloss What This Means For You

Treat AI claims like regulated disclosures, not hype: make sure anything you tell investors or customers is specific, provable, and consistent with how the product actually works. Keep documentation that substantiates what “AI” does (and doesn’t do), and avoid implying proprietary models or predictive capabilities if you’re really using rules, dashboards, or a thin wrapper over third-party APIs. If AI is described as “core” to the business, align marketing, product, and legal/compliance so you can defend that statement under SEC scrutiny.