Unpacking the FTC’s Warning To “Keep Your AI Claims In Check”


On February 27, 2023, the Federal Trade Commission (FTC) business blog published a not-so-thinly-veiled warning to AI developers, sellers, and marketers that they have a duty of care when using the term “artificial intelligence” to market a product, stating: “one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.” The article advised that the FTC is prepared to use enforcement actions to keep AI claims in check. (See Federal Trade Commission Business Blog Keep your AI claims in check.)

Specifically, the article noted that “the companies that do the developing and selling” and the “marketers” behind the products are now on notice that AI products (1) must work as advertised and (2) must pass the efficacy versus risk test. (See id.) To underscore the seriousness of this point, the FTC business blog warned: “false or unsubstantiated claims about a product’s efficacy are our bread and butter.” (Emphasis added.) In other words, the FTC intends to take enforcement action against deceptive AI business advertising. (See id.)

The article set forth the following list of mistakes to avoid when developing, selling, and marketing AI products:

  1. Avoid deceptive advertising of AI products through exaggeration of AI claims. If the developer, seller, or marketer of an AI product exaggerates what the product (or any AI product, for that matter) can do, fails to test the product rigorously, or fails to account for and address potential bias in outcomes, the FTC could bring an enforcement action for deceptive advertising. Specifically, the FTC warns: “Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users under certain conditions.” (See Federal Trade Commission Business Blog Keep your AI claims in check.)
  2. Avoid overpromising the benefits of an AI product. The FTC guidance in this area was short and sweet: if you are advertising AI product enhancement to increase the price of a product or to influence human decision-making, you must be able to provide “adequate proof” to support any claims made about the AI product. Period. (See Federal Trade Commission Business Blog Keep your AI claims in check.)
  3. Know the risks of your AI product. AI products, often produced by third parties, may be developed through machine learning (ML) and/or deep learning (DL). The FTC warns that those who bring AI products to market cannot rely on the “it wasn’t me” defense or the AI “black box”[1] defense to avoid liability for the “reasonably foreseeable risks and impact of your AI product[.]” This means that developers, sellers, and marketers of AI products alike must seriously consider and explore the risks and impact of any AI product before it is released. (See Federal Trade Commission Business Blog Keep your AI claims in check.)
  4. Make straightforward and accurate AI claims. If the product claims to be AI-powered, the AI claim must be accurate. The FTC warns that inaccurate AI claims can and will be sniffed out by the FTC. (See Federal Trade Commission Business Blog Keep your AI claims in check.)

What is the Takeaway?

The takeaway from this recent FTC business blog guidance: the FTC is warning developers, sellers, and marketers of AI products to be very careful about testing, labeling, and advertising AI products and to “Keep your AI claims in check” or risk experiencing the serious p(ai)n of an FTC investigation or enforcement action.

In December 2021, Shannon Boettjer, Esq. successfully completed the course Artificial Intelligence: Implications for Business Strategy through the Massachusetts Institute of Technology (MIT) in conjunction with MIT Sloan School of Management and MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).


[1] AI black box refers to “any artificial intelligence system whose inputs and operations aren’t visible to the user or another interested party. A black box, in a general sense, is an impenetrable system.” (See What is Black Box AI?) For a further explanation of how deep learning and neural networks operate to create the so-called AI black box, see id.; see also, my prior blog on Artificial Intelligence Beware: Quack-Quack is in the Air.
