Federal Trade Commission Examining OpenAI for Potential Consumer Harm

The Federal Trade Commission is investigating OpenAI, the maker of ChatGPT, to determine whether the company has violated consumer protection laws. The Washington Post was first to report the news and published the FTC's 20-page civil investigative demand (CID), a document similar to a subpoena. A source familiar with the matter confirmed to CNBC that the document is authentic. The FTC declined to comment.

The document states that the probe will center on whether OpenAI has "engaged in unfair or deceptive privacy or data security practices" or "engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm, in violation of Section 5 of the FTC Act."

Artificial intelligence has drawn intense interest in Washington, with lawmakers weighing whether new laws are needed to protect intellectual property and consumer data in the era of generative AI, which depends on massive datasets to learn. The FTC and other agencies have indicated that they have legal authority to target any harm caused by AI.

The CID asks OpenAI to provide a list of third parties that have access to its large language models and its ten largest customers or licensees, to clarify how it retains and uses consumer information, to detail how it obtains data to train its LLMs, to describe how it evaluates risk in LLMs, and to explain how it monitors and responds to disparaging or incorrect statements about people.

The CID also questions OpenAI about a bug the company disclosed in March 2023 that "allowed some users to see titles from another active user's chat history" and "may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window."

Until now, OpenAI CEO Sam Altman has had a positive relationship with lawmakers in Washington, who have praised his willingness to discuss the technology and seek regulations around it. Nonetheless, some AI experts have cautioned policymakers that the company has its own motives in describing its vision of regulation and suggested that they consult a broader range of voices.

OpenAI did not immediately reply to CNBC's request for comment.


