OpenAI Counters The New York Times Lawsuit Allegations with Evidence of Manipulated Prompts

San Francisco – OpenAI has responded to The New York Times’ recent lawsuit, accusing the newspaper of using manipulative prompting techniques to provoke its AI model, ChatGPT. The company argues that the lawsuit rests on a misuse of ChatGPT to selectively generate lengthy excerpts of New York Times content.

In the lawsuit filed against OpenAI and Microsoft, The New York Times claims that ChatGPT “recites Times content verbatim.” The newspaper presented evidence showing GPT-4 producing substantial amounts of New York Times content without attribution, asserting that this infringes its copyright.

This accusation is significant as it challenges OpenAI’s argument that its use of data is transformative, which is legally associated with fair use. According to the United States Copyright Office, fair use permits the unlicensed use of copyrighted works in certain circumstances, particularly if the use is transformative and does not replace the original work.

The New York Times rejects OpenAI’s fair use defense, arguing that the use of its content does not qualify. The newspaper contends that the output of OpenAI’s GenAI models competes with and closely mimics the content used to train them, and therefore infringes on its original works rather than transforming them.

OpenAI, for its part, strongly disputes the claims made by The New York Times. The company expressed surprise at the newspaper’s decision to file a lawsuit, saying it had believed negotiations were progressing toward a resolution. OpenAI countered The New York Times’ allegation that GPT-4 outputs verbatim content by explaining that the model is specifically designed not to do so. It argues that The New York Times deliberately used manipulative prompts to subvert GPT-4’s safety measures and elicit the disputed verbatim content.

OpenAI accuses The New York Times of engaging in adversarial prompting, a technique that circumvents an AI model’s guardrails to produce unintended output. The company claims this kind of prompting is not typical usage and violates OpenAI’s terms of use.

OpenAI emphasized its commitment to building resilience against adversarial prompt attacks, including those it says appear in The New York Times’ lawsuit. The company cited its July 2023 response to similar reports, in which it stressed its intent to rectify any unintentional reproduction of content.

The lawsuit between The New York Times and OpenAI showcases contrasting narratives. OpenAI alleges that The New York Times employed adversarial tactics and misused ChatGPT to elicit verbatim responses, undermining the validity of the lawsuit. OpenAI reaffirms its support for journalism and its partnerships with news organizations, and maintains that The New York Times’ lawsuit is meritless.

In summary, OpenAI and The New York Times are locked in a legal battle over allegations of copyright infringement. OpenAI contends that The New York Times manipulated ChatGPT to produce verbatim content, whereas The New York Times argues that OpenAI’s use of its content is not fair use. Both parties are presenting evidence and making their case.