The creator of ChatGPT prepares for a potential conflict with the New York Times and writers over the use of copyrighted material under the concept of "fair use."

Several high-profile lawsuits in a New York federal court will test the fate of ChatGPT and other artificial intelligence products that would not be so articulate had they not ingested vast troves of copyrighted human works.

Are AI chatbots, specifically those built for commercial use by OpenAI and Microsoft, violating copyright and fair-competition laws? Professional writers and media companies face an uphill legal battle in making that case.

“I would like to be optimistic on behalf of the authors, but I’m not. I just think they have an uphill battle,” said Ashima Aggarwal, a copyright attorney who formerly worked for the major academic publisher John Wiley & Sons.

The first lawsuit was filed by The New York Times. The second comes from a group of renowned authors, including John Grisham, Jodi Picoult, and George R.R. Martin. The third was brought by prominent nonfiction writers, among them the author of the Pulitzer Prize-winning biography that inspired the hit film “Oppenheimer.”

The lawsuits bring somewhat different claims, but they all center on the San Francisco-based company OpenAI building its product on other people’s intellectual property, said attorney Justin Nelson, who represents the nonfiction writers and whose law firm also represents The Times.

OpenAI’s position, Nelson said, is that it can take anyone’s intellectual property without consequence, so long as it has been posted on the internet.

In its December lawsuit, The Times argued that ChatGPT and Microsoft’s Copilot compete directly with the very sources they were trained on, diverting web traffic, and with it revenue, from the newspaper and other copyright holders who depend on advertising on their websites to fund their journalism. The suit also included evidence of the chatbots reproducing Times articles verbatim, and said the chatbots have falsely attributed fabricated information to the paper, damaging its reputation.

A single senior federal judge is overseeing all four cases, including the most recent, filed by two more nonfiction authors. Judge Sidney H. Stein was appointed to the Manhattan court in 1995 by then-President Bill Clinton.

OpenAI and Microsoft have not yet formally responded to the New York cases. But OpenAI said in a statement this week that The Times’ lawsuit is without merit and that the chatbot’s ability to repeat some articles word for word was a rare bug.

In a blog post Monday, the company argued that training AI models on publicly available internet material is fair use, citing what it called long-standing and widely accepted precedents. The post also suggested that The Times either instructed the model to regurgitate the articles or cherry-picked its examples from many attempts.

Last year, OpenAI pointed to its licensing deals with The Associated Press, Axel Springer, and other organizations as evidence of its support for a healthy news ecosystem. It paid an undisclosed sum to license AP’s archive of news stories. The New York Times had been negotiating a similar deal but chose to sue instead.

When it announced the AP deal, OpenAI said access to the news agency’s reliable, factual text archive would improve its AI systems. But in this week’s blog post, the company downplayed the importance of news content to AI training, arguing that large language models learn from an enormous aggregate of human knowledge and that no single source, The New York Times included, is significant to the model’s intended learning.

Much of the AI industry’s defense rests on the “fair use” doctrine of U.S. copyright law, which allows certain limited uses of copyrighted works for purposes such as teaching, research, or transforming the work into something different.

The Times’ legal team responded Tuesday that what OpenAI and Microsoft are doing is not fair use at all, because they are using the newspaper’s journalism, without permission or payment, to build substitute products.

So far, courts have largely sided with technology companies in working out how copyright law applies to AI systems. In a defeat for visual artists, a federal judge in San Francisco dismissed much of the first big lawsuit against AI image generators last year, though the artists have since amended their complaint. Another California judge threw out some of comedian Sarah Silverman’s claims against Meta, Facebook’s parent company; her case was revised in December and consolidated with another that includes the authors Ta-Nehisi Coates and Michael Chabon.

The newest lawsuits offer more detailed evidence of alleged harm. But Aggarwal said that when it comes to using copyrighted content to train AI systems that deliver only a small portion of it to users, courts are unlikely to find copyright infringement.

Technology companies point to Google’s successful defense of its digital book library. In 2016, the U.S. Supreme Court let stand lower-court rulings that rejected authors’ claim that Google’s digitizing of millions of books and display of excerpts amounted to copyright infringement.

But judges interpret fair use arguments on a case-by-case basis, and the doctrine is “actually very fact-dependent,” turning on economic impact and other factors, said Cathy Wolfe, an executive at the Dutch firm Wolters Kluwer who also sits on the board of the Copyright Clearance Center, which helps negotiate print and digital media licenses in the U.S.

Just because something is free on a website, Wolfe said, does not mean you can copy it and email it, much less use it for a commercial purpose. She added that she does not know who will prevail, but she believes strongly in protecting copyright for everyone because it fosters innovation.

Some media companies and content creators are looking beyond the courts, urging lawmakers or the U.S. Copyright Office to strengthen copyright protections for the AI era. On Wednesday, a U.S. Senate Judiciary Committee panel heard testimony from media executives and advocates at a hearing on AI’s impact on journalism.

Roger Lynch, chief executive of the Condé Nast magazine chain, planned to tell senators that generative AI companies “are using our stolen intellectual property to build tools of replacement.”

In prepared remarks, Lynch said the company believes a legislative fix could be simple: clarifying that using copyrighted content with commercial generative AI is not fair use and requires a license.

___

This story was first published on January 9, 2024. It was updated on January 10, 2024, to clarify that lawsuits filed by artists against AI image generators, and by authors including Sarah Silverman against Meta, have been amended after judges dismissed portions of them.

Source: wral.com