AI and Defamation

While ChatGPT 4.0 is far more cautious about making statements about specific individuals, it still hallucinates, disseminating damaging misinformation about people.  Litigation concerning prior versions of ChatGPT is winding its way through the courts right now, with cases surviving dismissal on purely legal arguments.  Earlier this year, a Georgia court ruled that OpenAI must defend itself against a defamation claim because ChatGPT falsely stated that a radio host had been sued for embezzling funds from an organization for which he never served as an officer.

How does this happen?

ChatGPT and other similar large language models are trained on enormous text datasets to imitate human writing without intentionally copying text from specific works.  So how do they generate responses?  They generate "predictive text": using the words in the query itself and the patterns learned from their training data, they "predict" what the next logical word in the response should be, one word at a time.  The result reads like fluent prose with proper grammar and writing style rather than a copy of any previously created work. Unsurprisingly, the more written material about a person or event is fed into an AI system, the more detailed and accurate its answer to a query tends to be. But that does not guarantee accurate results, as public figures are learning daily.  And for those of us who have little written about us, the greater the probability that the AI tool will use its programming to fill in the blanks.  This is what the term "hallucinate" means in the AI context.

What Has Been Said By AI?

The types of false "information" produced by ChatGPT and other AI systems are frightening.   The most egregious examples are those in which the AI declaratively states that people engaged in intentional misconduct involving physical, economic, and political crimes.

Even worse, AI tools have given users advice to harm themselves, including to commit suicide, and to leave their spouses.

Defamation: Today and a Prediction for the Future

Defamation claims require plaintiffs to prove that defendants "intentionally" defamed them in writing (libel) or in conversation (slander).  But it is impossible to attribute intent to a piece of software.  We predict states will generally transition to a "gross negligence" standard that initially places the burden on the defamed to lodge complaints with the AI company publishing the information.  We do not believe "ordinary negligence" will suffice to prove a defamation claim.

We also believe individuals or groups of individuals, whether internal or external to the company, will need to inform the AI company through a "notice and takedown" regime.  We further believe there will be a two-tiered system in which individual complaints will need to be addressed quickly (within five days) to avoid a successful liability claim.  After a certain number of complaints about a particular issue, the problem will need to be corrected almost immediately (within two days); otherwise, the state could impose civil fines. The corrective time periods must be short; after all, AI companies should be able to move quickly.
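The predicted two-tier regime can be summarized as a simple deadline rule. The sketch below is purely illustrative; the five-day and two-day windows come from the prediction above, while the escalation threshold (here, three complaints) is a hypothetical number chosen for the example, since the post leaves "a certain number" unspecified.

```python
from datetime import date, timedelta

# Illustrative parameters -- not actual law or any enacted statute.
INDIVIDUAL_DEADLINE_DAYS = 5   # tier 1: first complaints about an issue
ESCALATED_DEADLINE_DAYS = 2    # tier 2: repeated complaints about the same issue
ESCALATION_THRESHOLD = 3       # hypothetical count that triggers tier 2

def correction_deadline(complaint_count, filed_on):
    """Return the date by which the AI company must correct the output."""
    days = (ESCALATED_DEADLINE_DAYS
            if complaint_count >= ESCALATION_THRESHOLD
            else INDIVIDUAL_DEADLINE_DAYS)
    return filed_on + timedelta(days=days)

print(correction_deadline(1, date(2024, 6, 3)))  # 2024-06-08 (five-day tier)
print(correction_deadline(4, date(2024, 6, 3)))  # 2024-06-05 (two-day tier)
```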

David Seidman is the principal and founder of Seidman Law Group, LLC.  He serves as outside general counsel for companies, which requires him to consider a diverse range of corporate, dispute resolution and avoidance, contract drafting and negotiation, and other issues. He can be reached at david@seidmanlawgroup.com or 312-399-7390.

This blog post is not legal advice.  Please consult an experienced attorney to assist with your legal issues.
