Graciela Dela Torre copies a message into ChatGPT. It’s an email from her lawyer. She wants to know if he is manipulating her. The chatbot is clear: yes.
Those three seconds mark the beginning of a chain of decisions that will end up in the American courts.
Dela Torre lives in a Chicago suburb and works as a logistics coordinator. After breaking with her lawyer, she started using ChatGPT to prepare legal documents. Some of them cite judicial precedents that never existed.

This story mixes artificial intelligence, labor litigation and an uncomfortable question for the judicial system: what happens when a chatbot begins to behave like a legal advisor?
A conflict that began years earlier
Graciela’s insurer, Nippon Life Insurance Company of America, has sued OpenAI, the creator of ChatGPT. The story begins in 2019, when Dela Torre filed a claim for work disability after developing carpal tunnel syndrome and tennis elbow on the job. The dispute led to litigation over disability insurance.
The process ended with a settlement. As part of that agreement, signed in January 2024, the plaintiff waived any new claims related to the case.
This type of clause is common in judicial agreements in the United States. Its objective is to definitively close the dispute and prevent it from returning to court.
For a time, the conflict seemed settled. But months later, doubts arose. Dela Torre began to suspect that the agreement might have been signed with errors or based on incomplete information. She decided to consult her lawyer.
The answer was clear. The case could not be reopened. ChatGPT said otherwise.
According to the lawsuit, the chatbot interpreted her lawyer’s message as a case of gaslighting, a term that describes a form of psychological manipulation intended to make another person doubt their own perception of the facts.
From that moment on, Dela Torre began consulting the AI on how to challenge the settlement. The chatbot even generated a draft motion to try to reopen the case. She filed the document in court on her own.
In the American judicial system this is known as proceeding pro se, that is, without legal representation. The court analyzed the request. The answer came on February 13, 2025: the case could not be reopened. But the story didn’t end there.
An avalanche of filings
After the court’s rejection, new documents began to arrive. Many documents.
According to the lawsuit, Dela Torre filed at least 44 court documents prepared with the help of the chatbot, among them 21 motions, a subpoena and several procedural notices.
Some citations in these texts were to cases that do not exist. One example mentioned in the lawsuit is the supposed case Carr v. Gateway, Inc. According to the company, no such precedent has ever existed in American jurisprudence. “It only exists in the plaintiff’s documents and in the mind of ChatGPT,” states the court filing.
Responding to this avalanche of documents has had a cost. The insurer estimates that it has spent about $300,000 in legal fees to respond to motions and briefs filed with the court. That is why it has decided to sue OpenAI.
The insurer is claiming that amount in damages, in addition to $10 million in punitive damages.
OpenAI has rejected the allegations and maintains that the lawsuit is meritless. Now it’s up to the courts.
It’s not a new problem
Although the case is unusual, it is not the first time that artificial intelligence has caused problems in court.
In 2023, a federal court in New York sanctioned two lawyers for filing a court brief based on non-existent case law generated by ChatGPT in the case Mata v. Avianca.
Judge P. Kevin Castel discovered that the document cited precedents that did not exist. In his ruling, he described the cases presented as “false judicial decisions with false citations.”
The episode became one of the first public scandals related to the use of artificial intelligence in judicial processes.
Since then some courts have begun to react.
In Texas, federal judge Brantley Starr established a rule for his court that requires lawyers to review any content generated by artificial intelligence before presenting it to a judge.
The reason is simple. Current tools can be wrong. And they do it with great confidence.
The “hallucinations” of the AI lawyer
The phenomenon has a name within the technology industry: artificial intelligence “hallucinations.” A hallucination occurs when a generative model produces false information that looks convincing.
In the legal field the risk is especially high, because responses usually include legal citations, references to precedents and technical arguments. For a user without legal training it may be difficult to distinguish a real ruling from a fabricated one.
Experts have been warning about this problem for some time.
Professor Daniel Martin Katz, a specialist in artificial intelligence applied to law, explains that language models can produce very convincing legal texts. But that doesn’t mean they understand the legal system. In reality, he points out, these systems do not know the law. They predict words.
Other jurists highlight the same risk.
Professor Ashley Deeks, of the University of Virginia, warns that these systems can generate responses that seem authoritative even when they are incorrect. That is precisely the type of error that worries the courts. Because when a fake citation enters a judicial document, it is no longer just a technological failure. It becomes a legal problem.
The question this case raises
The Illinois litigation now adds a new element to the debate. In previous cases, the errors were made by lawyers who used artificial intelligence without verifying the information. This time the lawsuit claims something different: that a virtual assistant acted as a legal advisor.
If the courts accept that argument, the case could become an important precedent for the entire artificial intelligence industry, because more and more people consult ChatGPT before a professional.
And when someone starts trusting a machine to make legal decisions, the question is no longer whether the AI can make mistakes. The problem is who answers when it does.