Lawyers blame ChatGPT for their citation of bogus past cases - Los Angeles Times

Lawyers blame ChatGPT for tricking them into citing bogus past cases in court

A judge is deciding whether to sanction two lawyers who blamed ChatGPT for tricking them into including fake legal research in a court filing.
(Richard Drew / Associated Press)

Two apologetic lawyers responding to an angry judge in federal court blamed ChatGPT for tricking them into including fake legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases Schwartz thought were real but were actually invented by the artificial intelligence-powered chatbot.

Schwartz explained that he used the groundbreaking program as he hunted for legal precedents supporting a client’s case against the Colombian airline Avianca for an injury incurred on a 2019 flight.

The chatbot, which has fascinated the world with its production of essay-like answers to prompts from users, suggested several cases involving aviation mishaps that Schwartz hadn’t been able to find through usual methods used at his law firm.

Problem was, several of those cases weren’t real or involved airlines that didn’t exist.

Schwartz told U.S. District Judge P. Kevin Castel in Manhattan on Thursday that he was “operating under a misconception ... that this website was obtaining these cases from some source I did not have access to.”

He said he “failed miserably” at doing follow-up research to ensure that the citations were correct.

“I did not comprehend that ChatGPT could fabricate cases,” Schwartz said.

Microsoft is investing some $10 billion in OpenAI, the company behind ChatGPT.

Its success, demonstrating how artificial intelligence could change the way humans work and learn, has generated fears in some quarters. Hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Castel seemed both baffled and disturbed by the unusual occurrence in his courtroom and disappointed that the lawyers did not act quickly to correct the bogus legal citations when they were first alerted to the problem by Avianca’s lawyers and the court. Avianca pointed out the bogus case law in a March filing.

The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a wrongful death case brought by a woman against an airline, only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses.

“Can we agree that’s legal gibberish?” Castel asked.

Schwartz said he erroneously thought that the confusing presentation resulted from excerpts being drawn from different parts of the case.

When Castel finished his questioning, he asked Schwartz if he had anything else to say.

“I would like to sincerely apologize,” Schwartz said.

He added that he had suffered personally and professionally as a result of the blunder and felt “embarrassed, humiliated and extremely remorseful.”

He said that he and the firm where he worked had put safeguards in place to ensure that nothing similar happens again.

LoDuca, another lawyer who worked on the case, said he trusted Schwartz and didn’t adequately review what he had compiled.

After the judge read aloud portions of one cited case to show how easy it was to discern that it was “gibberish,” LoDuca said: “It never dawned on me that this was a bogus case.”

He said the outcome “pains me to no end.”

Ronald Minkoff, an attorney for the law firm, told the judge that the submission “resulted from carelessness, not bad faith,” and should not result in sanctions.

He said lawyers have historically had a hard time with technology, particularly new technology, “and it’s not getting easier.”

“Mr. Schwartz, someone who barely does federal research, chose to use this new technology. He thought he was dealing with a standard search engine,” Minkoff said. “What he was doing was playing with live ammo.”

Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he introduced the Avianca case during a conference last week that attracted dozens of participants in person and online from state and federal courts in the U.S., including Manhattan federal court.

He said the subject caused shock and befuddlement at the conference.

“We’re talking about the Southern District of New York, the federal district that handles big cases — 9/11 to all the big financial crimes,” Shin said. “This was the first documented instance of potential professional misconduct by an attorney using generative AI.”

He said the case demonstrated that the lawyers might not have understood how ChatGPT works: it tends to hallucinate, describing fictional things in a manner that sounds realistic but is not.

“It highlights the dangers of using promising AI technologies without knowing the risks,” Shin said.

The judge said he would rule on sanctions at a later date.
