Lawyer Blames ChatGPT For Fake Citations In Court Filing

A lawyer who relied on ChatGPT to prepare a court filing for his client is finding out the hard way that the artificial intelligence tool has a tendency to fabricate information.

Steven Schwartz, a lawyer for a man suing the Colombian airline Avianca over a metal beverage cart allegedly injuring his knee, is facing a sanctions hearing on June 8 after admitting last week that several of the cases he supplied the court as evidence of precedent were invented by ChatGPT, a large language model created by OpenAI.

Lawyers for Avianca first brought the concerns to the judge overseeing the case.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” U.S. District Judge P. Kevin Castel said earlier this month after reviewing Avianca’s complaint, calling the situation an “unprecedented circumstance.”

The invented cases included decisions titled “Varghese v. China Southern Airlines Ltd.,” “Miller v. United Airlines Inc.” and “Petersen v. Iran Air.”

Schwartz ― an attorney with Levidow, Levidow & Oberman who’s been licensed in New York for more than 30 years ― then confessed in an affidavit that he’d used ChatGPT to produce the cases in support of his client and was “unaware of the possibility that its content could be false.”

Schwartz “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity,” he stated in the affidavit.

Peter LoDuca, another lawyer at Schwartz’s firm, argued in a separate affidavit that “sanctions are not appropriate in this instance as there was no bad faith nor intent to deceive either the Court or the defendant.”

The sanctions may involve Schwartz paying the attorneys’ fees that the other side incurred while uncovering the false information.

This isn’t the first time ChatGPT has “hallucinated” information, as AI researchers refer to the phenomenon. Last month, The Washington Post reported on ChatGPT putting a professor on a list of legal scholars who had sexually harassed someone, citing a Post article that didn’t exist.

“It was quite chilling,” the law professor, Jonathan Turley, said in an interview with the Post. “An allegation of this kind is incredibly harmful.”
