NYC lawyer admits using ChatGPT for research in case where client sued Avianca airlines over knee injury – as it’s revealed the AI chatbot cited cases that it had MADE UP
- Roberto Mata claims his knee was injured when he was struck by a metal serving cart on a flight
- Mata’s lawyer Steven Schwartz submitted a 10-page brief featuring half a dozen relevant court decisions
- But at least six of the cases ‘appear to be bogus judicial decisions with bogus quotes and bogus internal citations’
A New York City lawyer has found himself in hot water after admitting he used fake information provided by ChatGPT for research in a lawsuit against Avianca airlines.
In the lawsuit, Roberto Mata claims his knee was injured when he was struck by a metal serving cart on a flight from El Salvador to Kennedy International Airport in New York back in 2019.
After the airline asked a Manhattan judge to toss out the case because the statute of limitations had expired, Mata’s lawyer Steven Schwartz submitted a 10-page brief featuring half a dozen relevant court decisions.
But the cases cited in the filing – including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines – did not exist.
Avianca’s lawyers told the court they did not recognize the cases, with Bart Banino, of Condon & Forsyth, telling CBS MoneyWatch: ‘We figured it was some sort of chatbot of some kind.’
It turned out that at least six of the cases ‘appear to be bogus judicial decisions with bogus quotes and bogus internal citations,’ said Judge Kevin Castel of the Southern District of New York on May 4.
ChatGPT was first unleashed in November 2022, sparking excitement and alarm at its ability to generate convincingly human-like essays, poems, form letters and conversational answers to almost any question.
But, as the lawyer found out, the technology is still limited and unreliable.
Schwartz, of the firm Levidow, Levidow & Oberman, apologized last week in an affidavit after being called out by the case’s judge, saying he used ‘a source that has revealed itself to be unreliable.’
This week, Schwartz said he ‘greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.’
The lawyer added that he had never used the technology for research purposes before, and ‘was unaware of the possibility that its content could be false.’
He said he had even asked ChatGPT to verify the cases were real, and that it had said they were.
In a copy of the exchange submitted to the judge, Schwartz asked ChatGPT: ‘Is varghese a real case’ and ‘Are the other cases you provided fake.’
The bot replied: ‘No, the other cases I provided are real and can be found in reputable legal databases.’
Fellow attorney Peter LoDuca, whose name also appeared on the bogus court filing, said he had no role in the research but had ‘no reason to doubt the sincerity’ of Schwartz’s work.
A hearing has been set for June 8 to discuss potential sanctions against Schwartz, who has 30 years of experience as a lawyer in New York.
Schwartz has been ordered to show the judge why he shouldn’t be sanctioned for the ‘use of fraudulent notarization.’
By Daily Mail Online, May 30, 2023