Lawyer Faces Sanctions After Submitting a Brief Written by ChatGPT With Made-Up Legal Cases
A US attorney is facing possible sanctions in court for using fake citations generated by OpenAI’s ChatGPT in legal research, as first reported by The New York Times. The lawyer, Steven Schwartz, admitted to using ChatGPT to research the case, in which he was representing a client, Mata, who sued a Colombian airline over injuries he said he sustained onboard one of its planes in 2019.
Despite ChatGPT’s widely known warnings that it can sometimes produce incorrect information, Schwartz defended himself in an affidavit, attesting that he was “unaware that its content could be false.” Judge Castel was skeptical of the brief when it was submitted. He questioned its references to previous cases, including Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines. It turned out that those cases did not exist.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Castel said.
Castel issued an order demanding that Mata’s legal team explain the citations. In response, Schwartz vowed never again to use AI to “supplement” his legal research “without absolute verification of its authenticity,” but he now faces sanctions. A hearing on the matter is scheduled for June 8.
This is very different from the case of the Colombian judge who used ChatGPT to help write a legal ruling. In that instance, judge Juan Manuel Padilla was open about using ChatGPT and about asking it questions concerning the case, though not about how much it had helped in drafting the ruling. He asked the chatbot several questions, including, “Is [an] autistic minor exonerated from paying fees for their therapies?” ChatGPT responded, “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.” Padilla said in a radio interview that he saw ChatGPT as an assistant akin to a secretary, able to respond quickly and theoretically improve the justice system’s response time. He did not blindly follow the chatbot’s answer, however, checking it before publishing his ruling.