Lawyer faces punishment after ChatGPT gave him fake cases to cite in a court-submitted brief

One of the problems with conversational AI chatbots at this stage is that they tend to hallucinate; in other words, they make up information to fit the user’s request. ChatGPT is a language model designed to give the user a response to a question, and in doing so it will invent information to fill any gaps, even when what it comes up with isn’t true.

The New York Times (via Mashable) reported on an attorney named Steven Schwartz of Levidow, Levidow & Oberman, who has practiced law for 30 years. But thanks to ChatGPT, Schwartz could find himself looking for a new profession. Schwartz was representing a client named Roberto Mata, who was suing Colombia-based airline Avianca after his knee was injured by a serving cart that struck him during a flight.

The decision by Schwartz to use ChatGPT could cost him his 30-year legal career

Avianca asked a judge to dismiss the case, and Mata’s lawyers, including Schwartz, submitted a brief citing similar cases that had been heard in court in an attempt to show the judge that the suit should not be dismissed. This is where ChatGPT and Schwartz went wrong. Schwartz had filed the case when it first landed in state court, and he provided the legal research after it was transferred to federal court in Manhattan.

Looking to strengthen his filing, Schwartz turned to ChatGPT to help him find similar cases that had made it to court. ChatGPT produced a list of these cases: Varghese v. China Southern Airlines, Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines. That sounds like a decent list of cases to cite to the judge. There was just one teeny tiny issue: none of those cases were real. They were all made up by ChatGPT.

The attorney never considered that the AI chatbot could give him fake information

Avianca’s legal team and the judge quickly realized that they could not find any of the cases cited in Schwartz’s filing. In an affidavit filed with the court, Schwartz included a screenshot of his interaction with ChatGPT and said that, where the conversational AI chatbot was concerned, he was “unaware of the possibility that its content could be false.” The judge has scheduled a hearing for next month to “discuss potential sanctions” for Schwartz.

Let this be a warning to anyone planning to have AI do some of their work for them. You might think you’re saving time, but you could end up in more trouble if you replace your own hard work with results from an AI chatbot. I would never want my articles to be written by AI. Not only are you hurting yourself by using AI, you are also lying to your audience, either by claiming authorship of something you didn’t write or by possibly feeding them made-up information.

It doesn’t matter whether you are a student using an AI chatbot to help write a paper or a lawyer looking to cite cases: AI chatbots can hallucinate and give you fake information. That’s not to say they can’t be useful; they might point you in the right direction. But once the pointing is done, it is up to you to make sure that the information you’ve obtained is legitimate.




