Humiliated lawyers fined $5,000 for submitting ChatGPT hallucinations in court: ‘I heard about this new site, which I falsely assumed was, like, a super search engine’
So they used ChatGPT to do their work, didn't validate it, cited made-up cases to support theirs, and when they got caught, they lied. And all they got was a $5K fine? Wtf?
which I falsely assumed was, like, a super search engine
A "super search engine" is still a search engine. If you're incapable of validating the results, or don't know that you should, you shouldn't be a lawyer at all.
I mean, the dude apparently didn't even know how to read a case citation and guessed that F.3d stood for Federal District 3 (spoiler: it's the Federal Reporter, Third Series). Not that I would have guessed any differently, but I never had any kind of legal education, let alone being a lawyer, who supposedly deals with case citations all the time.
Everyone wonders why ChatGPT is so heavily censored; this is a good example of why. However, maybe instead of "As an AI language model" it should say something like, "Large language models like me tend to hallucinate/make things up and confidently convey them in my responses. I will leave it up to you to validate what I say." The ultimate problem is that the general public is treating LLMs like super sci-fi AI, when they are basically fantastic autocomplete.
"Super autocomplete" doesn't sound as appealing as "AI". The general public needs to know that, though, so they can adjust their expectations. For example, it doesn't make sense to expect an autocomplete system to solve complex math problems (something people nevertheless use ChatGPT for).
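To make the "fantastic autocomplete" point concrete, here's a toy bigram sketch. A real LLM is a transformer conditioning on far more context, not a bigram table, but the core task is the same: predict a plausible next token, with no built-in notion of truth. The training text and function names below are made up for illustration.

```python
import random

# Toy bigram "autocomplete": pick the next word purely from which words
# followed the current word in the training text. The output is fluent
# and legal-sounding, but nothing guarantees it is true or even real.
training_text = (
    "the court held that the motion was denied and the case was "
    "dismissed and the court held that the appeal was denied"
)

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def complete(model, start, length=8, seed=0):
    """Greedily sample a continuation, one word at a time."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

model = build_model(training_text)
print(complete(model, "the"))
```

The sampler happily stitches together fragments like "the court held that the case was denied", a sentence that appears nowhere in its training data. Scale that up a few billion parameters and you get confident citations to cases that don't exist.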
Here's the thing, even if you had zero intention of actually reading a case, there are STILL next steps once you get a cite. There is an entire "skill" you're taught in law school called Shepardizing (based on an older set of books that helped with this task) where you have to see if your case has been treated as binding precedent, had distinctions drawn to limit its applicability, or was maybe even overturned. Back when I was learning, the online citators would put up handy-dandy green, yellow, and red icons next to a case, and even the laziest law student would at least make sure everything was green before moving on in a Shepardizing quiz without looking deeper. And even THAT was just for a 1-credit legal research class.
These guys were lazy, cheap (they used "Fast Case" initially when they thought they had a chance in state court; it's a third-rate database that you get for free from your state bar and is indeed often limited to state law), and stupid. They didn't even commit malpractice with due diligence. I can only assume that they were "playing out the string" and extracting money from their client until the Federal case was dismissed with prejudice, but they played stupid games and won stupid prizes.
Here lies the problem: ChatGPT is not a search engine. Instead, you can think of it as a compressed JPEG of the Internet (credit to Ted Chiang). It can get you things that LOOK right if you squint a bit, but you just can't be sure they aren't some random compression artifacts.
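Chiang's lossy-compression analogy can be made concrete with a crude sketch (my own illustration, not his): quantize some values down to a few levels, then reconstruct them. The result resembles the original overall, yet no individual value survives exactly, much like an LLM reproducing the gist of its training data while fabricating the specifics.

```python
# Crude lossy "compression": store each value as one of a few coarse
# levels, then reconstruct from the level midpoints. The reconstruction
# looks plausible, but every exact value is an artifact of the scheme.
def compress(values, levels=4, lo=0.0, hi=1.0):
    step = (hi - lo) / levels
    return [int((v - lo) / step) for v in values]  # tiny integers only

def decompress(codes, levels=4, lo=0.0, hi=1.0):
    step = (hi - lo) / levels
    return [lo + (c + 0.5) * step for c in codes]  # plausible, not exact

original = [0.12, 0.47, 0.51, 0.89]
restored = decompress(compress(original))
print(restored)  # close to the original, but no value is actually right
```

Squint and the restored list looks like the original; check any single entry and it's wrong. That's the failure mode the lawyers hit: the case names and citation formats looked right, the cases themselves were artifacts.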
The problem is that OpenAI is hyping ChatGPT up as something that it is not.