Samuel Edwards
February 21, 2024
Human beings aren't perfect.
We misremember things. We overestimate our ability to think rationally and logically. By some estimates, we all lie once or twice a day on average. And we're wracked with cognitive biases that render us practically unreliable at times.
But we expect machines to be better.
Certainly, automated robots in factories are more consistent and make fewer mistakes than their human counterparts. So surely, now that we have sophisticated AI-powered tools helping us conduct searches and create content, we can rely on a similar level of consistency and perfection.
Right?
Let's enter the world of AI hallucinations to figure out exactly why this is such a misguided take. And if you're a lawyer, or if you're working in the legal industry, take note: if you aren't familiar with AI hallucinations, or if you don't know how to prevent or handle them, you could end up jeopardizing your entire career.
What exactly is an AI hallucination?
This is a relatively new term that's making the rounds, though the concept has been acknowledged since the introduction of generative AI models.
The basic definition of an AI hallucination is any sort of falsehood or nonsense generated by an AI that's presented as if it's real, factual, and verifiable.
If you have used AI tools for any amount of time, you've likely experienced at least one for yourself – and you may not have even realized it. Part of the trouble is that it's very difficult to tell the difference between a correct response and an incorrect one that's presented just as confidently, especially if you're reviewing many responses and lines of content at once.
These hallucinations come in many forms. If you deliberately ask ChatGPT about a historical figure you made up on the spot, it might give you a detailed response on who this person was and what they did. If you give ChatGPT a math problem, even a simple one, it will give you a confident response that may or may not be the correct solution. And even if you ask it an open-ended question about a general topic, it might fabricate elements of the story it returns to you.
Put simply, an AI hallucination is a complete fabrication. It is an empty conjuring of content with no verifiable basis in reality, and yet, it is presented to the user as if it is true.
If you’re wondering what AI hallucinations have to do with lawyers and legal professionals, look no further than the Mata v. Avianca Inc. (2023) incident, where we witnessed the destructive potential of AI hallucinations for lawyers in real-time.
The story goes like this. A man named Roberto Mata had a case against Colombian airline Avianca pending in the Southern District of New York. Steven A. Schwartz of Levidow, Levidow & Oberman consulted ChatGPT as a supplement to his legal research while drafting a response to an Avianca motion to dismiss – a fact he later disclosed to the court.
As part of his submission, Schwartz included several cases, complete with judicial decisions, quotes, and internal citations. But Judge P. Kevin Castel found that the plaintiff’s filing contained six cases that were completely fabricated – including fabricated quotes and citations.
Right now, you might be thinking, “Wow, what an idiot! Why would he blindly trust AI to do his work for him?” But there are two things you need to keep in mind.
First, ChatGPT also generated real, verifiable cases and quotes – and on the surface, it's almost impossible to distinguish these from the false ones. If you verified that three cases were completely legitimate, you might instinctively assume the remaining cases were legitimate too.
Second, Schwartz asked the AI directly whether these cases were real – and whether they “can be found in reputable legal databases such as LexisNexis and Westlaw.” The AI, as you might expect, answered “yes,” despite this being flatly untrue.
Part of Schwartz’s defense was that he was “unaware of the possibility that [ChatGPT’s] contents could be false.” You, reading this article, can no longer make such a claim.
Steven A. Schwartz, fellow attorney Peter LoDuca, and their firm were ordered to pay a $5,000 sanction, for which they were held jointly and severally liable.
Obviously, AI hallucinations are bad. No one doubts this.
No user wants to be caught with their pants down in front of a judge, forced to defend fake cases. And no AI developer wants to be responsible for the fallout caused by a machine that presents wrong answers with unearned confidence.
So why do AI hallucinations exist?
If you understand how generative AI works, you probably already know the answer to this question. But in case you don't, remember that most generative AI engines aren't actually “intelligent” in the way we usually think about intelligence. It’s tempting, at first glance, to conceptualize AI as a simulacrum of the human brain, doing research blindingly fast, analyzing the information, and coming to reasoned conclusions.
In reality, generative AI is, at its core, a prediction engine. Drawing upon vast stores of training data, the AI engine analyzes your prompt, then generates a response, word by word, based on what it predicts it would most likely find in similar sources.
You can think of it as a souped-up version of the automatic predictive texting feature found on most smartphones. Just as your phone is capable of speculating about the next word you're going to type, a generative AI engine is capable of confidently asserting the next word in a sequence, sometimes thousands of times over.
It's a bit reductive to compare the two, because most generative AI models are genuinely robust and gobsmackingly impressive. But at the end of the day, they are still just prediction engines.
If the model is asked to produce an answer and there is no hard, factual data for it to draw on, it will generate a plausible-sounding substitute.
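To make the “prediction engine” idea concrete, here's a deliberately tiny, purely illustrative sketch in Python – a toy bigram predictor, nothing like a real large language model – showing how a system built only to continue text will always produce something fluent-looking, even when its data gives it no real basis for an answer:

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which.
corpus = (
    "the court granted the motion to dismiss "
    "the court denied the motion for sanctions "
    "the plaintiff filed the motion to dismiss"
).split()

next_words = defaultdict(list)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_words[current_word].append(following_word)

def generate(start, length=8):
    """Always returns a fluent-looking continuation, supported or not."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            # No data to draw on? The engine still has to emit *something*
            # plausible – which is exactly where a hallucination begins.
            candidates = list(next_words.keys())
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

The point isn't the code; it's that fluency and truth come out of the same mechanism, so a confident, well-formed answer tells you nothing about whether it's actually true.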
There are many factors that can make AI hallucinations more likely (which we’ll explain in our “Common Causes” section), but this is the foundation of the problem.
Granted, many AI developers caught onto this problem early and introduced safeguards to minimize occurrences. For example, ChatGPT is likely to tell you when it doesn't know the answer to something or when it doesn't feel confident in what it is saying. But even the best safeguards sometimes fail, and not all AI products have similar safeguards in place.
AI hallucinations are a problem for lawyers – and pretty much anyone else who uses these tools – because they feed you false information. In a formal context, you could damage your professional reputation by presenting these falsehoods as facts or relying on them without verification. In an informal context, you could walk away with false confidence in your understanding of a topic – and end up embarrassing yourself at a dinner party later.
Lawyers should be especially concerned. The legal industry is a very precise and meticulous one, where even the most seemingly trivial exchanges can have incredibly high stakes. In an industry where a simple misplaced comma can cost a company millions of dollars, there can be no room for guesswork or sloppy research.
AI hallucinations are also a problem at a broader scale because not everyone knows they exist or how to handle them. It’s easy to be impressed with or enamored by AI, to the point where a new user may not take the time to think about whether what they’re reading is actually true. Because of this, it’s likely we’ve only seen the very beginning of this problem.
These are some of the most common causes of AI hallucinations for lawyers. But do keep in mind that avoiding these causes, while helpful, isn't a guarantee that you'll never see an AI hallucination.
· Bad training data. Sometimes, AI hallucinations emerge because of bad training data. If an AI engine is trained on resources that contain falsehoods, or on datasets that contradict one another, it's going to end up genuinely confused. Fortunately, most of the better, more widely used products on the market have largely mitigated this problem by relying on thoroughly vetted resources for most applications.
· Inaccurate prompts. Inaccurate prompts can also lead an AI to more hallucinations. As a simple example, if you ask an AI to tell you about the least commonly discussed American presidents, it will probably bring up people like Millard Fillmore or William Henry Harrison. But if you ask it why more people don't talk about American president Thanos Jackson, it might be tempted to spin you a story about when Thanos Jackson served and what his greatest accomplishments were (depending on the sources available to it).
· AI overfitting. AI models trained on limited data tend to fall into the trap of simply memorizing inputs and outputs. Without broader exposure, the AI may become practically incapable of generalizing to new prompts and data. It’s a bit like a history student who only studies a small selection of reading material; they may be able to tell you about certain facts, dates, and actions, but they can’t extrapolate the bigger picture, nor can they tell you about related events. (A brief code sketch of this effect follows the list.)
· Nonsensical prompts. Nonsensical prompts – and prompts the AI has difficulty understanding for other reasons – can also trigger AI hallucinations. The AI might respond as though it understands the prompt when it really doesn't, or it might answer the nonsense with a similar style of nonsense of its own.
· User manipulation. It's also worth noting that AI hallucinations can be induced through intentional user action. For experimentation or amusement, a user can easily trick a generative AI model into fabricating responses, especially by exploiting the other causes above. Fortunately, this cause of AI hallucinations is rarely a problem for lawyers, since a lawyer using this approach will know when a response is a hallucination – or at least be extra judicious when reviewing the outputs.
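If the overfitting analogy feels abstract, here's a small, self-contained illustration in Python (using NumPy, and unrelated to any particular legal AI product): a model with enough freedom to memorize its ten training points reproduces them perfectly, then falls apart the moment it's asked about anything just outside what it has “studied.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten "training" points from a simple underlying trend (y = 2x), plus noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(scale=0.05, size=x_train.size)

# Overfit model: a 9th-degree polynomial can pass through all ten points,
# effectively memorizing them rather than learning the trend.
memorizer = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# Simpler model: a straight line that captures the general pattern instead.
generalizer = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

x_new = 1.15  # just outside the material the models were trained on
print("memorized model :", memorizer(x_new))    # typically wildly off (true value ~2.3)
print("generalized model:", generalizer(x_new)) # close to the true value ~2.3
```

An overfit language model behaves the same way: flawless on material it has memorized, and confidently wrong the moment a prompt steps outside it.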
As a lawyer, it's unwise to totally dismiss AI tools just because of AI hallucinations. AI is already starting to completely transform the legal industry, and it has the potential to save you inordinate amounts of time and money.
That said, it's imperative that you use AI responsibly and work to prevent AI hallucinations as a lawyer. Your reputation in the legal world depends on it.
These are some of the best strategies for preventing AI hallucinations as a lawyer:
· Use the right AI tool. Make sure you're using the right AI tool. Do your due diligence: figure out who made the tool, who's using it, what data it was trained on, what its limitations are, and what other users' experiences have been. You should have a thorough understanding of how the model works, and of its strengths and weaknesses, before you rely on it for anything.
· Constrain the possibilities. In your prompts, consider constraining the possible outcomes to limit the potential for fabrication. Compare asking someone, “What were you doing last night?” with asking, “Last night, at 9 pm, were you at home or were you at the bar?” In the former question, the responder can answer in an infinite number of ways. In the latter, the responder is constrained to answering with a location at a specific time – and there are only two possible outcomes. In which scenario is the responder more likely to get away with lying? The answer is obvious. Narrow the range of possible answers and you'll inherently limit the AI's capacity to hallucinate.
· Provide details and key information whenever possible. Similarly, you should provide details and key information whenever possible. For example, if you proactively tell ChatGPT or a similar AI model that “Thanos Jackson is a fictional American president I just made up,” it will not attempt to convince you otherwise. This is a crucial strategy for overcoming the “overfitting” problem; if the AI has limited access to data, you’ll need to fill in the gaps.
· Avoid areas of weakness. Many generative AI models are incredibly good at specific tasks, but downright terrible at others. As a simple example, ChatGPT has historically struggled with even rudimentary math problems. It’s your responsibility as a lawyer to understand the critical strengths and weaknesses of your AI engine, so you can deliberately avoid areas of weakness.
· Provide an easy out. You should also consider giving the AI an easy out. For example, you can prompt it with, “If you don’t know the answer to this question, just say ‘I don’t know.’” Or you can say, “If there are no cases that match this description, tell me so.”
· Ask for exclusions. In some situations, it may make sense to ask for exclusions in the responses your AI is about to provide. For example, if you're asking about historical figures, you can include in your prompt something like “please exclude mentions of any fictional or literary characters with no basis in real history.” You can also constrain the results in some contexts, such as asking for only examples from a specific database.
· Tinker with the settings (when available). With some AI models, you may have access to settings that help you control outputs and minimize the occurrence of AI hallucinations. For example, developers working with the OpenAI API (rather than the standard ChatGPT interface) can set a parameter known as “temperature.” The lower the temperature, the more reliable and predictable the responses; the higher the temperature, the more experimental and creative they become. There are use cases for both ends of this spectrum, but if you want to reduce the prevalence of AI hallucinations, keep the temperature low (see the sketch after this list).
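To show how several of these strategies combine in practice – a constrained request, explicit exclusions, an easy out, and a low temperature – here is a minimal, hypothetical sketch using the official OpenAI Python client. The model name and prompt wording are placeholders, not recommendations, and nothing a model returns this way should be treated as verified:

```python
# Minimal sketch, assuming the official OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY set in the environment. Adapt to whatever vetted tool
# your firm actually uses.
from openai import OpenAI

client = OpenAI()

prompt = (
    "List federal appellate decisions from 2015-2020 discussing the Montreal "
    "Convention's limitations period.\n"
    "Constraints:\n"
    "- Only include cases you can give a full reporter citation for.\n"
    "- Exclude any case you are not certain actually exists.\n"
    "- If you cannot identify any matching cases, reply exactly: No cases found."
)

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder; use whichever model you have evaluated
    temperature=0.1,  # low temperature = more conservative, less "creative"
    messages=[
        {
            "role": "system",
            "content": "You are a cautious legal research assistant. "
                       "Never invent cases, quotes, or citations.",
        },
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
# Whatever comes back, every citation still gets verified independently
# in Westlaw or LexisNexis before it goes anywhere near a filing.
```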
All the above strategies can help prevent AI hallucinations, but the following should be treated as absolute, unbreakable rules that every lawyer must follow when using AI.
· Only use AI to accelerate your work – not to do it for you. AI is not going to replace human lawyers, at least not anytime soon. Don't treat it as if it is. AI should be used to assist and accelerate your work, not to do it for you. Your work and your ideas need to come from your human brain. In this pursuit, the best way to think about AI is as an inexperienced, unreliable assistant: helpful, but in need of guidance and supervision.
· Practice full, accurate disclosure. Maintaining high ethical standards should be one of your highest priorities, so always practice full, accurate disclosure. If you use AI tools in your work, make sure everyone knows about it.
· FACT CHECK! Finally, and perhaps most importantly, fact check everything – then double- and triple-check it. If you follow the rule of only believing what the AI has generated once you have verified it independently, you will never have to worry about falling victim to an AI hallucination. Yes, this is going to take some extra time, but it's essential if you want to ensure you're presenting correct information.
Legal AI isn't going away.
Nor should it.
In the right hands, with the proper guidance, AI for lawyers is empowering, accelerative, and inordinately efficient. The only catch is that you can't totally rely on it to do all your work for you.
As long as you choose the right AI tool and you're willing to supervise and review whatever it generates, you can avoid falling victim to AI hallucinations – and potentially improve your career in ways you could only dream about.
It’s natural to feel a bit skeptical, especially if you just finished reading this article about how AI is sometimes wrong – and especially if you’ve never used a legal AI tool before.
But believe us, AI has the power to transform your work for the better.
Scratch that. Don’t believe us.
See it for yourself!
Sign up for a free trial of the ultimate artificial intelligence (AI) platform for lawyers and law firms today!
Samuel Edwards is CMO of Law.co and its associated agency. Since 2012, Sam has worked with some of the largest law firms around the globe. Today, Sam works directly with high-end law clients across all verticals to maximize operational efficiency and ROI through artificial intelligence. Connect with Sam on LinkedIn.