Samuel Edwards
April 11, 2024
Legal AI has the power to massively improve your speed, quality, and efficiency in the practice of law.
It can help you with research, document review, document drafting, brainstorming, and much, much more.
But legal AI isn't a lawyer.
It doesn't have a human brain. It needs a lawyer with a human brain to operate it, give it the right prompts, and design the conditions it needs to do what it does best.
In other words, it's on you to phrase your legal AI inquiries and engineer the prompts that let you make the most of this generative AI engine.
How do you ask your legal AI questions the right way?
And why is it so important?
Let's start by addressing why not all legal AI prompts are equal.
If generative AI had crossed the threshold of technological singularity, it might be so ridiculously intelligent that it could answer any prompt, or even predict your prompts before you type them. But for right now, legal AI does have some weaknesses and limitations.
Legal AI, like most generative AI platforms, is extremely good at processing data, recognizing patterns, answering basic, specific questions, and applying repetitive formulas to predictable scenarios.
But these are the areas where legal AI is currently held back:
· Data accessibility. Generative AI platforms are typically trained on an existing, external set of data. Different types of AI platforms are also given access to different data archives, depending on the intention of the tool and the environment in which it was created. Legal AI systems, for example, often have access to case databases so they can reliably answer questions about historical cases and judgments. As you might imagine, the capabilities of any AI system are limited by the data it can access. If the AI can't access a piece of data, that data might as well not exist, creating blind spots that inattentive human users can easily overlook.
· Reliability and accuracy. While generative AI platforms are more advanced than they've ever been and genuinely impressive to the general population, they do occasionally struggle with accuracy and reliability. Famously, legal AI platforms have cited cases that never existed, and while rare, it is possible for AI to get basic facts and "common sense" details of our world wrong. Most of these errors can be caught with a simple round of proofreading, but it's still important to acknowledge that legal AI isn't perfect.
· Lack of context and experience. No legal AI platform has the educational background or experience that you do. Without proper context, there are limitations to how it can present legal knowledge. And without a proper understanding of your intentions, it may not be able to answer your prompts effectively.
· Biases and innate perspectives. Many philosophers have raised concerns about the possibility of AI entrenching and reinforcing cognitive biases, whether acquired from their creators or emerging naturally from the algorithmic learning processes that allowed them to develop. While many generative AI platforms, including most legal AI platforms, have been built with guidelines and restraints to minimize the influence of bias, these safeguards aren't always perfectly effective.
· Privacy, transparency, and ethics. Finally, we need to consider potential issues associated with privacy, transparency, and ethics in the realm of legal AI. For now, the best path forward for most lawyers is the safest one, which is properly disclosing AI use, being as thorough as possible when using AI, and creating an audit trail to justify your responsible AI use. If you can demonstrate that you practiced the most diligent and responsible forms of legal AI use possible, you'll be much less likely to be held liable for damages caused in the wake of that AI use.
The only way to compensate for these weaknesses is to have a capable, thinking human being asking the right questions and making good use of the answers.
In other words, the path to responsible tool use is becoming a more knowledgeable, capable tool user.
Prompt engineering is the solution.
As the admittedly fancy name implies, prompt engineering is the practice of deliberately orchestrating questions and prompts for legal AI systems so that you can get the best, most reliable answers possible.
Instead of maintaining a simple, borderline impulsive conversation with your legal AI tool, you'll be thoughtful, deliberate, and methodical about your phrasing.
It’s much simpler than the name implies. You don’t need a degree for this. You don’t need weeks of training. Instead, you just need greater awareness of the strengths and limitations of AI – and some familiarity with core principles that will lead you to success.
It’s currently estimated that legal AI could automate up to 44 percent of all legal tasks.
That's very impressive. Imagine freeing up 44 percent of your schedule and simplifying some of the most tedious, least interesting tasks on your plate.
But that's only going to be a benefit if you can ensure those tasks are completed effectively.
Proper legal AI prompt engineering can help you in this respect, by achieving the following goals:
1. Get better answers. Putting in forethought and phrasing your questions deliberately is one of the best ways to get better answers to your questions – as well as better document drafts, better reviews, and better analytics output. Some of the documents you draft and questions you ask will be simple and boilerplate, but it's important to put in the effort where it's practically required.
2. Reduce the risk of mistakes. Superior legal AI prompt engineering also reduces the risk of mistakes. Errors with generative AI often pop up because of gaps between the question being asked and the answers available. If you constrain the possibilities sufficiently, there will be less wiggle room for inaccuracies to appear. Simultaneously, because your prompts will be so specific and targeted, you'll be able to notice errors and hiccups much more easily.
3. Save time. It may seem like legal AI prompt engineering undercuts the time-saving nature of legal AI, since you'll need to spend extra time carefully crafting your AI questions. That's true on the surface, but in the long run it should actually save you time. Spending even a few extra minutes polishing a question or prompt can get you a much more suitable answer on the first try. Instead of wading through insufficient answers or repeatedly writing follow-up prompts, you get what you need almost immediately.
So what specific strategies can you use to ask “better” legal AI questions?
How do you practice superior legal AI prompt engineering?
Be Clear and Specific
Clarity and specificity will take you a long way. Modern generative AI engines are extremely good at understanding human language, so they can usually get the gist of what you're saying no matter how you phrase it. However, you'll get better answers much more reliably and quickly if you make a proactive effort to add clarity and specificity to your prompts.
A simple prompt like “what are relevant laws and regulations for [legal issue]?” is a decent enough place to start. But you can get better answers by adding specificity. Laws and regulations from where? From when? For what purpose? And what details do you need?
It’s almost impossible to be too specific for legal AI, so err on the side of specificity. The worst that can happen is that the AI returns nothing useful, in which case you can rephrase the prompt and try again.
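If you reach your legal AI through a script or an API rather than a chat window, the contrast is easy to see in code. Below is a minimal sketch comparing a vague prompt to a specific one; the jurisdiction, industry, and date are illustrative placeholders, not recommendations.

```python
# A minimal sketch contrasting a vague prompt with a specific one.
# The facts (jurisdiction, industry, date) are illustrative placeholders.

vague_prompt = "What are relevant laws and regulations for data privacy?"

specific_prompt = (
    "List the federal and California statutes and regulations that govern "
    "consumer data privacy for a mid-sized e-commerce company, current as of "
    "April 2024. For each one, give the citation, the enforcing agency, and a "
    "one-sentence summary of the obligations it imposes."
)

# The specific version pins down jurisdiction, time frame, audience, and the
# exact details you want back, leaving far less room for a generic answer.
# Send specific_prompt to whatever legal AI tool you actually use.
print(specific_prompt)
```

The same discipline applies whether you type the prompt into a chat box or assemble it in code: the more variables you pin down, the less the AI has to guess.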
Provide Context and Intention
Next, it's important to provide context and intention. Sometimes, this can have a powerful effect on the outputs you receive. If you're looking for example cases relevant to a particular topic, you can narrow down the list and get much more useful examples if you explain what you're trying to accomplish.
This is especially helpful for document drafting and legal research. On the document drafting side of things, your legal AI engine can create much more customized, much more relevant documents if it knows exactly what your goals are. On the legal research side of things, your legal AI tool can help filter out extraneous, unnecessary details so you can better focus on the details that matter.
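As a rough illustration, here is what folding context and intention into a drafting prompt might look like if you build prompts in a simple script. The matter facts and variable names are invented for illustration only.

```python
# A sketch of adding context (who you represent, what the matter is) and
# intention (what you plan to do with the output) to a drafting prompt.
# All facts below are invented for illustration.

matter_context = (
    "I represent a small software vendor negotiating a SaaS agreement with a "
    "large hospital system. My client's main concerns are capping liability "
    "and avoiding open-ended indemnification."
)

intention = (
    "Draft a limitation-of-liability clause I can propose as a starting point "
    "in negotiations, written in plain, moderately vendor-friendly language."
)

prompt = matter_context + "\n\n" + intention
print(prompt)
```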
Impose Answer Constraints
AI works much better within clearly defined constraints. That might seem counterintuitive, since most human beings work more creatively when fewer restraints are present. But given unlimited room for creativity and flexibility, AI has a tendency to go off the rails.
You'll typically see much better results if you dial in your expectations for the answer. Even simple limitations, like length requirements for document drafts, can help ensure concision and focus. You can also ask your legal AI tool to exclude certain types of results, list examples in a certain order, or organize results in a specific way so it's easier for you to sort through.
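Here's a short sketch of what layered constraints might look like in practice. Every constraint in the list is an assumption chosen for illustration; swap in whatever limits fit your matter.

```python
# A sketch of appending explicit constraints (length, date range, exclusions,
# ordering, format) to a research prompt.

base_request = (
    "Summarize the leading appellate decisions on non-compete enforceability "
    "in New York."
)

constraints = [
    "Keep the summary under 400 words.",
    "Only include decisions from 2010 onward.",
    "Exclude unpublished and trial-level opinions.",
    "Order the cases from most recent to oldest.",
    "Format each entry as: citation, then a two-sentence holding.",
]

prompt = base_request + "\n\nConstraints:\n" + "\n".join(f"- {c}" for c in constraints)
print(prompt)
```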
Showcase Examples
In some cases, it's helpful to provide your legal AI with examples of what you're looking for. Remember, at their core, generative AI engines are pattern recognizers. They're exceptionally good at identifying and replicating patterns they find in large datasets; in fact, this is the only reason why generative AI engines are capable of producing seemingly natural language output.
Just as more text examples make generative AI engines better at producing text, more examples of your goal outputs can steer your AI engine toward outputs closer to those ideals.
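In the prompt engineering world, this technique is often called few-shot prompting: you show the model one or two worked examples of the output you want before asking for a new one. The clause summary below is a simplified, invented example, not model language.

```python
# A minimal few-shot sketch: demonstrate the output format with one example,
# then ask for a new item in exactly that format.

example = (
    "Example clause summary:\n"
    "Clause: Confidentiality\n"
    "Risk level: Low\n"
    "Why: Mutual obligations and a standard two-year survival period."
)

new_request = (
    "Now summarize the following indemnification clause in exactly the same "
    "format (Clause / Risk level / Why):\n\n"
    "[paste the clause text here]"
)

prompt = example + "\n\n" + new_request
print(prompt)
```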
Walk Through Individual Steps
Sometimes, the right move is to walk the AI through a sequence of iterative steps. This is partly because AI is better able to handle specific, narrow tasks and requests; rather than asking an AI to outline a broad topic like how World War II developed, you’ll be much more successful asking it to describe individual elements relevant to WWII, like the rise of fascism in Europe or the events of the D-Day invasion.
Consider breaking your requests down to these individual steps, whenever you can. It might take a bit more time and effort, as well as some final organizing/editing on your part, but you’ll likely get much better results overall.
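If you script your interactions, the same idea looks like a short loop: break the broad request into narrower prompts and feed each answer into the next one. The ask_legal_ai function is a hypothetical stub standing in for your own tool, and the steps themselves are invented for illustration.

```python
# A sketch of decomposing one broad request into a sequence of narrower
# prompts, carrying each answer forward as context for the next step.

def ask_legal_ai(prompt: str) -> str:
    """Placeholder: swap this stub for a real call to the legal AI tool you use."""
    return f"[answer to: {prompt[:60]}...]"

steps = [
    "Step 1: List the elements a plaintiff must prove for breach of contract "
    "under Texas law, with citations.",
    "Step 2: For each element, identify the facts in the attached complaint "
    "that arguably support or undercut it.",
    "Step 3: Draft a one-page summary of the two weakest elements and the "
    "additional evidence that would strengthen them.",
]

answers = []
for step in steps:
    # Include earlier answers so each narrower prompt builds on the last one.
    context = "\n\n".join(answers)
    prompt = (context + "\n\n" + step) if context else step
    answers.append(ask_legal_ai(prompt))
```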
Request Rewrites and Alternative Versions
AI doesn’t have feelings. It’s not going to be offended if you don’t like what it produced.
Accordingly, you should exercise your ability to request rewrites and alternative versions of whatever it initially created. This is obviously true if you find something wrong with the original material, but it’s also a worthwhile strategy even if you’re mostly satisfied with the initial draft. You never know when a rewrite could illuminate something important or provide you with a useful alternative perspective.
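A rewrite request can be as simple as the sketch below: hand back the first draft and spell out the alternatives you want to see. The clause type and instructions are illustrative assumptions.

```python
# A sketch of asking for alternative versions of a draft the AI already produced.

first_draft = "[paste the AI's first draft of the clause here]"

rewrite_prompt = (
    "Here is a draft termination-for-convenience clause:\n\n"
    + first_draft
    + "\n\nRewrite it two ways: (1) a more customer-friendly version with a "
    "longer notice period, and (2) a tighter version suitable for a short-form "
    "agreement. Keep each under 150 words and briefly note what you changed."
)
print(rewrite_prompt)
```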
Triangulate With Similar Questions
If you want to increase the accuracy of your AI’s output (as well as your confidence in that output), consider “triangulating” by asking similar, yet distinct questions. Police officers sometimes do this during interrogations to ferret out inconsistencies; for example, they might ask “where were you last night at 8 pm,” then follow up with a related question like “what were you up to last night?” If the answers combine to form a coherent picture, you can generally trust that picture. But if the answers conflict or don’t add up, it’s a sign that something may have gone wrong – or that there are too many ambiguities or nuances for the AI to handle.
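In code, triangulation is just two overlapping questions and a side-by-side comparison that you perform yourself. The questions below are illustrative, and ask_legal_ai is again a hypothetical stub for your own tool.

```python
# A sketch of "triangulating": ask two similar but distinct questions, then
# compare the answers by hand for consistency.

def ask_legal_ai(prompt: str) -> str:
    """Placeholder: swap this stub for a real call to the legal AI tool you use."""
    return f"[answer to: {prompt[:60]}...]"

question_a = (
    "Under the FLSA, is a salaried paralegal who regularly works 50-hour weeks "
    "entitled to overtime pay?"
)
question_b = (
    "Which FLSA exemptions, if any, could plausibly apply to a salaried "
    "paralegal, and how have courts generally treated them?"
)

answer_a = ask_legal_ai(question_a)
answer_b = ask_legal_ai(question_b)

# If the two answers tell a consistent story, your confidence goes up.
# If they conflict, treat that as a flag to check the primary sources yourself.
print(answer_a)
print(answer_b)
```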
Experiment and Iteratively Improve
Be willing to experiment with the way you phrase your legal AI questions. As you get more practice and experience, you’ll learn subtleties of effective and ineffective questions – and you’ll gradually develop an AI collaboration style that suits you perfectly.
AI is bound to improve iteratively in the future, and perhaps at a rate that none of us is truly ready for. If you want to continue pushing the limits of what legal AI can do, without ever putting the quality or consistency of your work in jeopardy, you need to be ready to evolve with it. That means keeping an open mind, staying on top of the latest trends and developments in AI, changing up your prompt engineering practices, and tinkering with your workflow to better integrate AI into your day-to-day responsibilities.
This is a process, and it pays to treat it accordingly.
Always Verify
Better prompt engineering and thoughtful legal AI questions can help you make the most of any generative AI engine. You'll maximize the strengths of these legal tools and minimize the weaknesses, reducing the possibility of error, miscommunication, and even misinterpretation on your part.
That said, there are always risks associated with legal AI, and there always will be. No matter how reliable the tool you're using seems to be, no matter how diligent and thorough you are in your prompt engineering, and no matter how much confidence you have in the initial answers you see, it's important to verify the output material.
You are the lawyer, and your legal AI doesn't have the knowledge and expertise that you have.
Double-check everything; the integrity of your career depends on it.
Legal AI isn't perfect.
But it sure is impressive at boosting productivity and enabling new possibilities in law firms like yours.
Once you master the art of effective prompt engineering and start asking legal AI better questions, you'll save time, streamline your workflows, and ultimately improve profitability.
AI may never be good enough to replace human lawyers fully, but it can certainly become a nearly perfect legal assistant.
And in your hands, with a bit of practice, it could become even more powerful.
If you’re ready to see our state-of-the-art legal AI in action, sign up for our free demo today!
Samuel Edwards is CMO of Law.co and its associated agency. Since 2012, Sam has worked with some of the largest law firms around the globe. Today, Sam works directly with high-end law clients across all verticals to maximize operational efficiency and ROI through artificial intelligence. Connect with Sam on LinkedIn.