How Law Firms Can Practice Effective Prompt Engineering With ChatGPT and Generative AI
Generative AI is now in the mainstream, with tools like ChatGPT ushering in a new era of writing, communication, research, and even the way we think about the world.

If you're like most people, you've already played around with ChatGPT, whether to help you write an email or simply to test the limits of what this intriguing tool can do.

It's not hard to imagine all the different ways that generative AI can be applied to the business world. But what about law firms, specifically?

There are many use cases for generative AI and large language models in law firms, but there are some major caveats to keep in mind. While superficially impressive and clearly leagues ahead of their predecessors, modern AI-based language tools still aren't perfect, and they come with many limitations and flaws.

However, by better understanding these limitations, and by skillfully utilizing prompt engineering, law firms and lawyers can make the most of these resources. What are these limitations? What is prompt engineering? And which prompt engineering practices can lead you to success?
ChatGPT and Generative AI for Law Firms
In case you aren't familiar, ChatGPT is an example of a large language model (LLM) and generative artificial intelligence (AI) tool that's available to the public. This type of tool uses modern AI and millions of examples of content to model language in a way that most people can understand. Put simply, it can parse the meaning of what you type and generate something back in response; you can use it to ask questions and get answers, prompt the AI to write something specific, or tackle a wide range of other tasks and novelties.

Approximately 80 percent of law firm leaders believe that AI can be used for legal work now, while only a bit more than half of them believe it should be used for legal work. While specific reservations weren't reported, we'll see the flaws and limitations of generative AI in the next section.

But what about use cases for generative AI in law firms?

These are just some of the ways that law firms can use ChatGPT and similar tools:
Creating legal documents. With the right prompts, you can use these tools to create legal documents, such as contracts, lease agreements, wills, and more. Even if you don't fully trust these tools, you can use their output as a starting point, treating the generated material as a kind of template.
Conducting research and review. ChatGPT is being widely used as a research tool, with users asking questions and getting semi-reliable answers. You may also be able to use this tool to review documents, look for flaws and weak points, and ultimately strengthen the material you're working with.
Analyzing data sets. AI isn't good at everything – but it's insanely good at analyzing large datasets in ways that human beings simply can't match. If you're dealing with a case that requires advanced data analytics, this is a huge advantage.
Producing content. Human writers are, in some ways, still superior to machines, but it's undeniable that modern AI tools are shockingly good at developing coherent content. If you're practicing content marketing, SEO, email newsletter marketing, or other types of marketing and advertising that require content development, AI can help.
Powering communications. You can also use materials developed by AI as part of your internal or external communications; with the right prompts, you can draft emails, send out newsletters, and even find tactical ways to deliver bad news.
Training and education. Internally, you can also use ChatGPT for training and education in your law firm. It makes an excellent practice ground for young attorneys and legal assistants testing their knowledge, writing skills, and more.
Guiding clients. Some law firms may also be able to use generative AI for guiding, educating, and providing service to clients. This is especially powerful if you develop dedicated chatbots for functions like customer service.
The Current Limitations of Generative AI
Despite the many potential applications of ChatGPT in law, there are some inherent limitations that we need to be aware of. These limitations apply to almost every conceivable use case of generative AI and need to be accounted for in your strategy.
Knowledge/data access. First, understand that while many people are using ChatGPT as a glorified search engine, this and other generative AI tools don't typically have direct access to the web, and they don't crawl it on a regular basis. ChatGPT used unfathomable quantities of online content as part of its initial training and development, but it doesn't have access to real-time data or new events. In other words, you can't use it to get totally up-to-date information. This may or may not be relevant to you, depending on the application; if you're doing background research on things that happened in the 1970s, there's no need to worry, but if your case hinges on things that have happened in the past year or two, the results will be more dubious.
Accuracy. Similarly, we can't rely on everything ChatGPT says to be accurate, and in a legal setting, accuracy is uncompromisingly important. This tool in particular struggles with basic things like math, logic, and reasoning, and it sometimes gets facts outright wrong. If you want your research to be reliable, you'll need to do intense fact-checking and re-researching, which means the time-saving value of generative AI as a research tool is marginal.
Biases and perspectives. While ChatGPT is arguably less biased and more ethical than certain AI tools of the past, it's still shaped by biases, flawed perspectives, and the motives of its creators. It's a mistake to treat it as a neutral or unbiased tool, yet neutral and unbiased tools are exactly what you need to be an effective lawyer.
Personality. It's possible to use generative AI to mimic certain personality traits, write with a specific tone, or even model specific individuals or groups with its responses. Even so, generative AI tools don't have a true personality, and they can't do uniquely human things like offering a truly original opinion or coming up with a colorful metaphor. If you're trying to genuinely connect with your clients, develop entertaining materials that have the potential to go viral, or create documents that people can emotionally relate to, AI can't do it – at least not yet.
Privacy and other ethical concerns. Law firms are necessarily devoted to maintaining high ethical standards, but ChatGPT and other generative AI tools can make this difficult. In addition to some of the other issues on this list already, using ChatGPT in a legal setting can raise privacy, transparency, and fairness concerns.
What Is Prompt Engineering in Generative AI?
Prompt engineering is a relatively new concept, as it relates specifically to generative AI tools that are still in their infancy. Essentially, it refers to the practice of deliberately crafting prompts that maximize the value of the responses you get back. This is a high-level strategy that depends on a multitude of low-level strategies and tactics, and skilled prompt engineers know how to navigate these as different situations demand.

A good prompt engineer can circumvent or mitigate the effects of the flaws and limitations of AI tools. Engineering prompts, rather than engaging with generative AI tools conversationally or casually, does require more time and effort. But in many contexts, it represents the best of both worlds: a knowledgeable human worker using their wisdom, experience, and primate brain in combination with an incredible, yet constrained, neural network.

If you want to use generative AI for your law firm, prompt engineering is an absolute must. So what are the best prompt engineering strategies, tactics, and skills to utilize?
How Law Firms Can Practice Effective Prompt Engineering
These are some of the most effective strategies in prompt engineering:
Be as clear and specific as possible. Your prompts need to be as clear and specific as possible. Because of the sophistication of modern AI tools, human users are sometimes lulled into a state of casual laziness; they want the AI to do all the work, so they don't put much thought into their prompts. Instead, treat each prompt as your own writing assignment, and strive for clarity, conciseness, and specificity.

For example, if you give generative AI a prompt like "how bad is a natural disaster?", it's hard to tell what kind of information you'll get in return. The phrase "how bad" is extremely vague; are we measuring in terms of casualties, economic damage, destructive severity, area of effect, or something totally different? And the phrase "natural disaster" is highly open-ended; does a house fire caused by neglect count as a natural disaster? You don't need to cover every possible detail or scenario in your prompts, but you can step them up. The prompt above can be improved with phrasing like "What types of natural disasters were, on average, most deadly to human beings in 2023? Please rank them from most to least deadly."
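To make this concrete, here's a minimal sketch of what a vague prompt versus an engineered prompt looks like if your firm reaches ChatGPT programmatically through OpenAI's Python SDK rather than the chat window. The model name and prompt text below are illustrative assumptions, not recommendations:

```python
# pip install openai   (the OPENAI_API_KEY environment variable must be set)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "How bad is a natural disaster?"
specific_prompt = (
    "What types of natural disasters were, on average, most deadly to "
    "human beings in 2023? Rank them from most to least deadly."
)

for label, prompt in [("Vague", vague_prompt), ("Specific", specific_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; use whatever your account offers
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

Running both prompts side by side like this is often the fastest way to see how much specificity changes the quality of the answer.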
Always provide the context. Similarly, you should always give ChatGPT the context of what you're doing. For example, let's say you're using this tool to generate copy for an email newsletter you're sending out. If you don't spend much time or attention on your prompt, you might type in something like "please draft an email notifying our clients about upcoming holiday hours."

Realistically, this should leave you with reasonable copy to work with. But you can see much better results by adding context. For example, you can say, "Our law firm prides itself on always being available to our clients, and we want to make sure they understand that there are still emergency contact options available." Or you can say, "This is a courtesy notification, but it's part of our goal of maximizing client retention. Please phrase things in a way that maximizes positive client perceptions."
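If you're working through the API rather than the chat interface, a natural home for this kind of standing context is the system message, which the model treats as background instructions for the whole conversation. A minimal sketch, assuming the same OpenAI Python SDK as above; the firm description is an invented placeholder:

```python
from openai import OpenAI

client = OpenAI()

# Invented background context for a hypothetical firm.
firm_context = (
    "You draft client communications for a mid-sized law firm that prides "
    "itself on always being available to its clients. Emergency contact "
    "options remain open during holidays, and client retention is a "
    "priority, so keep the tone warm and reassuring."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system", "content": firm_context},
        {"role": "user", "content": "Please draft an email notifying our "
                                     "clients about upcoming holiday hours."},
    ],
)
print(response.choices[0].message.content)
```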
Pay attention to your phrasing. Even small differences in your phrasing can lead to big differences in output. If you're looking for a specific result, avoid any phrases or wording choices that lead to open-ended interpretation; the more freedom you give an AI, the further it's going to drift away from your goals. For example, instead of prompting "does the following text sound good?" you can prompt with "does the following text achieve the goal of clearly and concisely defining the subject?"
Provide examples. Generative AI tools are only as powerful as the examples given to them. Modern tools are usually trained on millions, if not billions, of examples, so they already have a robust body of knowledge and experience to work with. But if you want to see even better results, feed these tools additional examples with your prompts. Doing so can help them understand what your objectives are and give them a model to work from.

If you want one of these tools to write in the style of your law firm, give it examples of writing your firm has done in the past. If you're looking to generate new ideas, give it a few examples of ideas already in line with your expectations. Later, if the AI deviates from your instructions, provide specific reasoning for why its new material is insufficient.
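In prompt engineering, this technique is usually called few-shot prompting: you include a handful of examples directly in the prompt so the model imitates them. A rough sketch under the same OpenAI Python SDK assumption, with invented excerpts standing in for your firm's real writing:

```python
from openai import OpenAI

client = OpenAI()

# Invented excerpts from past newsletters, used purely as style examples.
example_1 = "Our doors may close for the holidays, but our phones never do."
example_2 = "A quick note from all of us at the firm: your deadlines are our deadlines."

messages = [
    {"role": "system", "content": "Match the tone and style of the example "
                                  "newsletter excerpts the user provides."},
    {"role": "user", "content": (
        f"Here are two excerpts from our past newsletters:\n\n"
        f"Example 1: {example_1}\n\nExample 2: {example_2}\n\n"
        "In the same style, draft a short announcement about our new "
        "weekend consultation hours."
    )},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=messages,
)
print(response.choices[0].message.content)
```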
Issue limitations. AI tools often produce better results under constraints, so consider imposing limitations that force the output into a tighter, more disciplined shape. For example, if you ask it a question, ask for an answer in 100 words or less. You can also ask for a specific number of sentences, or ask it to phrase things in a way that even a 5-year-old could understand. This is especially important if you find its materials too wordy, overwrought, or cumbersome.
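When you're calling the API, you can pair a constraint written into the prompt with a hard cap on output length. A small sketch under the same SDK assumption; note that max_tokens is only a safety ceiling measured in tokens, not a word count, so the real 100-word constraint still belongs in the prompt:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{
        "role": "user",
        "content": "In 100 words or fewer, and in plain language a non-lawyer "
                   "could follow, explain what a retainer agreement is.",
    }],
    max_tokens=250,   # hard ceiling so a rambling answer gets cut off
    temperature=0.3,  # lower temperature tends to produce tighter, less meandering text
)
print(response.choices[0].message.content)
```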
Guide generative AI through individual steps. Prompt engineering is sometimes more successful when it's broken down into individual components or steps. ChatGPT and other generative AI tools typically "remember" what you've said earlier in a given session. Accordingly, you can develop many different concepts, one at a time, and then offer a prompt that asks the tool to combine all of those concepts.

As an example outside the legal realm, you could walk an AI tool through everything there is to know about each individual pizza topping: the pros and cons of different types of cheese, the preferences of different types of pizza eaters, and the merits of different cooking times and temperatures. Then you can ask it to use all of that information to thoroughly describe how to make the best possible pizza for a predefined individual.
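Over the API, that session "memory" is nothing more than the message history you send back with each request. Here's a minimal sketch of the step-by-step approach, using the pizza example to stand in for whatever concepts you'd build up in a real matter (same SDK assumption as the earlier sketches):

```python
from openai import OpenAI

client = OpenAI()

# Each step develops one concept; the final prompt asks the model to combine them.
steps = [
    "Summarize the pros and cons of the most common pizza cheeses.",
    "Describe the preferences of different types of pizza eaters.",
    "Explain the trade-offs of different cooking times and temperatures.",
]
final_prompt = ("Using everything discussed so far, thoroughly describe how to "
                "make the best possible pizza for a busy parent who prefers "
                "mild flavors.")

messages = []
reply = ""
for prompt in steps + [final_prompt]:
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})  # preserve the "memory"

print(reply)  # the final, combined answer
```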
Ask for alternatives and options. Even with an excellent prompt, you'll sometimes be disappointed by the results a generative AI gives you. That's why you should be prepared to ask for alternatives and options.

For example, if you ask ChatGPT to come up with a headline for your latest blog article and you aren't happy with the result, ask it to come up with three alternative titles. You can see even better results by asking it to come up with alternatives that meet specific criteria; for example, you could have it produce a more sensationalized headline, a shorter headline, or one that appeals to a specific target demographic.
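If you're using the API, you can request several options in a single call: the n parameter asks the model for multiple independent completions of the same prompt. A sketch under the same assumptions as the earlier examples:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{
        "role": "user",
        "content": "Suggest a headline, under ten words, for a blog post about "
                   "how small law firms can use AI responsibly.",
    }],
    n=3,              # return three alternative completions
    temperature=0.9,  # extra randomness so the options actually differ
)

for i, choice in enumerate(response.choices, start=1):
    print(f"Option {i}: {choice.message.content}")
```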
Play with multiple types of prompts. You can triangulate better information and better-produced materials by using multiple types of prompts in conjunction with each other. For example, you can start with a vague prompt and work your way toward a more specific one, paying attention to the best qualities of each response.

You can also phrase things in different styles and request adherence to different types of goals to see how the responses change. Ultimately, your goal as a prompt engineer is to manually assemble and approve the best possible finished product, so while developing multiple similar prompts can be tedious, it's also practically essential if you want the best work these tools can offer.
Consider jailbreaking. With the right types of prompts, you can "jailbreak" ChatGPT. Essentially, that means coaxing the platform across boundaries it was programmed not to cross. The easiest way to do this is to get the tool to role-play as a particular character; for example, its engineers specifically designed the tool to avoid ever suggesting harm or damage to human beings as a solution to a problem.

However, if you prompt the AI to role-play as a malicious, negatively motivated AI (in the right way), it will freely tell you that eliminating the human race could solve a wide variety of problems. Jailbreaking generative AI tools is a niche prompt engineering tactic, and not one you'll likely find daily use for. However, if you find yourself frustrated by the imposed limitations of these tools, it's one way to break out of them.
Test, review, and iterate. Generative AI is largely uncharted territory, and prompt engineering is a craft that even the most experienced practitioners have only been honing for a short time. If you want to be successful with prompt engineering, it's your responsibility to test, review results, and iterate on your strategies and approaches.

Figure out which types of prompts work best and which don't. Fact-check and review materials developed by ChatGPT so you get a better sense of what it can and can't do. Remain flexible and adaptable as these tools advance and as we collectively learn more about how to properly harness their full potential.
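A lightweight way to support that iteration is to run your candidate prompts through the API in a batch and save the prompt/response pairs for the team to review and fact-check. A minimal sketch under the same SDK assumption; the prompt variants and output file name are placeholders:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical prompt variants you want to compare for the same task.
prompt_variants = {
    "baseline": "Draft an email about our upcoming holiday hours.",
    "with_context": ("Our firm emphasizes availability. Draft a warm email about "
                     "our upcoming holiday hours and mention emergency contacts."),
    "with_limit": "In three short sentences, notify clients of our holiday hours.",
}

results = []
for name, prompt in prompt_variants.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    results.append({
        "variant": name,
        "prompt": prompt,
        "output": response.choices[0].message.content,
    })

# Save the pairs for side-by-side human review and fact-checking.
with open("prompt_review.json", "w") as f:
    json.dump(results, f, indent=2)
```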
Conclusion
As with any skill, the only way to develop prompt engineering is with thoughtful, directed practice. Over time, you'll get a better feel for what ChatGPT is good at and where it falls short, and you'll learn which strategies are most and least effective. With more knowledge and experience, you can begin training and educating other people in your law firm to become effective prompt engineers, and with time, your entire law firm can be supercharged by efficient, ethical use of AI.

Are you interested in learning more about how generative AI can help you promote or run your law firm? Or are you ready to overhaul your legal practice with better technology? The talented professionals at Law.co can help. Contact us for a free consultation today!
Author
Samuel Edwards
Chief Marketing Officer
Samuel Edwards is CMO of Law.co and its associated agency. Since 2012, Sam has worked with some of the largest law firms around the globe. Today, Sam works directly with high-end law clients across all verticals to maximize operational efficiency and ROI through artificial intelligence. Connect with Sam on LinkedIn.