Derek Bryan
October 18, 2024
As a first-year law student, one of the first things we were told in orientation week was that there are no shortcuts. Law school is notoriously hard, with a steep learning curve, mountains of coursework, confusing material, and competitive pressure. For more than a decade, students have had online resources like prewritten case briefs available to help them expedite things – but these have always been mildly to moderately discouraged, as they preclude you from developing the skills necessary to become a competent lawyer in your own right.
But the availability and power of technology in the legal world are changing in ways that force us to rethink the entire legal profession. Namely, legal AI has entered the fray.
Law professors are not ignorant of this; they anticipate that students will try to use legal AI, as well as more general AI tools, in support of their academic work, hoping to save time and get better grades in the process. My school, ostensibly like many schools, generally discourages the use of legal AI in academic matters – and some types of AI use are strictly prohibited by the honor code and could get you kicked out of the school.
Why is it that law students are told not to use AI?
And why is it that, sometimes, they do it anyway?
Legal AI has become incredibly powerful – and quite popular among lawyers and law firms across the country. Depending on which tool you're using, legal AI can equip you with features like advanced legal research, which pulls in data from a multitude of robust sources. It can help you summarize complex legal documents in a handful of simple, easily readable sentences. It can help you draft legal documents like memos and contracts, often in a matter of seconds, rather than hours. It can help you with document review and other tedious preparation responsibilities. It can even help you proofread and make recommendations for how to improve your work.
When used responsibly, legal AI has the potential to save you hours of time, expose you to more robust sources, and potentially even boost the quality of your work.
So why can't law students generally use it?
There are a few major reasons why law professors are reluctant to allow their students, particularly new ones, to use AI:
· Development of foundational skills. Before you can be a good lawyer, you need to have certain foundational skills in place. You need to be able to read dense legal materials. You need to be able to think critically and analyze problems, even when they're buried in technical complexities and legal jargon. You need to be able to recognize the human element in various cases and remember the sequences of events within them. You need to be able to put things in your own words, in formats and contexts that lawyers can work with. If you use AI to summarize cases, draft documents, and handle your other responsibilities as a 1L, you'll never get a chance to develop these skills in a live environment. The first year of law school is meant to be tough not as an exercise in cruelty, but as a kind of boot camp training. If you don't develop these foundational skills, you may not be able to develop the advanced skills necessary to establish yourself as a competent lawyer.
· AI limitations and weaknesses. Professors are also concerned about the limitations and weaknesses of legal AI in its current form. We've been told horror stories, repeatedly and by multiple people, about lawyers who blindly relied on AI without checking its output, or who violated ethics rules by using AI in nefarious or opaque ways. It's true that legal AI isn't perfect, and at this stage in our legal careers, we may not have the expertise necessary to properly discern what AI can or can't do for us.
· Accessibility and equality issues. To a lesser degree, schools are concerned about accessibility and equality issues. If legal AI use is allowed, and only some students use it, won't those students have a critical advantage over their contemporaries? How are we supposed to handle students who can't afford the best legal AI tools on the market? How do we contend with the fact that some students won't be able to use legal AI as easily or reliably as their counterparts? When the only tool held by every student in the classroom is a traditional casebook, the playing field is remarkably even. In such a competitive environment, this is vitally important.
· Lack of transparency. Transparency is a massive issue in the AI world. In addition to not knowing exactly how these tools work on the inside, we don't always know who is using AI or how they're using it to produce their work. Hypothetically, it should be permissible for students to use legal AI as a guiding tool, or as a tool to check and review their work. But how exactly would you be able to prove the extent to which a student used legal AI? When generative AI can pass the Turing test, how can we say for sure whether something is AI generated or not?
· An abundance of caution. Finally, I suspect blanket bans on legal AI are a byproduct of an abundance of caution. This is still a relatively new tool in the early stages of development, and law schools based heavily on tradition and uniformity aren't quite ready to expose themselves to this level of novelty, despite its possible advantages.
Nearly all of these points are completely valid and understandable, especially considering that most professors, practicing lawyers, and other professionals I've encountered have strongly encouraged us to start using legal AI once we enter the legal profession and get some real experience.
There's also some subtext in university decisions to ban the use of legal AI in academic work.
· Lack of institutional control. There might be a tinge of desire to maintain institutional control at play here. Administrators and other academic professionals may not like the idea of coursework becoming chaotic. If everyone in the classroom is using a different legal AI tool, they may end up approaching their education in radically different ways. This can have long-term consequences not only for the students, but for the university itself.
· Academic undermining. Legal AI also has the potential to undermine the academic process. Law school is heavily about reading and writing complex materials, so why would you bother learning to read and write this way if there's a technological tool that can do it for you?
· Lawyers protecting lawyers. The legal profession is a somewhat insular and completely self-regulated discipline. Lawyers look out for each other and do their best to maintain honor and prestige in the legal world. In some ways, discouraging the use of AI helps keep human lawyers employed and in charge. Currently, lawyer jobs aren't threatened by legal AI, but if we fully embrace legal AI and continue encouraging its development, that may change in the distant future.
While I neither confirm nor deny personally witnessing such activity, I have heard rumors that some law students violate the honor code and use AI for their work.
Why would they do this?
· Laziness/manipulation. For some students, this is purely a matter of laziness or a desire to manipulate the system. This is rarer in law school than it is in undergrad, but there are still some people who think they can get decent grades without putting in the work. Chances are, these students will not do well; even if the legal AI does the heavy lifting for them, their laziness and manipulations will probably hurt them in other areas.
· Time savings and shortcuts. For other students, legal AI is a simple shortcut that helps them save some time in a chaotic, incredibly busy schedule. If you have 10 cases to read and brief in a day, and you've only had time to read 9 of them, you might use legal AI to get a quick understanding of that final one. Although this is technically still a violation of the honor code, it's far more permissible than using legal AI for everything out of laziness.
· Idea generation. Legal AI is also quite reasonably used for idea generation. If you're not exactly sure how to start a legal document, you might use legal AI to generate a handful of examples, so that you can write your own. If you're not sure how to analyze a particular sentence or phrase in a complex case, you might use AI to boil it down to different terms. Again, this might still be flagged as a technical violation, but because it preserves academic integrity and puts the onus of work on the student, it seems much more permissible from an ethical standpoint.
· A sanity check. It's also possible to use legal AI as a kind of sanity check. You can use it to double check your papers, validate your understanding of cases, or make sure you're communicating as effectively as you think you are. Of course, this is still highly discouraged, if for no other reason than that it gives you an unfair advantage over other students, but it's a reasonable way to use these powerful tools in other contexts.
As lawyers, we are bound by strict ethical obligations. Not all of these ethical obligations will make sense to you, personally, in all contexts, but you'll still be bound to follow them if you become a lawyer. Similarly, I have mixed feelings about whether or not law schools should take the step of banning or discouraging the use of legal AI; many of the justifications make sense, but there are also specific uses of legal AI that fall outside the scope of these justifications. Nevertheless, the honor code is the honor code, and I encourage any law students reading this to follow theirs very strictly.
That said, if you are allowed to use legal AI in certain contexts, or if you're using legal AI as a form of career preparation outside of your academic work, I highly encourage you to take advantage of it.
Even when you've graduated and passed the bar exam, it's going to be your responsibility to use legal AI responsibly. That means:
· Do your due diligence. Before using any legal AI tool, make sure you understand who made it, how it works, and how it handles data. There are many legal AI tools on the market, and some of them are more reliable and more secure than others.
· Validate everything. Don't assume that anything a legal AI tool generates is accurate. It's your responsibility to validate and double check everything. You'll have no excuse if you present an AI hallucination as fact.
· Protect sensitive information. When dealing with sensitive information, like personal details protected by client confidentiality, make absolutely certain that the information remains secure.
· Disclose your use. To preserve your ethical standing, always disclose your use of AI. Transparency is always better than the alternative.
I'm personally playing it safe and am not using legal AI in any of my academic work. But when my academic work is done, I'm building my prompt engineering skills, tinkering around with new legal AI tools, and polishing my cyborg-like ability to utilize the best of my human brain and the functionally robotic brains available to me. If you're interested in making an impact in the legal profession, you'll need to be prepared to compete in a profession where AI is commonplace. Sooner or later, it will be.
Are you a law student currently allowed to use legal AI for your research and assignments? Or are you a law student or lawyer outside of a law school context? Sign up for a free trial of our legal AI software today!
Derek Bryan is a freelance writer, entrepreneur, and JD candidate. He has written for law firms across the country and has been following AI developments since reading Nick Bostrom's Superintelligence in 2014 (before it was cool). As a ghostwriter, he has contributed content for 100+ publishers, including Forbes, Inc.com, and The Wall Street Journal. Derek enjoys composing music and lives outside of Cleveland, Ohio with his wife and two kids.
© 2023 Nead, LLC
Law.co is NOT a law firm. Law.co is built directly as an AI-enhancement tool for lawyers and law firms, NOT the clients they serve. The information on this site does not constitute attorney-client privilege or imply an attorney-client relationship. Furthermore, this website is NOT intended to replace the professional legal advice of a licensed attorney. Our services and products are subject to our Privacy Policy and Terms and Conditions.