Artificial intelligence (AI) is a bit like a toolbox. Inside are many tools with varying purposes. In most hands, these tools build and create; in others, they can be used to destroy and harm.
Lexis+, Westlaw, and even the Google search bar are often used to collect case law. These engines rely on just one subset of AI, machine learning. This form of AI compresses and connects data at a speed and scale the human mind cannot match. Legal scholars who have lived through this transition can attest to the time saved and to how access to such vast collections has improved their work product.
The distinctions among different subsets of AI matter when considering its implications. Just as it is important to know what AI can do, it is equally important to recognize its risks.
How AI Can Be Used
AI has become so enticing because it offers an avenue for professionals to complete routine, repetitive tasks quickly. Before AI, a professional may have spent ten to thirty minutes crafting a well-thought-out client letter, with additional time spent checking spelling, grammar, and tone. With AI, this task can be done in minutes. ChatGPT allows a prompt to be entered, along with the tone the writing should have, and the program will give an eerily human response. That response can be further fine-tuned with additional prompts until it matches the desired output. In addition, because of ChatGPT’s machine learning algorithm, prior writing samples can be uploaded to teach the program to mimic the writer’s style. Even if one is not completely satisfied with ChatGPT’s response, at the very least it has given the professional a starting point.
ChatGPT is the most popular AI tool on the market, but it is not the only one. In the legal field, various AI tools have been developed specifically to aid lawyers and legal scholars. Tools like Smith.ai offer client intake forms that automate conversation, allowing clients to set up appointments at any hour. Diligen is a contract-review tool that scans contracts for specific clauses and then outputs a digestible summary for the user. Lexis, Westlaw, Casetext, and Blue J L&E are AI research tools that allow lawyers to input a simple phrase and access case law anywhere, at any time.
As intriguing as these tools are, they also come with dangers. In using these programs, users must be aware of the security protocols each site employs. For example, because ChatGPT prides itself on its machine learning software, the data users input is used to train the program. The implication is that if sensitive client information is entered into ChatGPT, that data is stored within the program. ChatGPT recommends turning off chat history, but even then the data is retained for thirty days so the company can monitor inputs for violations of its terms of service.
For this reason, many law firms have begun to develop in-house AI tools that perform the same functions as ChatGPT without the risk of breaching client confidentiality. ChatGPT, however, is not completely off-limits to lawyers. If users take care with what they input, no sensitive data will be at risk. No personal names or identifying details should appear in a prompt when lawyers use the tool. Instead, ChatGPT can assist in forming templates, allowing lawyers to add the sensitive details later on their own private servers. ChatGPT should be used as a tool, not a complete substitute, to avoid issues of plagiarism and ethics.
The Future of AI
As helpful as AI can be, it may raise eyebrows when the client learns the lawyer automated many of the tasks the client is paying for. The client may even be tempted to tackle the problem through AI on their own. But AI is in no way perfect.
Andrew Perlman, a law professor at Suffolk University, conducted an experiment that perfectly sets the stage for where AI is going. After giving ChatGPT various prompts, Perlman found that the program could craft a 14-page mock U.S. Supreme Court brief, completed in under an hour.
However, no law degrees are at risk, because an AI-created document should not be relied upon as a final draft. The brief ChatGPT drafted for Professor Perlman was plainly deficient: it lacked detail. Professor Perlman instead sees ChatGPT as a tool for crafting a first draft, not a final submission. Others who have tried this experiment reach a similar conclusion: although ChatGPT is fast, it is often wrong. While an average associate attorney will take longer to craft a brief, that brief should be superior, at least for now. Until that changes, the use of AI in legal writing should be limited.
In an opinion issued in June of 2023, an attorney tested the ethical limits of ChatGPT. In Mata v. Avianca, Inc., attorney Steven Schwartz was brought before the United States District Court for the Southern District of New York on sanctions following his use of ChatGPT. Mr. Schwartz claimed to have believed that ChatGPT was just a “super search engine.” Yet, while Mr. Schwartz was researching his case, ChatGPT produced case law that did not exist. Believing these cases to be real, Mr. Schwartz cited the fake opinions before the court, even after their existence was called into question. Judge P. Kevin Castel ultimately sanctioned Mr. Schwartz and his law firm, concluding that Rule 11 sanctions were appropriate because of the failure to inquire into the opinions’ validity. The court took the greatest offense at the fake cases’ attribution to real judges. Mr. Schwartz and the other attorneys involved were each fined $5,000 and ordered to send letters of apology to every judge named in the fake opinions. The court condemned the creation of fake opinions but did not comment on the use of ChatGPT specifically. While Mata shows that attorneys are expected to inquire into the validity of the cases they find, courts have indicated that using ChatGPT at the beginning of the research phase appears to rest on solid ethical ground.
What does this mean for attorneys going forward? From a purely ethical point of view, an attorney submitting a document sourced from ChatGPT with no changes is engaging in bad practice. It is dishonest, incomplete, and misrepresents both the attorney and the client. As Professor Perlman suggests, ChatGPT responses can be a good starting point. Instinctively, and with ethical training, an attorney should know when a line has been crossed and the writing is no longer his own.
OpenAI’s list of banned activities prohibits the use of AI to assist in the unauthorized practice of law. Human input is still required. Though OpenAI’s policy is not law, it is a guiding principle. ChatGPT was never, and never will be, intended to substitute for a law degree. People using the program must critically consider the information it provides and fact-check accordingly. Beyond potential sanctions, the risk of errors remains whenever ChatGPT is used in the legal field. Heavy caution and further research must accompany its use.
Still, given its possibilities, ChatGPT presents an opportunity to pro se litigants: a potential intellectual equalizer. No longer will digestible law be locked away behind paywalls and hidden inside law schools. ChatGPT could equip pro se litigants in a way never seen before. Yet great care is required; what happened to Mr. Schwartz should serve as a cautionary tale. ChatGPT is not the end of the research; it is only the beginning.
This raises the question: Will artificial intelligence build a new world or destroy the one we know? Well, is that not the same thing?