Academic Theft – Is ChatGPT an Offender?
Editor’s Note: Please read Ann’s related article, “Who Owns What: How to Classify Ownership of a Chatbot’s Generated Content” from last week! Both pieces provide great insight into the emerging world of AI and chatbots like ChatGPT by OpenAI.
Recently, plagiarism has become pervasive in the academic world. Passing another’s work off as one’s own reveals a dishonest character and robs the true author of credit. In the past, plagiarism was clear-cut: if you used another’s work without proper citation, that was plagiarism. But in the quest to simplify life, chatbots have entered the discussion. The issue with using chatbots in academic research is that the program provides no citation for its outputs. Plagiarized works are blended into the outputs to such an extent that a standard plagiarism checker may not be able to detect them. As such, self-regulation is the only true cure. The most notable of the chatbots, ChatGPT, has taken a clear stance on the issue.
OpenAI, the company behind ChatGPT, maintains a list of banned activities, with the risk of account termination if the company discovers a violation. Among these banned activities, OpenAI specifically prohibits academic dishonesty. The policy instructs users not to misinform, misrepresent, or mislead others. In this policy, OpenAI acknowledges its program’s susceptibility to being used for plagiarism. Recognizing the danger, OpenAI banned the activity. Though OpenAI has made its stance clear, would courts share a similar opinion?
Court Opinions on Plagiarism
Instinctively, one may assume a court’s view of plagiarism mirrors academia’s. However, for all the overlap the two share, plagiarism is not defined the same way.
Authors Carol Bast and Linda Samuels explored the implications of plagiarism in the legal profession in their law review article, “Plagiarism and Legal Scholarship in the Age of Information Sharing: The Need for Intellectual Honesty.” They concluded that the goal of document creation is not originality, but consistency and representing the best interests of the client. Sometimes, this may mean borrowing from other authors, especially when applying consistent precedent. Bast and Samuels conclude that “customary” practices of plagiarism, such as borrowing the formatting of a transactional document, may be overlooked, while egregious cases are likely to face sanctions.
The line between “customary” and egregious is hard to find because courts only comment on egregious matters. In re Mundie details how an attorney copied a previous brief so closely that he neglected to change some of the facts to align with his case. The Second Circuit Court of Appeals found it necessary to bring him under review for plagiarism and filing issues. Ultimately, however, the attorney faced disciplinary action for his late filings, not his plagiarism. In Supreme Court Bd. Professional Ethics v. Lane, an attorney took eighteen pages from a treatise and included them, uncited, in his brief. In its opinion, the Iowa Supreme Court explicitly called plagiarism unethical and suspended the attorney’s license for six months. However, the attorney had also misrepresented his billing records, and the court attributed the suspension more to his fraudulent timekeeping than to his plagiarism. In these two cases, plagiarism seemed to be a secondary issue, making it even more difficult to discern what courts define as plagiarism.
Though the sanctions may seem lackluster, courts’ attitudes toward plagiarism are clear. As Judge Richard Posner summarized in his Atlantic article, “to pass off another writer’s writing as one’s own–is more like fraud….” For as definitive as Judge Posner’s statement may be, in practice, plagiarism is not treated as the cardinal sin that it is in academia. As long as a lawyer does not cross the line into “too much” copied work, the court is unlikely to issue sanctions.
The Intersect of AI, Plagiarism, and Legal Ethics
Looking at AI through the lens of how courts have ruled on plagiarized work product, it is hard to predict how a court would measure what counts as “too much” AI. What is “too much” is entirely subjective, which is why understanding the nuance of AI is so important. If a judge takes issue with the AI commonly utilized by search engines, he or she will likely also take issue with ChatGPT. But no judge has yet taken issue with the use of AI-based research. Why would they? A lawyer must represent the client to the best of her abilities, and that includes knowing all applicable statutes and case law.
But can the same argument be made for ChatGPT? ChatGPT works differently than a traditional search engine. Typing the same prompt into Lexis+ and ChatGPT generates distinctly different responses. With Lexis+, though applicable case law will come faster than a traditional search, this type of AI research still requires reading the case law. ChatGPT is different. Not only will it allow a more nuanced prompt, but it will also go beyond finding applicable case law – ChatGPT makes an argument. Is ChatGPT a research tool? Or is it something else entirely?
Would the use of ChatGPT even be sanctioned as plagiarism? The Legal Writing Institute defines plagiarism as “[t]aking the literary property of another, passing it off as one’s own without appropriate attribution, and reaping from its use any benefit from an academic institution.” And under the U.S. Copyright Office’s definition of authorship, ChatGPT is not “another”; it is not anyone at all.
Though plagiarism may not be the most applicable term for the use of ChatGPT, the conclusions Bast and Samuels reached offer a likely test that courts would use in defining what is “too much” AI. In the ever-evolving quest to simplify human life, knowing the line between “customary” and egregious plagiarism is an important nuance in using chatbots for research.