Summer 2023 | By S. Katie Calvert
Lately, it can be difficult to find common ground. Most of us, however, would agree on this: attorneys are expensive. Artificial intelligence systems such as ChatGPT and Google Bard, on the other hand, are free to use. It is no coincidence, then, that since the emergence of AI, questions such as “how to draft a contract with AI” and “can AI prepare my will” have become commonplace. AI can be used to generate legal documents. But whether AI-generated legal documents should be used is another matter entirely. For those seeking to use AI to create legal documents like employment contracts, leases, and wills—proceed with caution. The risks associated with AI-generated legal documents may ultimately outweigh the benefits.
AI systems are powerful tools whose purpose appears limited only by human imagination. In short, AI systems employ algorithms to analyze information and identify patterns. AI systems “learn” from these patterns and subsequently draw upon that knowledge. Many consumers interpret the overwhelming amount of attention paid to AI as unconditional endorsement. Indeed, as some of the greatest minds of our generation implement AI across nearly every industry, it can be difficult to think of AI as anything other than an infallible marvel of technology. Someday, perhaps in the near future, AI may live up to these expectations. As it stands, AI is a work in progress.
Above all else, AI is not an omniscient intelligence. In order for AI to generate a document, a user must first communicate what type of document is needed and what information it should contain. If the user’s communication is unclear, overbroad, or inaccurate, the quality of the AI-generated document will suffer. The same holds true for legal documents. Users may simply lack the knowledge necessary to convey their legal needs fully or accurately to the AI, resulting in documents that do not adequately protect their interests. For example, a user who asks an AI to generate a complaint for breach of contract may not be aware that other claims are also available to them. If those additional claims are not included in the AI-generated complaint, the user may lose the ability to assert them at a later date and could be unable to recover their damages. Unfortunately, users who lack the knowledge to convey their legal needs fully or accurately to the AI will likely be unable to assess for themselves the suitability of the documents generated.
Currently, most AI-generated legal documents are overbroad, rudimentary, one-size-fits-all forms. For example, when asked to draft an employment contract, ChatGPT generated a one-page fill-in-the-blank form that did little more than formalize the employer-employee relationship. This form did not include provisions that prohibited employees from disclosing confidential information, that bestowed ownership of works created by the employee during their employment to the employer, that barred employees from working for a competitor, or that accounted for numerous other issues which might arise in a given context. Simple forms may suffice for some users; however, for many others, the failure to include adequate detail can result in significant harm. If, for instance, AI fails to include a non-compete agreement in a sales-driven startup’s employment contract, the startup may fail when it is unable to prevent its best salesperson from working with the local competition.
Contrary to what some may believe, AI-generated legal documents do not necessarily comply with the law, nor do they ensure consumer compliance. There are laws at the local, state, and federal level that, collectively, touch upon nearly every aspect of daily life. Similarities exist among the various laws; however, there are also significant differences in how laws are written, interpreted, and applied. The laws of two states may not govern a single matter in the same way. Different courts within a given state may issue contradictory opinions regarding the same issue. An administrative agency may interpret a law distinctly from how a United States District Court might. While AI may be able to identify differences in text, AI cannot employ reasoning and judgment in interpreting and applying the law to a specific set of facts as an attorney might. This, in turn, invites error in AI-generated legal documents.
As an example, a user may live in a state with laws very favorable to landlords. If the user—a landlord—requests that AI prepare a lease agreement with “standard terms” for property being leased in a second state, the AI-generated lease may contain terms that comply with the laws of the user’s state, but violate the laws of the second state, which may have laws more favorable to tenants.
To further illustrate this point, consider a user who asks AI to generate a company timekeeping and overtime policy. Based on the user’s direction, the AI generates a policy stating that employees may clock in only during their scheduled shifts and that employees are to be paid for all on-the-clock work. Assume, however, that some employees regularly perform work outside their scheduled shifts at their manager’s request. Under the policy, those employees would not be paid for that work. On its face, nothing might appear wrong with the AI-generated policy, but the manner in which it is implemented will almost certainly result in a lawsuit.
At this early stage, it is doubtful that AI can adequately perform due diligence. AI may be unable to access some court documents, cases, and other legal resources, including resources which are not electronically stored or are locked away behind paywalls. Companies that control certain legal resources, such as Westlaw and LexisNexis, have little incentive to provide outside AI with access to these resources. While AI may be able to quickly process a staggering amount of information, the lack of access to certain resources could result in problematic gaps in the AI’s knowledge. While a lack of knowledge is troublesome enough, AI may fill these gaps with false information. In fact, this recently happened to one New York attorney who is currently facing potential sanctions for filing an AI-generated brief containing citations to fake cases. That attorney informed the court that he was “unaware that [ChatGPT’s] content could be false.”
Finally, to be effective, some legal documents require additional human action. For example, in a majority of states, at least two witnesses must be present to observe the testator’s signing of a will, both to ensure authenticity and to confirm the testator’s intentions. While this issue has not yet been squarely addressed, AI almost certainly will not qualify as a witness; AI is currently incapable of fulfilling that role due, in part, to its lack of perception and conceptual reasoning. The intended heirs of a testator who does not secure witnesses may later discover that they will not actually inherit what they were promised.
Certain AI systems may be free, but they are no substitute for an attorney. While some may be tempted to take advantage of AI-generated legal documents, such documents could end up costing much more than the amount which would have initially been spent in attorneys’ fees. If the ease of AI still seems an attractive alternative for document preparation, users should, at minimum, retain an attorney to conduct a review of AI-generated legal documents. AI may be impressive, but it will forever lack the human touch.
S. Katie Calvert is an attorney at Quattlebaum, Grooms & Tull PLLC in Little Rock, Arkansas. Ms. Calvert’s practice areas include Commercial Litigation, Class Action Defense, Employment Litigation, and Intellectual Property. She can be reached at kcalvert@QGTLAW.com.
Kathryn Armstrong, “ChatGPT: US lawyer admits using AI for case research,” BBC News, https://www.bbc.com/news/world-us-canada-65735769.
Note: The above article was published in the Summer 2023 issue of USLAW Magazine.