Nathan Potter, Blog Editor, University of Cincinnati Law Review
As artificial intelligence (AI) continues to permeate the legal profession, practitioners must guard against inappropriate deference (read: laziness) toward their new resources. The use of AI raises ethical concerns within the practice of law that may not be obvious. “In using technology, attorneys must understand the technology that they are using to assure themselves [that] they are doing so in a way that complies with their ethical obligations—and that the advice the client receives is the result of the attorney’s independent judgment.” But if an attorney depends upon the compilation and summary of information gathered by an AI, is that attorney truly making an independent judgment on a set of facts or law?
Attorneys may review studies that compare legal work performed by an AI against that performed by an attorney; however, similar comparisons in other professions have been shown to suffer from research bias and incongruent results. Researchers from the Massachusetts Institute of Technology (MIT) assert that current AIs cannot replace a doctor’s “gut instincts.” Likewise, it could be asserted that AI is years away from being able to draft a nuanced legal argument or contract. It is also unlikely that AI would be effective at interviewing or working with clients, and it is doubtful that AI will soon match the skills and knowledge of an experienced trial attorney. Therefore, it is important that today’s legal professionals gain a grasp of what current AI programs provide to their users.
This article is an introduction to AI programs already used within (or applicable to) the legal profession and to some of the ethical concerns surrounding them. Part II will give a very brief introduction to machine learning. Part III will discuss some of the readily apparent legal ethical issues surrounding the use of AI. And Part IV will provide a general summary and some recommendations for attorneys already using, or contemplating the use of, AI within their practice.
Machine Learning is the subset of artificial intelligence most likely to be used by an attorney. There are four main types of machine learning, and an attorney should be introduced to the purposes and benefits of each. Supervised Learning presents the program with a set of inputs and desired outcomes (outputs). Supervised Learning is often criticized as the most rudimentary of machine learning processes and is commonly used for tasks such as classification of inputs/outputs or decision trees. Unsupervised Learning presents the program with a set of unlabeled data, and the program then looks for patterns, rules, summaries, and the like. Unsupervised Learning is excellent at revealing information in a dataset that may be obscure to the human eye or that exhibits only nominal patterns. Semi-Supervised Learning is a mixture of Supervised and Unsupervised Learning. It is regarded as a complex and resource-consuming area of machine learning because the program is asked to teach itself outcomes based on an incompletely labeled set of inputs, forcing it to make “guesses” and to test those guesses through trial and error. Reinforcement Learning (RL) has the program choose an action within a closed system and then evaluate the outcome. An RL program will continue to loop through this action-and-evaluation cycle to determine the highest benefit and/or lowest risk possible.
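For readers who want a concrete feel for the first and last of these categories, the toy Python sketch below illustrates them in miniature. It is purely illustrative: the data, labels, and reward values are hypothetical and have nothing to do with any real legal AI product.

```python
import random

# --- Supervised Learning: labeled inputs mapped to desired outputs ---
# Training data: (feature, label) pairs supplied by the user.
labeled = [(1.0, "contract"), (1.2, "contract"), (8.0, "tort"), (8.5, "tort")]

def classify(x):
    """1-nearest-neighbor: return the label of the closest training input."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

print(classify(1.1))  # near the "contract" examples -> "contract"
print(classify(9.0))  # near the "tort" examples -> "tort"

# --- Reinforcement Learning: choose an action, evaluate the outcome, loop ---
# The program repeats an action-and-evaluation cycle, refining value
# estimates until it settles on the highest-benefit action.
rewards = {"A": 0.2, "B": 0.8}   # hidden payoff of each action
values = {"A": 0.0, "B": 0.0}    # learned estimates
counts = {"A": 0, "B": 0}

random.seed(0)
for _ in range(1000):
    # epsilon-greedy: mostly exploit the best estimate, occasionally explore
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)
    reward = rewards[action] + random.gauss(0, 0.05)  # noisy observed outcome
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

print(max(values, key=values.get))  # the loop converges on action "B"
```

The supervised half never looks beyond the examples it was given, which is why the quality of the user's inputs matters so much; the RL half, by contrast, discovers the better action on its own through repeated trial and error.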
ROSS is a Supervised Learning program already used by firms in the United States (it costs $125 per month for a single user). ROSS is dependent upon its user, who must accurately describe the legal issues or facts surrounding a case. Because ROSS is a Supervised Learning program, the attorney is essentially giving it the requisite set of input data, from which ROSS will select a list of the closest matching outputs. This process raises a few legal ethical issues, and this article will use ROSS as a backbone for part of the analysis.
III. Legal Ethical Issues Surrounding the Use of AI
Any time an attorney uses AI for legal work, the attorney should make sure that his or her independent judgment makes the final decision. Additionally, the mere use of AI programs raises its own questions: Is the program competent to produce correct legal or factual summaries or conclusions? Was the program trained on a biased dataset? Is the attorney correctly and efficiently setting the parameters the program will use? Is the program collecting client data to add to its system? Does use of the program authorize the programmer to practice law through the attorney? And there are many more.
The first issue surrounding the use of ROSS by legal professionals comes from Model Rule of Professional Conduct 1.1: Competence. An attorney has the obligation to provide competent representation to his or her client. Comment 8 to Rule 1.1 asserts that an attorney should keep abreast of changes in the law and its practice within the legal profession. This imposes a duty of continuing education in both the law and the technology used within the profession. As AI programs like ROSS continue to be adopted by the legal profession, attorneys will need to utilize them to remain effective advocates. AI could potentially eliminate much of the document review and baseline research work performed by (usually newer) attorneys. Failure to train oneself to use this technology could produce subpar results, possibly rising to the level of incompetence.
An attorney must also be diligent in his or her representation of a client. This diligence means that the attorney competently handles every client matter. It is possible (even likely) that reliance on AI programs like ROSS to produce the background work on a case, especially when combined with inadequate or incomplete search techniques, will invite procrastination or overreliance into the workflow of many practitioners. There are also programs that specialize in other aspects of the legal profession. Luminance, for example, is an AI platform specializing in document review. This program has changed the role of an attorney performing document review from that of a reviewer of documents to that of a reviewer of Luminance’s findings.
Rule 1.6 does not appear to be at issue when an attorney chooses to utilize programs like ROSS, because Rule 1.6(a) gives the attorney implied authorization to disclose information in order to carry out the representation. The use of ROSS, like other tools in the legal profession (such as conflict-of-interest detection software), is simply part of how an attorney performs his or her duties, and it is within the attorney’s discretion to decide what is necessary to complete those duties. However, in accordance with Rule 1.6(c), the attorney must make reasonable efforts to prevent inadvertent or unauthorized disclosure of the client’s information.
Rule 2.1 provides that an attorney shall exercise independent professional judgment and render candid advice to a client. An attorney may refer to other considerations, such as moral, economic, social, and political factors, that may be relevant to the client’s situation. This area may be another pain point for AI programs within the practice of law. If a program like ROSS is already evaluating the law and facts inputted about a client, what is to prevent the tool from being developed to include the other considerations that attorneys may weigh when advising clients? Considerations such as economic and political factors may be more easily evaluated by an AI program than by an individual. As such, the individual may defer to the AI program’s judgment rather than his or her own. This deference may even be subconscious, with the attorney adopting the AI program’s recommendation as his or her own. It is unethical for an attorney simply to parrot the recommendation of a program like ROSS. Harkening back to the words of Wendy Chang, a member of the ABA’s Standing Committee on Ethics and Professional Responsibility: “the advice the client receives is the result of the attorney’s independent judgment.” Anything less may constitute the unauthorized practice of law by an AI or the AI’s programmer(s).
Unless a new Model Rule is created, Rule 5.3, which governs an attorney’s responsibilities regarding nonlawyer assistance, is the rule best suited to the use of AI programs within the legal profession. If a program such as ROSS goes so far as to summarize legal and factual findings for an attorney to review, then ROSS is, in practice, performing the same tasks traditionally performed by law clerks, paralegals, and newer associates. The attorney utilizing ROSS is responsible for reviewing and ratifying all of ROSS’s output. Again, this expedites work but may leave pitfalls or create liability for the attorney reviewing or ratifying ROSS’s findings.
Rule 5.5, the unauthorized practice of law rule, may need to change alongside Rule 5.3 in order to accommodate the integration of AI programs into the legal profession. Rule 5.5 currently focuses on attorneys practicing without authorization in their own or another jurisdiction; however, that restriction could arguably be imputed to an AI program. For example, what would happen if a program such as ROSS were found to be biased against clients of a specific demographic? Could the ABA or state bar associations disbar ROSS or prevent attorneys from using the program?
AI is transforming many businesses. The medical and legal professions are examples of industries that may be behind the curve but are in drastic need of a renaissance of development. As such, attorneys need to be prepared to adapt to the inevitable changes that AI will bring to the practice of law. Perhaps most importantly, attorneys will need to develop checks against biases that favor following the guidance or conclusions produced by an AI program. The client needs to receive the independent thought and judgment of the attorney. It is not a stretch to say that if medical AIs cannot replace a doctor’s gut instinct, they also cannot replace the sharp mind and nuanced pen of a skilled attorney. AI will not replace attorneys, as many have speculated, but it will become a necessary tool for raising productivity in the near future.
Wittenberg, Daniel S., Artificial Intelligence in the Practice of Law (Jan. 18, 2017), https://www.americanbar.org/groups/litigation/publications/litigation-news/business-litigation/artificial-intelligence-in-the-practice-of-law/.
IBM, the world leader in artificial intelligence research, acknowledges that many AI programs are trained using biased data (accessed May 5, 2019), https://www.research.ibm.com/5-in-5/ai-and-bias/.
Lagasse, Jeff, AI can’t replace doctor’s gut instincts, MIT study says, Healthcare Finance (Jul. 26, 2019), https://www.healthcarefinancenews.com/news/artificial-intelligence-cant-replace-doctors-gut-instincts-mit-study-says.
Fumo, David, Types of Machine Learning Algorithms You Should Know (Jun. 15, 2017), https://towardsdatascience.com/types-of-machine-learning-algorithms-you-should-know-953a08248861.
https://rossintelligence.com/ (accessed May 5, 2019).
Model Rules of Professional Conduct 1.3.
https://www.luminance.com/ (accessed May 5, 2019).
Jee, Charlotte, Luminance: the startup using AI to shake up the legal sector, TechWorld (Jun. 29, 2018), https://www.techworld.com/business/luminance-startup-using-ai-shake-up-legal-sector-3679652/.
Wittenberg, Daniel S., Artificial Intelligence in the Practice of Law (Jan. 18, 2017), https://www.americanbar.org/groups/litigation/publications/litigation-news/business-litigation/artificial-intelligence-in-the-practice-of-law/.