
AI + Human: A Bright Future For Legal Co-Pilots

Introduction

By Johannes (Jan) Scholtes, Chief Data Scientist IPRO and Geoffrey Vance, Partner, Perkins Coie LLP

[Foreword by Geoffrey Vance: Although this article is technically co-authored by Jan and me, the vast majority of the technical discussion is Jan’s work. And that’s the point. Lawyers aren’t supposed to be the Captains of the application of generative AI in the legal industry. A lawyer’s role is more like that of a First Officer, whose responsibility is to assist the primary pilot in getting to the ultimate destination. This article and all the exciting parts within it demonstrate how important data scientists and advanced technology are to the legal profession. The law firms and lawyers who don’t reach that understanding fast will be left behind. Those who do will lead the future of the legal industry.]

Contents

AI advancements within the legal industry

AI + Human: The Future is in Legal Co-Pilots

What do we mean by Legal Co-Pilots?

“What a diff’rence a day makes.” Dinah Washington’s popular song best describes the Artificial Intelligence rollercoaster we have been on for the last three months: from the introduction of ChatGPT, the human-like chatbot, in December 2022, to a new search paradigm that is effectively a whole different gateway to the internet, to the first legal assistants. Every day, a new breakthrough!

Currently, we are witnessing the largest world-wide AI experiment ever, and we are all participating whether we like it or not!

That Artificial Intelligence will change the legal industry is beyond doubt. How we can harness AI responsibly, however, is an open question. So are the questions of how to ensure the technology is defensible and transparent, and how it can be integrated into our everyday legal processes to achieve the best combination of human and AI collaboration.

For an industry filled with lawyers who do not fully understand where these AI models come from, how they work, and what their limitations are, that is a risk. We can already see ChatGPT being used for legal applications for which it is not designed. These are interesting experiments, but they will only lead to disappointment, frustration, and, worse, substandard results to the detriment of clients. The main reason is that ChatGPT is just a generative language model, designed to generate language through a statistical process that mimics the human language it saw during training. This process is kicked off with a so-called prompt, but that is all the steering there is. ChatGPT on its own is not a search engine, nor a source of knowledge, nor a translator, nor a linguistic classifier.

Indeed, given how fast ChatGPT returns results, and with such a highly confident tone, it “feels” like it is always providing accurate information and gives the perception that it will keep improving its results over time. But nothing is further from the truth: it is not. Without a proper encoder[1], the language generation is a statistical process that one cannot control. You will probably be tuning until the end of time without getting a reliable and stable system. You can already see this in the integration between ChatGPT and Bing. This is not a knock on Bing; it is simply not intended to be the reliable legal search assistant that lawyers are looking for, clients expect, and courts demand.
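To make that point concrete, here is a minimal sketch (assuming the OpenAI Python SDK and an illustrative model name; the prompt and settings are ours, not a recommended configuration). The prompt is the only steering there is: with a non-zero sampling temperature, the very same prompt can produce a different answer on every call, and nothing in the call grounds the output in an authoritative legal source.

```python
# Minimal sketch: the same prompt, sampled twice, with no grounding in any
# legal source. Assumes the OpenAI Python SDK and an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "Summarize the holding of Smith v. Jones (a hypothetical case)."

for attempt in range(2):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,         # non-zero temperature: sampling, not lookup
    )
    # The two answers may differ, and neither is checked against any source.
    print(f"--- attempt {attempt + 1} ---")
    print(response.choices[0].message.content)
```

Nothing in this call consults case law, a contract repository, or any other system of record; the model simply continues the prompt with statistically plausible language.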

There are several legal initiatives that make more sense (such as Allen & Overy’s Harvey), but these are at risk of being introduced to the real world too fast. AI models take shortcuts and do not disclose why they take certain decisions and disregard others. People who rely on the models do not fully understand the technology, and many of the companies behind the models have not fully disclosed the specifics of their algorithms.

Lawyers are not alone in their ignorance of AI. Very few people in the world truly understand these large language models. This is one of the main concerns of the Stanford Institute for Human-Centered Artificial Intelligence and of various scholars who express their opinions in publications such as the MIT Technology Review and popular magazines such as WIRED. Truly understanding these models is one of the prime research goals. Put another way: building the technology was the easy part; understanding it is much more challenging.

Without understanding, transparency, and a proper framework for (legal) defensibility, there will be no trust. Without trust, the legal industry will not accept AI, and that, our friends, is a good thing, often referred to as professional skepticism.

Creating trust has different facets:

  1. Understanding the technical roadmap of large language models, their capabilities, and how to address their limitations.
  2. Understanding how they can add value to existing technology such as search engines, integrated development environments, or (contract) content management systems.
  3. Understanding how to validate the quality of such models and integrations (see the sketch after this list).
  4. Understanding why these models take certain decisions and not others: what do they know, and what do they not know?
  5. Understanding how to integrate them optimally into existing legal workflows.
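On facet 3, validation in practice usually means comparing model output against a human-reviewed gold standard and reporting agreed-upon metrics. A minimal sketch (with purely hypothetical labels, and using scikit-learn as our choice of metrics library) might look like this:

```python
# Minimal validation sketch: compare model decisions against a
# human-reviewed gold standard. Labels are hypothetical (1 = responsive).
from sklearn.metrics import precision_score, recall_score

human_labels = [1, 0, 1, 1, 0, 0, 1, 0]   # senior-reviewer decisions (gold standard)
model_labels = [1, 0, 1, 0, 0, 1, 1, 0]   # model decisions on the same documents

precision = precision_score(human_labels, model_labels)
recall = recall_score(human_labels, model_labels)
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```

Numbers like precision and recall measured against a defensible gold standard are exactly what a court or opposing counsel can examine, which is why this facet matters so much for legal defensibility.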

By addressing the above facets of trust, we propose a road map for benefiting from the success of large language models in legal applications in a responsible way.

Replacing lawyers with artificial intelligence algorithms is not realistic, certainly not in light of (i) the unpredictable nature of many AI algorithms, (ii) regulatory requirements (attorney-client ethics), and (iii) the fact that research shows that combining Artificial Intelligence skills with human skills leads to the best results.

Humans are cognitively not suited to quickly finding relevant case law in paper binders, manually reviewing 100 million emails consistently and without any sleep, analyzing the content of a data room for legal risks using binders and yellow markers, listening to hours of audio and video testimony, or redacting privileged information from hundreds of thousands of documents without making errors in the process. Humans get tired. Humans are inconsistent. Humans get distracted. Humans misunderstand and misapply instructions. All of these flaws associated with every living human being are minimized by the appropriate use of technology.

AI and humans working in tandem can be more effective than either working alone because they bring different strengths and abilities to the table. AI is good at processing large volumes of data quickly, identifying patterns and trends, and making predictions based on statistical analysis. It is also not subject to the fatigue or emotional reactions that can sometimes cloud human judgment, although it can inherit biases from its training data.

On the other hand, humans are better at tasks that require creativity, critical thinking, and the ability to interpret complex information. They also bring a wealth of experience, knowledge, and intuition that cannot be replicated by AI.

By combining AI and human capabilities, organizations can leverage the strengths of both to improve decision-making and achieve better outcomes. For example, in litigation, AI can be used to sort and classify large volumes of data, while humans in turn review and interpret the results in the context of the legal matter at hand. In healthcare, AI can analyze medical records and imaging data to identify potential health issues, while doctors provide personalized care and treatment recommendations based on their expertise and experience.
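As an illustration of the litigation example (a sketch only, with hypothetical documents and labels, and using scikit-learn rather than any particular review platform): a simple supervised classifier trained on earlier human decisions can rank unreviewed documents so that reviewers see the likely-relevant ones first and apply their legal judgment where it matters most.

```python
# Sketch of AI-assisted review: a classifier trained on human decisions
# ranks unreviewed documents for human review. All data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs = [
    "notice of breach of the supply contract ...",
    "lunch order for the team offsite ...",
    "indemnification clause in the master agreement ...",
]
labels = [1, 0, 1]  # 1 = relevant per human review, 0 = not relevant

unreviewed_docs = [
    "termination for cause under section 12 ...",
    "holiday party invitation ...",
]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(labeled_docs), labels)

# Probability of relevance for each unreviewed document; humans review the
# highest-scoring documents first and make the final call.
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```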

Furthermore, AI and humans can work together to improve the quality of AI models over time. Humans can provide feedback on the accuracy and relevance of AI-generated recommendations, which can be used to refine and improve the AI models.
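Continuing the sketch above (again with hypothetical data): reviewer corrections can simply be folded back into the training set and the model refit, so every round of human review makes the next round of machine suggestions better.

```python
# Human-in-the-loop sketch: a reviewer's correction is added to the
# training data and the model is refit. Data and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["indemnification clause in the master agreement ...", "lunch order ..."]
labels = [1, 0]  # earlier human review decisions (1 = relevant)

# The model had suggested "not relevant" for this document; the reviewer disagrees.
docs.append("limitation of liability carve-out for gross negligence ...")
labels.append(1)

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)
# The refit model now reflects the reviewer's feedback on the next batch.
```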

Given all of these prefatory comments, let’s first look at the genesis of our term “co-pilots.” Next, we’ll discuss the capabilities of large language models, how we can improve them and how to integrate them in other legal technology. Finally, we’ll examine what is essential to create the necessary trust for large scale integration of such models in the daily workflows of legal professionals.

We cannot take credit for coining the term “co-pilot” when discussing AI. Microsoft did that. In one of its many generative AI experiments, Microsoft showed that GitHub’s programming assistant “Copilot” could deliver significant gains in developer productivity. It concluded that, while the generated code was not always optimal and did not always follow today’s cyber-security standards, it could be quickly fine-tuned by humans and, when integrated into an Integrated Development Environment (IDE) such as Visual Studio, could achieve a perfect AI + Human experience.

The use of legal co-pilots based on AI has the potential to revolutionize the legal industry by enabling law firms to work more efficiently, accurately, and cost-effectively, while also expanding access to legal services for a broader range of clients. While there are still challenges to be addressed, such as ensuring the transparency and accountability of AI-based systems, the growing adoption of legal co-pilots by leading law firms suggests that this technology is likely to play an increasingly important role in the future of the legal profession.

In addition, it is important to understand that not every LegalTech application can benefit from generative large language models such as (Chat)GPT or Google Bard. We will explain this in more detail in our other blog post: “A Closer Look at Generative Large Language Models and Applications in LegalTech”.


Selected References

Vaswani, Ashish, et al. “Attention is all you need.” Advances in Neural Information Processing Systems 30 (2017).

Devlin, Jacob, et al. “BERT: Pre-training of deep bidirectional transformers for language understanding.” arXiv preprint arXiv:1810.04805 (2018).

Tenney, Ian, Dipanjan Das, and Ellie Pavlick. “BERT rediscovers the classical NLP pipeline.” arXiv preprint arXiv:1905.05950 (2019).

Radford, Alec, et al. “Language Models are Unsupervised Multitask Learners.” (2019). (GPT-2.)

Brown, Tom B., et al. “Language Models are Few-Shot Learners.” arXiv preprint arXiv:2005.14165 (2020). (GPT-3.)

Ouyang, Long, et al. “Training language models to follow instructions with human feedback.” arXiv preprint arXiv:2203.02155 (2022).

Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf (2017).

Russell, Stuart. “Artificial intelligence: The future is superintelligent.” Nature 548 (7669): 520–521 (2017). doi:10.1038/548520a.

Russell, Stuart. Human Compatible. 2019.