Written by Kim Taylor, IPRO for Legaltech News
In response to the inefficient and messy state of e-discovery, there was a new star at Legaltech New York this year: automation.
To kick off each new year, the legal industry anoints a hot trend at the annual Legaltech New York conference. In 2013, Big Data took center stage. In 2014, the talk of the show was advanced analytics and technology-assisted review (apparently its time had come). Information governance stole the show in 2015. In fact, all of these trends complement each other and reflect an industry consensus that we need a better way to deal with the dramatic growth of data and the increasing cost pressures it adds to litigation.
Data proliferation is overwhelming today’s litigation teams and, at times, adversely affecting case outcomes. Large data volumes are cited as driving up discovery costs, creating excruciating delays and increasing the chances of human error. Our industry has evolved over the years to meet this ever-increasing demand, adopting complicated and disconnected workflows along the way. To collect, process, analyze, review and produce data, litigation teams have been forced to use disjointed, mostly manual systems that create an unwieldy workflow.
In response to the inefficient and messy state of e-discovery, there was a new star at Legaltech this year: automation. To that, I would say, “nothing is more powerful than an idea whose time has come.”
So, what does automation mean for e-discovery? It means that tedious, time-consuming, manual processes are being converted to automated ones. Using advanced technology, litigation teams can now set up pre-defined templates to auto-copy data, auto-validate files, auto-process, auto-filter, auto-load, auto-batch and auto-tag documents based on pre-set keywords – all tasks that a year ago required a human touch. With the advent of automation, you’ll begin to see more predictability and more structure around how the industry conducts e-discovery.
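To make the idea of a pre-defined template concrete, here is a minimal sketch in Python. All names and steps are hypothetical illustrations of the concept, not any vendor's actual product or API: a template captures the pre-set choices once, and the engine applies them to every incoming document set without a human touch.

```python
# Hypothetical sketch of a pre-defined e-discovery automation template.
# Names, fields, and steps are illustrative, not a real product's API.
from dataclasses import dataclass, field


@dataclass
class Template:
    keywords: list = field(default_factory=list)  # terms used to auto-tag
    batch_size: int = 2                           # size of each review batch


def run_template(template, documents):
    """Auto-tag documents on the template's keywords, then auto-batch them."""
    tagged = []
    for doc in documents:
        tags = [k for k in template.keywords if k in doc["text"].lower()]
        tagged.append({**doc, "tags": tags})
    # Auto-batch: split the tagged set into fixed-size review batches.
    return [tagged[i:i + template.batch_size]
            for i in range(0, len(tagged), template.batch_size)]


docs = [{"id": 1, "text": "Merger agreement draft"},
        {"id": 2, "text": "Lunch plans"},
        {"id": 3, "text": "Agreement amendment"}]
batches = run_template(Template(keywords=["agreement"]), docs)
```

The point of the design is that the template is defined once, up front, so the tagging and batching decisions no longer depend on an operator being available each time data arrives.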
A question you may be asking is, “If automating the e-discovery process is so essential, then why did our industry wait so long to develop it?” The reason: e-discovery is hard.
Think about all the disparate tools that organizations have used to perform specific tasks in the EDRM workflow. Really think about it: there are search engines, early case assessment (ECA) tools, processing applications, review systems, analytics solutions, production tools – and the list goes on…and on…and on.
The issue hasn’t been that there weren’t enough tools, but that there were too many – too many products to learn and too much data moving between them. Many of these individual tools were great at just one thing, but had additional features and functions hastily added to fill a gap or satisfy a client request. Responding to the cries of customers struggling to integrate their applications, technology companies attempted to build or acquire functionality that stitched the individual pieces of the workflow together, with limited success. Established review software tried to add processing functionality, or vice versa, but none could crack the code completely. Integration was an important improvement, but not the end-all, be-all.
Even with unification, our world remained a manual and error-prone environment. After collection, data had to be broken into chunks for processing, then sit waiting until a technical person could verify and approve it to move down the line to review – all of it consuming valuable time and resources.
What’s more, when you break apart data sets, you always run the risk that something will go wrong. The copying, importing and exporting this approach requires are the steps where mistakes are most likely to be made. These issues are precisely why automation is so crucial to boosting efficiency and lowering the costs of e-discovery.
Once you have automation, continuous streaming of data is the next innovation. What does this look like? Let’s start with an analogy. When you want to watch a movie on Netflix, you find the movie, press a button and start streaming it to your TV or computer – instantly. Rather than making you download an entire file before the movie starts, Netflix takes advantage of data streaming. The question is: if it’s possible to stream movies, then why not e-discovery data? Now, you can.
With the right tools, you can do essentially the same thing. Instead of breaking electronic data into batches and then processing the batches, one after the other, you can feed a stream of data into the processing engine. As the system receives the data stream, it tracks and logs the incoming data, copies that data to the appropriate location, verifies that the data copied correctly, processes and filters the data according to pre-set parameters and automatically prepares the data for review.
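The stages described above can be sketched as a chain of Python generators, where each stage hands items to the next as soon as they are handled rather than waiting for a completed batch. This is a simplified illustration of the streaming idea under assumed names; real systems would use far more robust logging, copying, and verification.

```python
# Hypothetical sketch of streaming e-discovery stages as chained generators.
# Stage names and data shapes are illustrative, not a real product's API.
import hashlib


def track(stream, log):
    """Track and log each incoming item, then pass it downstream."""
    for item in stream:
        log.append(item["name"])
        yield item


def verify(stream):
    """Verify each item copied correctly by comparing content hashes."""
    for item in stream:
        if hashlib.md5(item["data"]).hexdigest() == item["checksum"]:
            yield item


def process(stream, keyword):
    """Process and filter items according to a pre-set keyword parameter."""
    for item in stream:
        if keyword in item["data"].decode().lower():
            yield item


log = []
incoming = [{"name": "a.txt", "data": b"Contract terms",
             "checksum": hashlib.md5(b"Contract terms").hexdigest()},
            {"name": "b.txt", "data": b"Cafeteria menu",
             "checksum": hashlib.md5(b"Cafeteria menu").hexdigest()}]

# Items flow through the stages one at a time; nothing waits on a batch.
ready_for_review = list(process(verify(track(incoming, log)), "contract"))
```

Because each generator pulls one item at a time, the first document can be verified, filtered and readied for review while later documents are still arriving – which is exactly the contrast with the chunk-and-wait approach described earlier.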
Adopting automation and data streaming yields important productivity and cost gains. In particular, you can drastically shorten the time between receiving data and being ready to work with it at a substantive level, prevent errors, reduce human capital requirements and lower your overall costs. One thing to keep in mind with this advanced technology: you must look under the hood as you’re evaluating it. Automation is important, but quality is just as important. Ask how the data is being processed, what metadata is being captured, how anomalies are handled, and whether the solution can grow and scale along with your case needs.
Automation has vast potential for our industry. Seek out partners that can introduce it to you in a meaningful way. Ensure they have a solution for the many places throughout the process where it can be leveraged to bring you significant efficiency gains and cost savings.