This blog post is a reproduction of an academic paper I wrote in 2015 for a law school course on law and technology. The paper has been modified to fit within the format of a blog post. It is reproduced here for ease of access and because AI in the legal profession is once again becoming a hot topic. If you are interested in this topic, please also check out: Will AI Replace Lawyers? and Canada's Flawed Regulation of Artificial Intelligence
Introduction
Three different papers about information retrieval problems in the legal profession came out of the first A.I. and Law conference in 1987, authored by Carole D. Hafner, Richard K. Belew, and Jon Bing. Schweighofer's commentary on Bing's paper argued that even though Bing dealt with problems similar to Hafner's and Belew's, his approach differed significantly from theirs. Instead of writing in "hard-core" A.I. language like Hafner and Belew, Bing, a lawyer, discussed the problem from a practical standpoint. This different approach complemented Hafner's and Belew's computer science backgrounds and better informed the legal A.I. that they were trying to develop. The integration between theory and practice illustrated that when the legal community actively engages with the A.I. and law community, the development of legal A.I. meets the needs of both parties. On the other hand, if the two communities become alienated from each other, then legal A.I. develops without the input of the legal practitioner. This alienation creates a gap in which the legal profession lacks the necessary understanding and knowledge of legal A.I. The gap creates problems for the legal profession as legal A.I. intrudes into the courts and judges must decide how to address it. In the U.K. Court of Appeal, a justice rejected the use of a Bayesian system, leading to a loss of its benefits and negatively impacting the administration of justice. In the U.S., the acceptance of predictive coding without the understanding of the lawyers forced litigators to use a technology that they did not understand, leaving them unable to properly serve the interests of their clients.
Overview
The first part of this paper provides an overview of three approaches to developing legal A.I. (logic trees, machine learning algorithms, and Bayesian systems). The second part draws on these approaches and investigates two case studies of courts reacting to legal A.I.: the rejection of Bayesian systems in a U.K. Court of Appeal and the wholesale acceptance of predictive coding in the U.S. With these case studies in mind, the third and final part argues for the need to train intermediaries to translate and communicate the assumptions, limitations, and benefits of legal A.I. to the legal profession. It also recommends that the legal community push legal A.I. vendors toward open source principles to encourage proper investigation and development of their products.
Part 1: The Nature of Legal A.I.
Background
Legal A.I. does not try to replicate the internal reasoning process of a lawyer, nor does it try to mimic the lawyer's behaviour. Developers of legal A.I. have designed them to act as agents that think or act rationally to produce the best possible outcome for a particular situation. A legal A.I. agent is composed of a program and an architecture (this paper will focus exclusively on the program aspect, but it recognizes that the architecture is an important attribute in designing and differentiating legal A.I.). The program is the formalization and the function that runs that formalization; the architecture is the actual computing device that runs the program. Developers of legal A.I. programs can adopt a variety of models, including but not limited to logic trees, machine learning algorithms, and Bayesian systems. These approaches are not legal A.I. in and of themselves; they only become legal A.I. when the developer uses them to automate a certain type of legal reasoning or process in order to solve a certain problem (this paper focuses on legal reasoning. The automation of legal processes would include things like legal transactions and forms [. . .] in all of the discussed methods, a legal expert helps create or guide the A.I. to help ensure that it achieves its goal. Therefore, legal A.I. is often referred to as an expert system).
Logic Trees and Patent Litigation
Patent litigation has proven to be a fertile area for legal A.I. development. Its systematic and technical nature makes it amenable to automation. Various companies have developed sophisticated legal A.I. that predicts patent outcomes (LexMachina is one of the more mature companies that employs legal A.I. for patent purposes). While many of these companies use machine learning algorithms to formalize patent claims, developers have also applied strict inference models. Kacsuk's paper helps illustrate this approach. Strict inference reasoning tends to dominate critical legal thought, and Kacsuk shows how a developer can adopt this type of reasoning to design A.I. This approach is sometimes called "thinking rationally" or working through a "forward problem," and it requires an expert to theorize and create a model based on precedent and then apply new cases to the model to produce an outcome.
A patent claims lawyer with a background in logic and math, Kacsuk draws on precedent to create a mathematical model that predicts future E.U. patent claims. An individual can only patent a new invention if it is novel and involves an inventive step. The applicant communicates these attributes to the court in a complex patent claim (the example she provides is "navigation tool comprising a needle made of a ferromagnet"). Kacsuk breaks down a sample patent claim into its symbolic parts and then adds judicial considerations like amendments and priority to round it out (for the patent claim, she breaks it down into "A = navigation tool, B = needle, C = ferromagnet"). From these elements she creates a symbolic logic tree by applying calculus and symbolic logic principles. The execution of the logic tree that includes amendments, priority, and the atomic structure of a claim is found in Figure 1.
The A.I. evaluates any new claims based on the dimensions that Kacsuk has encoded (the user will actually interact with software that a computer scientist programs using Kacsuk's model. The A.I. program, therefore, includes both the model and the execution of this formalization through the programming function). It thinks rationally about the claim by applying a model to a particular fact situation and then producing a rational conclusion using strict inference. This approach would likely encounter difficulties in areas where the legal reasoning has a less rigid framework. Even with patents, if the court decides to deviate from this structure, the A.I. cannot accommodate that deviation. Legal A.I. models like these generally do not adapt well to changing situations.
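To make the strict-inference idea concrete, here is a minimal sketch in Python of how a claim broken into Kacsuk's symbolic parts might be checked against prior art. It illustrates only the forward-problem style, not her actual mathematical model; the element names come from her navigation-tool example, and everything else is assumed.

```python
# A minimal sketch of strict-inference claim matching (not Kacsuk's model).
# A claim is represented as the set of its atomic elements: her example's
# A = navigation tool, B = needle, C = ferromagnet.
claim = {"navigation tool", "needle", "ferromagnet"}

def anticipated(claim: set, prior_art: set) -> bool:
    """Under strict inference, prior art defeats novelty only if it
    discloses every element of the claim."""
    return claim <= prior_art

print(anticipated(claim, {"navigation tool", "needle"}))                 # False: claim is novel
print(anticipated(claim, {"navigation tool", "needle", "ferromagnet"}))  # True: anticipated
```

The rigidity discussed above is visible even here: if a court treats an element as implicitly disclosed, it has deviated from the model, and the set test cannot accommodate that deviation.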
Machine Learning and Legislative Interpretation
Legal A.I. developers have used other methods of formalization, like iterative machine learning, to avoid this rigidity. This approach operates in reverse (as an inverse problem): it takes observational data, produces a model that fits that data, and then predicts outcomes based on that model. It has the ability to change as a legal expert codes its responses. Možina et al.'s paper develops an argument-based machine learning system that aids in legislative interpretation. The developers want their A.I. to produce a set of rules that matches an ideal set of rules, both of which determine whether an individual qualifies for welfare benefits in the U.K. (this ideal set of rules is only created to test the effectiveness of the A.I. It is not necessary for the A.I. to function).
The machine learning A.I. uses both an ABCN2 algorithm and an iterative process. The A.I. first runs the CN2 algorithm, which analyzes the sample database to find a rule, removes the data that the rule covers, and then adds that rule to a set of new rules. After the first run, the A.I. produces an imperfect set of rules. The developers then have an expert augment the algorithm with arguments to improve this first set (the experts provide arguments only for a piece of data that the A.I. continually misinterprets). The A.I. then runs the modified algorithm (ABCN2) to induce a new set of rules. After running through this iteration several times, the A.I. produces a set of rules very similar to the ideal list. The developers explain that "rules 1-5 are good rules, 6 and 7 have the right format, but the threshold is slightly inaccurate. The remaining four rules approximate the 10 ideal contribution rules." The full comparison between the rules is shown in Table 1 below.
Table 1
Ideal List
1. IF age < 60 THEN qualified = no;
2. IF age < 65 AND sex = m THEN qualified = no;
3. IF any two of cont5, cont4, cont3, cont2 and cont1 = n THEN qualified = no;
4. IF spouse = no THEN qualified = no;
5. IF absent = yes THEN qualified = no;
6. IF capital > 3000 THEN qualified = no;
7. IF inpatient = yes AND distance > 750 THEN qualified = no;
8. IF inpatient = no AND distance ≤ 750 THEN qualified = no.
Machine Generated List
1. IF capital > 2900 THEN qualified = no;
2. IF age ≤ 59 THEN qualified = no;
3. IF absent = yes THEN qualified = no;
4. IF spouse = no THEN qualified = no;
5. IF cont4 = no AND cont2 = no THEN qualified = no;
6. IF inpatient = yes AND distance > 735.0 THEN qualified = no;
7. IF inpatient = no AND distance ≤ 735 THEN qualified = no;
8. IF cont3 = no AND cont2 = no THEN qualified = no;
9. IF cont5 = no AND cont3 = no AND cont1 = no THEN qualified = no;
10. IF cont4 = no AND cont3 = no AND cont1 = no THEN qualified = no;
11. IF cont5 = no AND cont4 = no AND cont1 = no THEN qualified = no.
The developers do not create an expert symbolic model; instead, they refine the A.I.'s classifications with arguments from a legal expert. Rather than applying an expert model to the observables, the A.I. moves from the data to create a model. This A.I. can modify the model based on new data and learn from its previous mistakes. This approach provides it with an adaptability to new situations that the previous model lacked.
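The covering loop at the heart of this approach (find a rule, remove the data it covers, repeat) can be sketched in a few lines of Python. This is a schematic illustration of CN2-style rule induction with one round of expert arguments, assuming a toy welfare dataset and a rule search simplified to single-attribute tests; it is not Možina et al.'s ABCN2 implementation.

```python
# Schematic CN2-style covering loop. Rules are (attribute, operator, value).
def covers(rule, example):
    attr, op, value = rule
    return (example[attr] <= value) if op == "<=" else (example[attr] > value)

def find_best_rule(examples, candidates):
    # Crude stand-in for CN2's search: pick the candidate covering the most
    # remaining "qualified = no" examples without covering any others.
    best, best_n = None, 0
    for rule in candidates:
        covered = [e for e in examples if covers(rule, e)]
        if covered and all(e["qualified"] == "no" for e in covered) and len(covered) > best_n:
            best, best_n = rule, len(covered)
    return best

def induce(examples, candidates):
    rules, remaining = [], list(examples)
    while remaining:
        rule = find_best_rule(remaining, candidates)
        if rule is None:
            break
        rules.append(rule)  # keep the rule, drop the data it covers
        remaining = [e for e in remaining if not covers(rule, e)]
    return rules

people = [
    {"age": 55, "capital": 1000, "qualified": "no"},
    {"age": 70, "capital": 5000, "qualified": "no"},
    {"age": 70, "capital": 1000, "qualified": "yes"},
]
# The ABCN2 step: after a first pass, an expert's argument ("this person fails
# because capital > 3000") extends the candidate space for the next iteration.
print(induce(people, [("age", "<=", 59), ("capital", ">", 3000)]))
```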
Bayesian Systems and Criminal Trials
Bayesian systems determine probabilities of competing outcomes and are one of the most widely used approaches in legal A.I. Bayesian systems are "standard, successful in many real cases, and logically well justified." Computer scientists, engineers, and mathematicians also use this approach. Bayesian systems are an amalgamation of incident diagrams (also called directed acyclic graphs) and mathematical theory (i.e. Bayes' theorem) (Kaptein, Prakken, & Verheij describe Bayes' theorem as the "probability ratio of two competing hypotheses is the product of the ratio of the two competing hypotheses before the evidence came in, multiplied by the likelihood ratio of the evidence under the two competing hypotheses"). Incident diagrams allow a developer to systematically model relationships, and the mathematical theory calculates the probabilities for the outcomes that arise out of those relationships. Bayesian networks thus systematically model complex relationships and, based on those relationships, calculate probabilities for certain outcomes (Franklin remarks that the main message of a Bayesian system is: "the verification of a (non-trivial) consequence renders a theory more probable"). Developers of legal A.I. have adopted this approach to model complex legal relationships and estimate probabilities for certain legal outcomes. Vlek et al.'s paper uses a Bayesian system to model the legal reasoning from a Dutch criminal trial in order to compare the probabilities of two competing scenarios. They illustrate how Bayesian systems can solve practical legal issues with probabilities by constructing an incident diagram based on the narrative of the case and breaking the narrative up into two scenario nodes: the scenario where Beekman ("B", accused #1) kills Leo ("L", the victim) and the scenario where Marjan ("M", accused #2) kills L. Figure 2 models the reasoning behind "M killing L because of a cannabis operation":
Figure 2
The developers place the reasons that support each argument in a scheme that models their relationships. A sub-scenario node supports the scenario node, and a sub-sub-scenario node supports the sub-scenario node. "M actually had a cannabis operation" or "M called another suspect to help her with the body" are reasons that support the main scenario node. "M had a false contract" is a reason that supports the sub-scenario idiom "L was to be the front of the cannabis operation." Vlek et al. must also assign probabilities for each and every scenario node (they actually assign likelihood ratios and prior probabilities, but to clarify the idea of probabilities for their scenarios, they reduce the example to just probabilities. The difference between likelihood ratios and prior probabilities is discussed in more depth in the following section). For example, "L was in a state of impotence" supports the sub-scenario node "M drugged L." The toxicology report stated that because L had alcohol and high amounts of Temazepam in his blood, it is likely he was impotent. Therefore, the probability that "if M drugged L, L was in a state of impotence" is high (for example, 0.99). They run the entire network in software that executes the model and probabilities, producing a probability of 0.25 that M was the killer and 0.7 that B was the killer. The A.I.'s functioning depends on the developer's understanding of legal reasoning because the A.I. identifies non-trivial outcomes based on the relationship between the evidence and the legal narrative that the expert creates.
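Vlek et al. executed the full network in dedicated software, but the way a scenario node's probability composes from its supporting nodes can be sketched with the chain rule. In the sketch below, the node names follow their example; apart from the 0.99 impotence value above and the conservative 0.01 scenario prior (discussed in Part 2), every number is hypothetical.

```python
from math import prod

# Toy chain-rule sketch: the joint probability of a scenario and its
# sub-scenarios is the product of each node's probability given its parent.
# A real Bayesian network also handles converging evidence; this only
# illustrates the composition.
p_scenario  = 0.01  # prior on "M killed L because of a cannabis operation"
p_drugged   = 0.50  # P(M drugged L | scenario), a hypothetical value
p_impotence = 0.99  # P(L was in a state of impotence | M drugged L)
print(prod([p_scenario, p_drugged, p_impotence]))  # joint probability of the chain
```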
Part 2: Responses from the Court
Legal A.I.'s drive to achieve the best outcome for a particular situation means that it has the potential to aid legal professionals in their work. Kacsuk's A.I. helps lawyers predict the success of patent claims. Možina et al.'s A.I. assists legal experts in quickly determining whether a person meets the legislative requirements for welfare benefits. Vlek et al.'s A.I. helps a litigator determine which of two competing scenarios informing the result of a criminal trial more likely happened. Based on these models, legal A.I. has the potential to improve the administration of justice and help a lawyer represent her client's interests. In an ideal world, lawyers and judges would understand both the strengths and limitations of legal A.I. and use it in the right situations to produce the best possible outcomes. In the real world, lawyers and judges do not have this understanding, and as a result, do not use legal A.I. in the right situations to produce the best possible outcomes. Two case studies illustrate this fact. In the U.K., a Court of Appeal rejected the use of Bayesian systems and the benefits they offer. A string of U.S. decisions embraced predictive coding, but did so without first ensuring that lawyers had the correct information and research about the A.I. to properly understand its implications. Without this understanding, the lawyers could not properly meet the needs of their clients.
Rejection: R v T and Bayesian Systems
A U.K. Court of Appeal rejected a forensic evidence expert's use of a Bayesian system to weigh the certainty of scientific evidence in R v T. In the case, the court required a forensic expert to determine the probability that a type of footwear made a specific mark. To meet this requirement, the expert provided testimony on the probability of this event occurring by using a Bayesian system (this type of legal A.I. operates at the fringes of the nature of legal A.I. because weighing the validity of scientific information is not something the average legal professional does. However, this case study is still instructive because of the implications this decision has on the development of "purer" forms of legal A.I. that use Bayesian systems to automate legal reasoning, like the Vlek et al. example, which tried to automate a type of legal reasoning used at a criminal trial). In response to the expert's testimony, the justice ruled that "no likelihood ratios or other mathematical formula should be used in reaching whether the footwear made a mark beyond the examiners' experience." Likelihood ratios "should not be used in evaluating forensic evidence, except for DNA" and "possibly other areas where there is a firm statistical base." By dismissing likelihood ratios, the justice undermined the use of Bayesian systems in court. This decision did not just prevent forensic evidence experts from using the approach; it also prevented developers of legal A.I. from using it for more experimental purposes that certainly lack firm statistical foundations.
The judge conflated assessing the probability of a proposition with assessing the strength of the evidence for that proposition, and as a result made the logical error that if the probabilities in a formula cannot be properly expressed, they negate the validity of the relationships the formula describes. The calculation of a Bayesian posterior probability depends on both a likelihood ratio and prior probabilities. The likelihood ratio expresses the relationship between two competing hypotheses based on the evidence (i.e. the probability of the evidence occurring under hypothesis #1 and under hypothesis #2). The prior probability, on the other hand, expresses the chance of each hypothesis actually occurring (i.e. what is the likelihood that the hypothesis is true? [. . .] Berger provides an illustrative and accessible example of the relationship between the likelihood ratio and prior probability). The likelihood ratio, therefore, captures the strength of the evidence for a proposition, and the prior probability captures the initial likelihood of that proposition. Based on this fundamental misunderstanding of statistical principles, the justice incorrectly assumed that these two concepts had the same role in a Bayesian system.
Experts set likelihood ratios based on the strength of the evidence, which is often grounded in objective standards. However, they set prior probabilities based on the likelihood of the hypothesis being true. In other words, experts create prior probabilities based on their own judgement of the situation, and therefore even forensic evidence experts sometimes lack a hard, empirical base to ground their prior probabilities. Experts account for this less-than-objective measurement by setting their prior probabilities conservatively. The developers in the Vlek et al. case were trying to predict probabilities of entire narrative scenarios, so they had to set the prior probability for each scenario idiom at 0.01. They recognized that the criminal law system requires that all defendants be presumed innocent until proven guilty, and therefore they needed to set the prior probabilities of each scenario idiom at the same conservative value. The prior probabilities lacked a hard empirical base because the criminal law system prevented the researchers from setting them at any other figure. Despite this limitation, the likelihood ratios still incorporated the scientific evidence from the trial into the calculation, and the theorem still properly calculated the relationship between the likelihood ratios and prior probabilities. Even with a more experimental Bayesian system that cannot express its prior probabilities on a firm statistical basis, the posterior probabilities still correctly illustrate the validity of the relationship between the evidence and the narrative.
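The distinct roles of the two quantities can be seen in the odds form of Bayes' theorem. Below is a small sketch using the conservative 0.01 prior from Vlek et al. and a hypothetical likelihood ratio standing in for strong scientific evidence; it shows that a conservatively set prior still yields an informative posterior once the evidence comes in.

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# The 0.01 prior is Vlek et al.'s conservative scenario prior; the likelihood
# ratio of 200 is a hypothetical value for strong evidence.
prior = 0.01
likelihood_ratio = 200.0
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds / (1 + posterior_odds))  # ~0.67: the evidence dominates the prior
```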
Even in forensic evidence cases, Bayesian probabilities do not provide objective measurements of the situation occurring (to ask for such a result would be akin to asking a lawyer to tell the court with absolute certainty who actually committed the crime). Bayesian systems calculate probabilities based on a formalized method that roots out errors in human reasoning. Forensic evidence experts use Bayesian systems because they cannot calculate on their own the likelihood that a certain piece of evidence resulted in a certain situation (imagine trying to calculate the probability that one shoe made one particular mark when you have a thousand shoes that can produce a thousand different marks). The system clarifies the value of scientific evidence to better inform the decision of the trier of fact (as Kaptein, Prakken, & Verheij argue, "the model serves as powerful conceptual framework, which makes explicit what data are required to arrive at a source attribution statement"). The developers in Vlek et al. used the Bayesian system to remove human reasoning biases. The narrative of the Dutch case had encouraged tunnel vision, and a "good story" emotionally manipulated the trier of fact (which, perhaps, led Marjan to be unfairly convicted at trial). The developers used the Bayesian system to adjust for these biases and to show that when the biases were removed, Beekman was more likely to be the killer (this does not mean Bayesian systems have no problems of their own, including problems of circularity, issues with weighing evidence, and susceptibility to the reference class problem. These concerns do not invalidate their use, but are presumptions that the court should be aware of in weighing the probabilities that these systems produce. Nor does this conclusion mean that Beekman was absolutely the killer; it simply provides the trier of fact with one more tool in determining a verdict).
R v T prevents forensic evidence experts from properly weighing scientific evidence and shuts off more experimental legal A.I. that cannot meet rigorous requirements of statistical certainty from developing. The decision prevents legal A.I. developers from using Bayesian systems to automate legal reasoning and challenge conceptual biases. It negatively impacts the administration of justice and prevents legal A.I. from potentially improving certain types of legal reasoning that have a direct impact on determining the verdict of a trial.
Acceptance: Da Silva Moore and Predictive Coding
Courts can also accept legal A.I. without fully understanding its presumptions, limitations, and benefits. U.S. courts mandated the use of predictive coding without first requiring that lawyers fully comprehend this type of A.I. Da Silva Moore, a federal decision, established that a party could use predictive coding for discovery. Global Aerospace followed that judgement, and the court in Kleen Products, LLC v. Packaging Corporation of America pushed these principles forward by granting an order for the defendants to redo their e-discovery with predictive coding. The judge in EORHB, Inc. ordered the parties to use predictive coding or show why it ought not to be used. This wholesale acceptance of predictive coding for e-discovery within the United States has made predictive coding a central concern for litigators (the state of predictive coding is considerably less certain in Canada).
Lawyers have not completely embraced this legal A.I. despite its widespread acceptance by the courts. The Sedona Conference lists three general reasons why litigators have refused to use predictive coding. First, litigators are concerned that "computers cannot be programmed to replace the human intelligence required to make complex determinations on relevance and privilege." Second, lawyers have a perception "that there is a lack of scientific validity of search technologies necessary to defend against a court challenge." Third, litigators have a "widespread lack of knowledge (and confusion) about the capabilities of automated search tools" (the specific concerns the Sedona Conference raises are that predictive coding would miss some documents found in a straight keyword search and that, even with the extra effort it requires, it will still miss problem documents). These three concerns ought to be understood in the context of each other, as the lack of empirical research and information about the structure of predictive coding A.I. contributes to the lack of knowledge that litigators have about this area.
Very few scholars have actually addressed these three concerns because they have chosen to focus almost exclusively on the iterative process rather than studying the iterative process in tandem with the predictive coding algorithms (in the iterative process, an expert litigator takes the documents that the A.I. has coded as either relevant or irrelevant and relays that information back to the A.I. so that, during its next attempt, it can code the documents more accurately and precisely). Nicholas Barry and Baron & Burke both provide an excellent overview of the importance of studying the iterative process that informs predictive coding A.I. However, neither addresses the impact that predictive coding algorithms have on the iterative process and on the predictive coding A.I. itself. Baron & Burke argue that one must "recognize, first and foremost, the importance of the process that manages the task," and later conclude that "a failure to employ a quality e-discovery process can result in failure to uncover or disclose key evidence." By focusing solely on the iterative process in judging quality, Baron & Burke fail to recognize that the predictive coding algorithms also significantly contribute to the quality of the A.I. Later in the paper, they present a list of ways to ensure quality in predictive coding but again focus solely on the iterative process itself with no mention of the algorithms (this list includes judgmental sampling, independent testing, reconciliation techniques, inspection to verify and report discrepancies, and statistical sampling). In his paper, Barry analyzes Baron & Burke's list and argues that statistical sampling is the preferable approach to ensuring the quality of the predictive coding process because it evaluates the quality of the actual results. This argument still misunderstands that judging the results on their own does not comment on the effectiveness of the algorithm, the iterative process, and the predictive coding A.I.
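The iterative process described in the parenthetical above amounts to a feedback loop, which can be sketched as follows. This is a toy illustration only: the "model" is a naive keyword scorer, the documents are invented, and no vendor's actual system is represented.

```python
# Toy sketch of the predictive coding feedback loop: the A.I. suggests codes,
# a lawyer corrects them, and the corrections retrain the model for the next pass.
def suggest(weights, doc):
    return sum(weights.get(w, 0) for w in doc.split()) > 0

def retrain(weights, corrections):
    for doc, relevant in corrections:
        for w in doc.split():
            weights[w] = weights.get(w, 0) + (1 if relevant else -1)
    return weights

weights = {}
docs = ["merger agreement draft", "holiday party invite"]
lawyer_codes = [("merger agreement draft", True), ("holiday party invite", False)]
for _ in range(2):                               # each pass sharpens the codes
    suggestions = [(d, suggest(weights, d)) for d in docs]
    weights = retrain(weights, lawyer_codes)     # lawyer relays corrections back
print([(d, suggest(weights, d)) for d in docs])  # both documents now coded correctly
```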
Empirical research needs to account for both the iterative process and the algorithm to meet the concern "that there is a lack of scientific validity of search technologies necessary to defend against a court challenge." The E-Discovery Institute conducted a wide-ranging survey on predictive coding A.I. to increase vendor transparency. The survey polled a list of predictive coding vendors, and Gallivan Gallivan & O'Melia ("GGO") responded to a question that asked them to describe their process:
“Collect and process records; extract content and placed in a repository, store references in a database. Consolidate duplicates. Extract text or OCR, compare text content to create a similarity vector, store results […] The reviewers identify groups known to be responsive and then we associate other records that are “most like” those records based on the similarity vector. Reviewer decisions define the actual mark of the documents vs. the mark suggested by our system. As new waves of data arrived, they are placed in groups based on similarity vectors generated for that data.”
This quote illustrates that predictive coding is an integration of both the iterative process and the algorithm (i.e. the "similarity vector"). The statistical sampling approach that Baron & Burke and Barry advocated would not actually measure the impact of the similarity vector on the quality of the A.I. Imagine that "y" is a competing predictive coding vendor, and a researcher decides to evaluate the effectiveness of both "y" and GGO. The researcher finds that GGO has a success rate of 65% and "y" has a success rate of 60%. If a litigator saw these results, she would likely choose GGO. However, imagine now that the researcher has access to both vendors' A.I. and has somehow removed the algorithms from each A.I., placing vendor "y"'s algorithm into GGO's A.I. and vice versa. After re-running the test, "y"'s A.I. now has a success rate of 20% and GGO's has a success rate of 85%. These results show that "y" actually had the better algorithm but a slightly less effective iterative process, while GGO had a terrible algorithm (or "similarity vector") but a marginally better iterative process. The litigator might now refuse to use GGO and choose vendor "y" (and ask "y" to slightly modify its iterative process). The original statistical sampling would have led the litigator to choose the wrong A.I. for her client (and not meet the client's best interests) because it did not adequately measure the quality of the algorithm, the iterative process, and the predictive coding A.I.
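GGO's actual "similarity vector" is proprietary, but a generic cosine similarity over term counts is a plausible stand-in for the idea of associating records that are "most like" reviewer-marked ones. The documents below are invented for illustration.

```python
from collections import Counter
from math import sqrt

# Cosine similarity over term counts: one common way to build the kind of
# "similarity vector" the GGO response describes (their method is unknown).
def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

reviewed_responsive = "supply agreement pricing terms"
candidate = "draft supply agreement with revised pricing"
# Records most similar to reviewer-marked responsive ones are suggested first.
print(round(cosine(reviewed_responsive, candidate), 3))  # ~0.612
```

The point of the thought experiment above is that two vendors could share the same iterative process and still diverge wildly in quality because of the vector underneath it.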
Inspecting the predictive coding algorithm will also help address the concern that this type of A.I. "cannot be programmed to replace the human intelligence required to make complex determinations on relevance and privilege." Predictive coding A.I. has indeed had difficulties replacing a lawyer's reasoning process for determining the relevance and privilege of a document. The legal A.I. community has used the case of Popov v Hayashi to address this problem. The court in Popov v Hayashi needed to rule on who possessed a baseball, and it considered cases that dealt with hunting foxes, whales, and ducks to arrive at its decision. Answering this question requires that the judge understand that cases about a fox, a whale, and a baseball can have a high degree of similarity or relevance. Legal A.I. developers have struggled with formalizing this "common-sense reasoning." Franklin devises an argument that visualizes the reasoning that this problem requires:
Premise 1: a has features f1, f2, . . . , fn.
Premise 2: b has features f1, f2, . . . , fn.
Premise 3: a is X in virtue of f1, f2, . . . , fn.
Premise 4: a and b should be treated or classified in the same way with respect to f1, f2, . . . , fn.
Conclusion: b is X.
A lawyer normally provides the information in manual review to support Premises 3 and 4.
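Franklin's schema translates almost literally into code. The hard part, which the lawyer supplies in Premises 3 and 4, is deciding which features are decisive; the feature names below are illustrative only.

```python
# A literal transcription of Franklin's argument schema. The `decisive` set
# encodes Premises 3 and 4: the features in virtue of which a is X, and the
# commitment to classify anything sharing those features the same way.
def classify_by_analogy(a_features, b_features, decisive, label):
    if decisive <= a_features and decisive <= b_features:  # Premises 1 and 2
        return label                                       # Conclusion: b is X
    return None

fox_case = {"quarry pursued", "pursuit incomplete", "quarry escaped control"}
baseball_case = {"quarry pursued", "pursuit incomplete", "crowd interference"}
print(classify_by_analogy(fox_case, baseball_case,
                          {"quarry pursued", "pursuit incomplete"},
                          "no possession"))  # -> "no possession"
```

Encoding Premises 1 and 2 is trivial; everything of value lives in how an algorithm approximates the decisive-feature judgment that a lawyer supplies in manual review.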
Yet the validity and accuracy of predictive coding A.I. depend on how the algorithm deals with this very problem. Looking only at the iterative process will not provide the litigator with a satisfactory answer to this conceptual problem. Discovering how one vendor deals with this problem versus another requires an investigation of the algorithm.
Jason R. Baron and Paul Thompson commented that "one can search in vain through a vast amount of proprietary literature [dealing with predictive coding software] without citation or grounding to AI or IR research." Six years later, the legal community still has not solved this problem. The courts' mandating of predictive coding has forced lawyers to adopt A.I. that they do not understand. Research in this area needs to focus on both the iterative process and the algorithm to provide litigators with this understanding. If a lawyer does not have this research, then she will likely choose a predictive coding vendor that does not best serve her client's needs.
Part 3: Bridging the Gap between Theory and Practice
The case studies from the U.K. and American courts illustrate that legal professionals do not have the necessary understanding of legal A.I. to take full advantage of its benefits. This problem has negatively impacted both the administration of justice and a lawyer's duty to protect her client's interests. To deal with the negative effects of the creep of legal A.I. into the courtroom, the legal profession needs to offer realistic and workable solutions that ensure legal practitioners receive the information they need about legal A.I. to make informed decisions. Training and developing intermediary experts to properly inspect and investigate legal A.I., with grounding in legal A.I. research, would help bridge the information and knowledge gap between theory and practice. Encouraging legal A.I. developers to adopt open source principles would further ensure that intermediaries could properly conduct research into a vendor's legal A.I. and help ensure that judges and lawyers receive the necessary background and information on this complex and foreign area.
Training the Intermediary Expert
Judges and lawyers likely do not have the background, training, or time to conduct the research necessary for a nuanced understanding of legal A.I. Hard-core A.I. and law scholars, for their part, are less concerned with the practical benefits of one vendor's software over another and more concerned with producing the general theoretical principles and formalizations that drive legal A.I. Legal professionals should not need to learn a new language overnight, and A.I. and law scholars should not suddenly need to generalize and dilute their writing for the legal profession. Intermediary scholars versed in both legal A.I. and legal practice would offer a realistic solution and bridge this information and knowledge gap.
Intermediaries of a certain kind already exist. McGinnis & Pearce offer practical insight into areas where legal A.I. will have a significant impact on the legal profession (they identify how general technologies and developments like machine intelligence, IBM's Watson, and Moore's Law (i.e. processing speeds increasing and storage costs decreasing) are transforming discovery and legal analytics). Cooper discusses how courts could use Watson to correctly identify the context and meaning of a word in a piece of legislation (in talking about Watson, she comments that "Watson runs on a cluster of Power 750™ computers—ten racks holding 90 servers, for a total of 2880 processor cores running DeepQA software and storage"). Katz explains how legal A.I. uses structured and semi-structured data, natural language processing, information retrieval, knowledge representation, and machine learning (the remaining parts of his paper provide some concrete examples of areas where A.I. could affect the legal profession, including predicting judicial decisions, patent disputes, securities fraud class actions, and predictive coding). While these writers avoid mathematical and computer science jargon and make these concepts more accessible to legal practitioners, they also fail to properly engage with A.I. and law scholarship. When discussing predictive coding or machine learning, these writers focus almost exclusively on describing general technologies (e.g. big data or decreased memory cost) and then speculate on their potential legal application. This approach ignores the vast amount of legal A.I. research available and prevents these writers from critically analyzing specific legal A.I.
Fenton et al. advocated a radical rethink of how to communicate Bayesian systems to the legal profession. They argued that intermediary scholars or experts need to have legal practitioners accept that they only need to question the prior assumptions that go into the calculations, not the accuracy or validity of the calculations given those assumptions. Even though they were referring only to Bayesian systems, their suggestion provides a good starting place for how to properly communicate legal A.I. to legal professionals. If intermediaries could communicate the presumptions, limitations, and benefits of legal A.I. based on sound A.I. research, they could effectively bridge the information and knowledge gap between the two disciplines. If GGO had released the details of its algorithm, an intermediary versed in legal A.I. could break it down and present its strengths, weaknesses, and presumptions to a legal practitioner. An intermediary could also synthesize the A.I. research on Bayesian systems and present it to judges in a way that focuses on the presumptions, limitations, and benefits of this approach. When more experimental legal A.I. enters the courts, justices could use this information to better anticipate its usefulness.
The legal profession needs to support the development of intermediaries. Law schools can foster a law student’s interest in A.I. by offering courses that provide a mathematical, statistical, and computer science foundation to supplement the development of a law student’s legal reasoning and knowledge. Law faculty specialized in these areas could teach these courses, or professors from mathematical, statistical, or computer science departments could offer courses through the law school. Law societies could offer educational opportunities to lawyers interested in these areas, and law firms and governments could provide professional opportunities for intermediaries, instead of potentially offering these positions to experts in other fields.
Open Source Legal A.I.
A closed approach to legal A.I. would make rigorous analysis and inspection of A.I. very difficult for intermediaries. Opening legal A.I. would provide a solution to this problem. If the legal profession encouraged legal A.I. developers to use open-source technology, intermediaries would have a way to analyze and discuss legal A.I. products in the marketplace (because this paper focuses on A.I. programs, it emphasizes open-source software. However, as legal A.I. architecture also defines the design and development of legal A.I., this paper also encourages open-source hardware, which follows very similar principles to open-source software). Open-source legal A.I. could have a broad definition that covers a range of situations. The Open Source Initiative lists 10 elements that contribute to the degree to which a piece of software is open: free redistribution, open source code, allowing for derived works, protecting the integrity of the author's source code, no discrimination against persons or groups, no discrimination against fields of endeavor, distribution of license, license must not be specific to a product, license must not restrict other software, and license must be technology-neutral.
The legal profession does not need to require that legal A.I. developers satisfy all 10 requirements to achieve the objective of allowing for proper investigation of and inquiry into legal A.I. Not restricting the availability of the source code, allowing for derived works, and allowing for distribution of licenses would be the most relevant to this mandate.
Freeing the source code would allow intermediaries and A.I. and law scholars to see how legal A.I. functions. It would allow them to locate any presumptions that the technology relies on as well as identify its limitations and benefits. Allowing for derived works further supports independent peer review because experts often need to experiment with and modify code to see how it affects the performance of the A.I. Requiring open distribution licenses prevents companies from closing up their A.I. through indirect means like requiring a non-disclosure agreement. If a vendor allows a scholar to inspect and test the software but not publish the results, the purpose of opening up legal A.I. is defeated. While observing the remaining seven principles is not necessary to achieve this objective, the legal profession should revisit them if any issues arise that would prevent intermediaries from inspecting or publishing on legal A.I.
As vendors spend billions of dollars to develop legal A.I., any concentrated push from the legal profession towards open source will likely encounter resistance. The legal community should emphasize that open source technology has the potential to commercially benefit those who develop it, and it should provide any further incentives or information needed to encourage this development (IBM, Sun Microsystems, and Oracle are all prominent examples of companies that have invested in open source software and have had profitable returns). Numerous studies have also demonstrated which types of companies actually benefit from embracing open source software (benefiting from the switch depends on whether the company produces hardware or software, and on whether most of its property is protected by patents or trademarks. The scale and type of development process, and the size and approach of the company, also affect the potential for profitability). They have also provided ways for companies to make the switch. The legal profession could use this information to educate companies on proper ways to develop and plan for adopting open source principles. While opening up legal A.I. depends on a variety of factors (these factors include organizational and business structures, the form of A.I. developed, and the type of intellectual property governing it), scholarship on open-source software clearly shows that it is possible for companies to make this change and gain significant social and economic value from the transformation. This knowledge should provide sufficient motivation for the legal profession to investigate the feasibility of its implementation.
Conclusion
Legal A.I. does not try to think or act like a lawyer; instead, it thinks or acts rationally to achieve the best outcome for the situation. The nature of legal A.I. requires developers to adopt different approaches to formalization, including symbolic logic trees, machine learning algorithms, and Bayesian systems. As legal A.I. becomes more sophisticated and enters the courts, judges must rule on its applicability in the legal system. Their lack of knowledge about legal A.I. has led to problematic decisions. A U.K. Court of Appeal justice disallowed forensic evidence experts from using Bayesian systems, undermining the development of these networks for other legal A.I. purposes. A string of decisions in the United States accepted predictive coding but did not ensure that lawyers had the information available to properly use the A.I. in the best interests of their clients. To deal with this lack of understanding, the legal profession could develop intermediary scholars or experts to bridge the information and knowledge gap between the theory and practice of legal A.I. It could also push legal A.I. developers to open up their A.I. to encourage the proper investigation and study of their products.
The past two decades have seen a niche discipline composed of mathematicians, statisticians, philosophers (and the odd lawyer) creep into the courts and the average legal professional's life. Judges must now rule on once obscure concepts like Bayesian systems, and litigators must understand machine learning algorithms to properly meet their clients' needs. McGinnis & Pearce argue that this disconnect will cause a disruption in the legal profession akin to the one that journalism has undergone. The Canadian Bar Association proclaims that "with technology, clients will increasingly realize that lawyers are not essential for all questions touching the law — in some respects, they are fungible." William Caraher and Cooper argue that IBM's Watson will replace lawyers and judges. These events force the legal profession to decide how to deal with the rise of legal A.I. If the profession resists the advances of legal A.I., it may encounter a grim dystopia. If it embraces legal A.I.'s ever-expanding horizon, it may welcome a bold new world.
Bibliography
JURISPRUDENCE: UNITED STATES
EORHB, Inc. v. HOA Holdings, C.A. No. 7409-VCL (Del. Ch. Oct. 15, 2012)
Order Approving the Use of Predictive Coding for Discovery, Global Aerospace, Inc. v. Landow Aviation, L.P., Consolidated Case No. CL61040 (Loudoun Cnty., Va. Apr. 23, 2012)
Kleen Products v. Packaging Corp. of America, 2012 U.S. Dist. LEXI 139632 (ND Ill. Sep. 28, 2012)
Monique Da Silva Moore et al. v. Publicis Groupe & MSL Group, 868 FRD (2d) 137 (SD NY 2012)
Popov v. Hayashi, 2002 WL 31833731 (Ca. Sup. Ct. 2002)
JURISPRUDENCE: UNITED KINGDOM
R v T [2010] EWCA Crim 2439, All ER (D) 240
SECONDARY MATERIALS: ARTICLES
Alexy, Oliver, Henkel, Joachim & Wallin, Martin W. “From closed to open: Job role changes, individual predispositions, and the adoption of commercial open source software development” (2013) 42:8 Research Policy at 1325-1340.
Baron, Jason R & Burke, Macyl A, eds, "The Sedona Conference commentary on achieving quality in the e-discovery process: A project of the Sedona Conference" (2009) 10 Sedona Conference J at 299.
Barry, Nicholas "Man Versus Machine Review: The Showdown between Hordes of Discovery Lawyers and a Computer- Utilizing Predictive- Coding Technology" (2013) 15:2 Virginia J of Ent & TL at 343.
Bench-Capon, Trevor et al, "A History of AI and Law in 50 Papers: 25 Years of the International Conference on AI and Law" (2012) 20:3 AI & L at 1.
Berger, Charles E.H. et al, "Evidence evaluation: A response to the court of appeal judgment in R v T" (2011) 51:2 Science & Justice at 43.
Caraher, William “Is this computer coming for your job? IBM's Watson supercomputer shows tremendous potential to revolutionize the legal profession” (2015) Nat’l LJ at 16.
Cooper, Betsy, “Judges in Jeopardy!: Could IBM’s Watson Beat Courts at their Own Game?” (2011) 121 Yale LJ at 87.
Fenton, Norman et al, "When ‘neutral’ evidence still has probative value (with implications from the Barry George Case)"(2014), 50:4 Science and Justice at 274.
Fosfuri, Andrea, Giarratana, Marco S. & Luzzi, Alessandra “The Penguin Has Entered the Building: The Commercialization of Open Source Software Products” (2008) 19: 2 Organization Science at 292.
Franklin, James, "Discussion Paper: How Much of Commonsense and Legal Reasoning is Formalizable? A Review of Conceptual Obstacles," (2012) 11:2-3 L, Prob & Risk at 225.
Kacsuk, Zsófia, "The Mathematics of Patent Claim Analysis," (2011) 19:4 AI & L at 263.
Katz, Daniel Martin. "Quantitative Legal Prediction - Or - how I Learned to Stop Worrying and Start Preparing for the Data- Driven Future of the Legal Services Industry. (Innovation for the Modern Era: Law, Policy, and Legal Practice in a Changing World)," (2013) 62:4 Emory LJ at 909.
Kilamo, Terhi, et al. “From proprietary to open source—Growing an open source ecosystem” (2012) 85:7 The Journal of Systems & Software at 1467.
Lee, Changyong; Bomi Song & Yongtae Park, "How to Assess Patent Infringement Risks: A Semantic Patent Claim Analysis using Dependency Relationships," (2013) Technology Analysis & Strategic Management 25:1 at 23.
McGinnis, John O. & Russell G. Pearce, "The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services.(Colloquium: The Legal Profession's Monopoly on the Practice of Law)," (2014) 82:6 Fordham L Rev at 3041.
Možina, Martin et al, "Argument Based Machine Learning Applied to Law," (2005) 13:1 AI & L at 53.
Murphy, Tonia Hap, "Mandating use of Predictive Coding in Electronic Discovery: An Ill‐ Advised Judicial Intrusion," (2013) 50:3 Am Bus LJ at 609.
Polanski, Arnold “Is the General Public Licence a Rational Choice?” (2007) 55:4 The Journal of Industrial Economics, at 691.
Schweighofer, Erich, "Designing text retrieval systems for conceptual searching", Comment, (2012) 20:3 AI & L at 9.
Vlek Charlotte S. et al “Building Bayesian networks for legal evidence with narratives: A case study evaluation” (2014) 22:4 AI & L at 375.
SECONDARY MATERIALS: CONFERENCE PAPERS
Baron, Jason R & Thompson, Paul, "The search problem posed by large heterogeneous data sets in litigation: possible future approaches to research" (Paper delivered at the Eleventh International Conference on Artificial Intelligence and Law, 2007). New York: ACM Press, 2007 at 42.
Belew, Richard K, "A connectionist approach to conceptual information retrieval" (Paper delivered at the First International Conference on Artificial Intelligence and Law, 1987). New York: ACM Press, 1987 at 116.
Bing, Jon, "Designing text retrieval systems for conceptual searching" (Paper delivered at the First International Conference on Artificial Intelligence and Law, 1987). New York: ACM Press, 1987 at 51.
Hafner, Carole D, "Conceptual organization of case law knowledge bases" (Paper delivered at the First International Conference on Artificial Intelligence and Law, 1987). New York: ACM Press, 1987 at 35.
SECONDARY MATERIALS: MONOGRAPHS
Burke et al, E-discovery in Canada, 2nd ed (Markham, Ont: LexisNexis, 2008).
Kaptein, Hendrik, Prakken, Henry & Verheij, Bart, Legal Evidence and Proof: Statistics, Stories, Logic (Applied Legal Philosophy) (Burlington, VT: Ashgate, 2009).
Russell Stuart J. & Norvig Peter, Artificial Intelligence: A Modern Approach, 3rd ed. (Upper Saddle River, N.J.: Prentice Hall, 2010).
SECONDARY MATERIALS: ONLINE SOURCES
Canadian Bar Association, “Futures: Transforming the Delivery of Legal Services in Canada” (August 2014) online: <http://www.cbafutures.org/CBA/media/mediafiles/PDF/Reports/Futures- Final-eng.pdf?ext=.pdf>
E-Discovery Institute, eDiscovery Institute Survey on Predictive Coding (1 October 2010) at 6-10, online: <http://www.ediscoveryinstitute.org/images/uploaded/272.pdf>
Hebert, Sarah et al, "Bayesian Network Theory" (29 November 2007), The Michigan Chemical Process Dynamics and Controls Open Text Book, online: <https://controls.engin.umich.edu/wiki/index.php/Bayesian_network_theory>
The Open Source Initiative, "The Open Source Definition" (22 March 2007), online: <https://opensource.org/osd> [Open Source Initiative].