“It is better that ten guilty persons escape than that one innocent suffer.”
Considering the present situation, it is still unclear how wide a spectrum AI will occupy in the courtroom. Even so, the United States, one of the most powerful countries in the world, has already integrated AI into its judicial system. AI has been touching every industry, and its footprint is increasingly visible as more and more companies go digital or reshape their business strategies around it. Private and public firms engaged in law and order are taking similar initiatives, but the real question is whether government bodies, such as an actual courtroom, should take help from legal tech. This article is meant to shed light on the risks and achievements of involving tech in legal work.
Convincing Achievements of Legal Tech
In February 2020, during an annual session held by the New York State Bar Association, the members discussed the possibilities of integrating AI into the courtroom. One scenario considered was that involving technology in courtroom proceedings could help make decisions faster in cases that are almost identical to previous cases with near-identical outcomes. Legal tech was expected to be able to manage such cases, and with time its growth is foreseen only to strengthen. Given below are various ways in which AI in the courtroom is already practiced:
Courts using AI currently rely on algorithms meant to assess a defendant's risk. The algorithm predicts the probability that a defendant will commit another crime during or after the trial period if released from prison. The legal-tech software in question is COMPAS, which helps U.S. courts measure a defendant's risk of committing a crime again. Wikipedia defines it as:
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a case management and decision support tool developed and owned by Northpointe (now Equivant) used by U.S. courts to assess the likelihood of a defendant becoming a recidivist.
Supporters of legal tech have opined that integrating AI into the courtroom can help ensure proper workflow and case management amid the piles of pending cases. Be it arranging dates, sending date reminders, or preparing a cause list, legal tech is continuously growing and helping professionals make their work lives easier with software that can manage some of their work for them.
Legal tech experts say that once AI-based bots or algorithms are used to speed up case processing, experts such as judges and juries can consult these bots for a probable decision on a case. The system can be trained on a dataset of previous cases, which provides the ground-level learning for the machine and thereby contributes to decision-making.
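The idea of training on prior cases can be sketched very roughly. The following is a minimal, entirely hypothetical illustration (synthetic case records and made-up features, not any real system's method): a nearest-neighbour lookup that suggests an outcome by matching a new case against the most similar past ones.

```python
# Hypothetical past cases: (offence_severity 1-10, prior_convictions, outcome).
# All data here is invented purely for illustration.
past_cases = [
    (2, 0, "probation"),
    (3, 1, "probation"),
    (7, 4, "custodial"),
    (8, 5, "custodial"),
    (5, 2, "fine"),
]

def suggest_outcome(severity, priors, k=3):
    """Return the most common outcome among the k most similar past cases."""
    ranked = sorted(
        past_cases,
        key=lambda c: (c[0] - severity) ** 2 + (c[1] - priors) ** 2,
    )
    nearest = [outcome for _, _, outcome in ranked[:k]]
    return max(set(nearest), key=nearest.count)

print(suggest_outcome(7, 5))  # matches the custodial cluster -> "custodial"
```

Even this toy version makes the core limitation visible: the suggestion is only ever as good as the past cases it was given, which is exactly the concern raised in the risks below.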
Risks Attached to Legal Tech
Is legal tech reliable enough that decisions made by an AI system can be approved without any human interference? As of now, the answer is an absolute no. The reason is simple: AI is not human. Every human is different, and while AI is continuously evolving, it is still not capable of understanding the intricacies of human nature; not every case can be judged on the same parameters. Furthermore, there are more points that question the reliability of legal tech:
AI bots that help recommend case outcomes will have to be trained on a certain dataset. Suppose that in the past, owing to case complexity, some decisions departed from the usual sentence. What happens in such a case? Whose probability will be held accountable?
It is socially and practically established through research that humans generally hold biases toward one another. Since the training data will be prepared from human data points, will the AI inherit human bias as well?
Is there someone who can be held accountable for this software or these bots? Can we pinpoint a particular person responsible for preventing any mishap or unfair judgment?
Should we start considering AI rights? This does not mean giving voting rights or other human rights to AI, but rather: should AI testimony be treated as witness testimony? Can these systems be labeled "expert witnesses"?
To further explain the point, consider the U.S. legal system's adoption of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). Courts use the defendant-risk algorithm in COMPAS to speed up the decision-making process; as explained above, such algorithms estimate the probability of a criminal committing a crime again. Despite measures intended to guard against racial bias, it was recorded that:
"blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend," whereas COMPAS "makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower-risk but go on to commit other crimes." They also found that only 20 percent of people predicted to commit violent crimes actually went on to do so.
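The two mistakes the quote describes can be stated precisely as two error rates: the false positive rate (labeled high risk but did not re-offend) and the false negative rate (labeled low risk but did re-offend). The sketch below computes both for each group from a handful of entirely synthetic records (not COMPAS data), constructed so that group A gets more false positives and group B more false negatives, mirroring the pattern reported above.

```python
# Synthetic records, invented for illustration only.
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("A", False, True),
    ("B", True,  False), ("B", False, True), ("B", False, True),
    ("B", False, False), ("B", False, False),
]

def error_rates(group):
    """Return (false positive rate, false negative rate) for one group."""
    rows = [r for r in records if r[0] == group]
    non_reoffenders = [r for r in rows if not r[2]]
    reoffenders = [r for r in rows if r[2]]
    # FPR: share of non-reoffenders who were still labeled high risk.
    fpr = sum(r[1] for r in non_reoffenders) / len(non_reoffenders)
    # FNR: share of reoffenders who were labeled low risk anyway.
    fnr = sum(not r[1] for r in reoffenders) / len(reoffenders)
    return fpr, fnr

for g in ("A", "B"):
    fpr, fnr = error_rates(g)
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

The point of the exercise: a model can look "accurate" overall while distributing its two kinds of error very unevenly across groups, which is exactly the fairness failure ProPublica's analysis documented.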
From the above instance, it can be inferred that ethics were neglected in the effort to make life more comfortable through legal tech.
There are instances where the mere presence of legal tech has been questioned, but it still cannot be denied that in certain instances it has made the whole legal process better: faster, and more cost- and time-effective. By ensuring there are no delays in the workflow of judges, juries, or lawyers, and by assisting with the management of trial dates, AI has proven its potential in legal tech. At the same time, it becomes essential to state that placing AI alongside the judges and jury as a decision-maker in a courtroom would be highly unethical.