Artificial intelligence (AI) drives value through business outcomes such as improved customer experience, cost reduction and revenue generation. Successful CIOs understand that AI applications are more than technical, tactical projects, and that AI as a technological capability can open up new opportunities and help achieve business goals.

The use of artificial intelligence (AI) and machine learning is already changing market practice and service delivery in parts of the legal sector. However, the role of efficiency-driven, AI-powered solutions, and of the legal tech space more broadly, is still evolving within the legal industry.

In the justice system specifically, AI has the potential to radically influence the way criminal and civil proceedings are heard and decided. There remain, however, many questions about its eventual application, and a clear need to consider the ethical implications of using such technology.

A Commission was set up to examine the use of algorithms in the justice system in England and Wales, and to consider what controls, if any, are needed to protect human rights and trust in that system.

As the legal industry increasingly engages with efficiency-driven solutions, including AI and machine learning, what controls, if any, are needed to ensure that trust and basic human rights are protected in the justice system?

I think it’s helpful to distinguish between two kinds of issues.

Within customer-facing solutions, we are going to see an explosion of ‘legal apps’. There will be cases (think parking fines) where there is little downside to the vital increase in affordability and accessibility that automation brings, provided transparency, accountability and privacy are safeguarded. Yet such clear-cut cases of unproblematic automation are not that common. Laudable as it may be, the drive to democratise legal expertise by distilling it into mass-market, problem-solving apps can conceal issues that demand human input. As an example, an app that allows those who have recently been dismissed from their job to exercise their right to severance pay (which may be opaque because of complex legislation) is commendable. Yet without a proactive referral system, such an app would fail its users. The vulnerability that comes with finding oneself jobless cannot be addressed by algorithms, no matter how much empathy such apps may be able to display.

At a larger scale, there is a risk that a focus on efficiency, for instance through increasingly performant prediction tools, will make us blind to the fact that increased automation is changing the very nature of the legal system. Given their impressive accuracy, it is highly likely that lawyers will increasingly rely on prediction tools to advise clients on whether their claim is worth pursuing. This may seem like a welcome innovation, except that it will insidiously contribute to a growing degree of conservatism, since cases with a low predicted chance of success are unlikely to be heard in court. This in turn makes organic change within case law less likely: shifts in case law often depend upon an accumulation of previous, unsuccessful cases that trigger a growing number of dissenting voices (both within and outside the judiciary). There may be ways of developing tools that predict not only the chances of success in court but also the likelihood that a particular case will eventually contribute to some organic evolution within case law, yet the commercial incentives for both developing and using such tools will be low.
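To make the dynamic described above concrete, the following is a minimal, purely illustrative sketch, in Python with scikit-learn, of the kind of case-outcome prediction tool referred to here. The features, training data, model choice and advice threshold are all hypothetical stand-ins; no real product or dataset is being described.

```python
# Illustrative sketch only: a toy case-outcome predictor of the kind
# discussed above. All data, features and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical past cases, each encoded as numeric features
# (e.g. claim type, jurisdiction, claim value, evidence strength),
# with 1 = claim succeeded in court, 0 = claim failed.
past_cases = np.array([
    [1, 0, 5_000, 2],
    [0, 1, 120_000, 4],
    [1, 1, 30_000, 1],
    [0, 0, 80_000, 5],
])
outcomes = np.array([1, 1, 0, 1])

model = LogisticRegression().fit(past_cases, outcomes)

# Predicted probability of success for a new, hypothetical claim.
new_claim = np.array([[1, 0, 15_000, 3]])
p_success = model.predict_proba(new_claim)[0, 1]

# The policy the article warns about: advice is mechanically tied to the
# predicted probability, so low-probability claims are filtered out
# before they can ever be tested in court.
ADVISE_THRESHOLD = 0.5
print(f"Predicted success probability: {p_success:.2f}")
print("Advice:", "pursue" if p_success >= ADVISE_THRESHOLD else "do not pursue")
```

The point of the sketch is the final two lines: once the decision to litigate is routinely delegated to a probability threshold, the unlikely-but-important cases that drive organic change in case law are the first to be screened out.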

How do you envisage AI impacting the legal profession and the role of lawyers in the next five years?

There is little doubt that advances in computer systems will play an essential role within the legal profession, and that this could transform it for the better. Automated document management (and discovery) is already becoming commonplace, saving lawyers a lot of dull work hours, but we are still a long way from harnessing the full potential of the data now available. Everything hangs on exactly how we harness that potential: whether we allow an instrumentalist logic to take over, or whether the aims that preside over such data mining reflect what we want the law for.

In terms of future roles for lawyers, again, there is no doubt that the nature of that role will change. In many areas, we probably won’t need quite as many lawyers. What few people realise, however, is just how urgently we need lawyers trained in data governance. I believe that in the next five years we will see an increasing need for lawyers acting as intermediaries between data subjects and data controllers (both in GDPR countries and elsewhere). Law schools need to get their act together and urgently train future lawyers in data governance, ideally within the context of interdisciplinary degrees. We do need lawyers with at least minimal training in statistics and computer science.

I believe this turns the case for wholesale automation on its head. One can no longer assume that, as a rule, wholesale automation is legitimate, provided it improves the quality and accessibility of legal services. The assumption, instead, is firmly in favour of designing systems that better enable legal professionals to live up to their specific responsibilities.


"Loved reading this piece by Rama chary Rachakonda?
Join LAWyersClubIndia's network for daily News Updates, Judgment Summaries, Articles, Forum Threads, Online Law Courses, and MUCH MORE!!"






Tags :


Category Others, Other Articles by - Rama chary Rachakonda 



Comments


update