
The Future of Law: Automation or Augmentation?

AI & Human Creativity

10 December 2021 | 16 MINS

What will the future of law look like in the Fourth Industrial Revolution? Rónán Kennedy delves into the likely changes we’ll see on the automation front, and calls for the addition of digital literacy to the current menu of legal skills.

Imagine that by some magic, we transported a doctor from a hospital in Dickens’ time to a present-day infirmary. The new arrival would be confounded and bedazzled by all of the digital technology flashing and beeping around her. Now imagine that we transported a lawyer from the same time period to a modern court. Although there would be some screens here and there (and in an Irish courtroom, a large time display for the Digital Audio Recording system), the nature of the work and the tools used would largely be familiar: paper documents, bound law reports, and large textbooks abound.

However, that is all changing and, in a few years, our hypothetical time-travelling lawyer might be as much at sea in a courtroom as a Victorian doctor in a modern clinical setting. Indeed, in the last two decades, if he had visited a lawyer’s office, he would have found wondrous new devices that allow documents to be viewed and dispatched long distances in an instant, and enable immediate access to vast volumes of legal texts. If he had the misfortune to visit from the past during the peak of the pandemic, he would have found no-one but a registrar in the courtroom, with the proceedings taking place on a giant screen and judges sitting remotely. The ‘IT revolution’ has finally reached one of our most conservative professions, the law. Will the process of innovation that it has enabled be positive or negative?

I tell my students that the right answer to any question about the law is ‘it depends’, and that also applies to whether we should welcome or fear ‘lawtech’. Consumers and businesses may be glad to see reduced costs and more responsive services. Vested interests might be less keen on seeing profit margins eroded. The public generally should watch carefully to ensure that professional standards are maintained and that fundamental values are not lost in the transition. The outcomes of technological change are rarely determined in advance, and a great deal depends on the choices that we make.

Legal costs in Ireland are often criticised for being too high (although the Law Society of Ireland takes a different view). Information technology can help reduce them, particularly as new ‘artificial intelligence’ tools appear that can help lawyers provide advice, draft documents, and analyse contracts. However, some large-scale projects have been high-profile failures, such as the Law Society’s initiative to develop an electronic conveyancing system. And as we create software that can read documents, we also generate more and more text for those same tools to sift through.

Lawtech could also allow individuals to access cheaper legal services in a more convenient way – for example, by using a chatbot on a website for an initial consultation or to draft something routine like a will at a time that suits them, and then finalising the document in a video meeting with their solicitor. However, research indicates that cost is often not the main barrier to accessing services – many people put off making a will because they do not wish to contemplate their own mortality. Those on the wrong side of the ‘digital divide’ should not be prevented from accessing legal services because they do not have the skills, an appropriate device, or broadband.

Remote hearings have been an essential aspect of the courts’ response to the pandemic, allowing the system to continue to function, even at a reduced speed. Although they work well for routine and short matters, judges and lawyers prefer face-to-face sessions. Remote hearings are clearly inappropriate for criminal matters, as defendants find it even harder to engage properly with their lawyers or to figure out what is going on.

Of even more concern is the growing use of AI tools to assist judges in decision-making, particularly in bail and sentencing. Such tools are prevalent in some legal systems, such as the USA and China. Their proponents argue that they are fairer and more transparent. However, evidence indicates that they ‘bake in’ existing biases and prejudices. Many AI systems are so-called ‘machine learning’ tools. A close examination that gets past the marketing hype will discover that these are no more than statistical models which rely on the past to predict the future.
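To make that point concrete, consider a minimal, purely illustrative sketch in Python. The data, group labels, and ‘model’ below are invented for illustration and are not drawn from any real system: a tool that simply learns the historical rate at which bail was refused for each group will faithfully reproduce whatever skew exists in those past decisions.

# A deliberately tiny illustration: a 'risk model' that learns the historical
# rate at which bail was refused for each group and predicts from that rate.
# If the historical decisions were biased, the predictions reproduce the bias.
# All data here is invented.

from collections import defaultdict

# Hypothetical past bail decisions: (group, bail_refused)
history = [
    ("group_a", False), ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", True),
]

# 'Training': count refusals and totals per group from past outcomes.
counts = defaultdict(lambda: [0, 0])  # group -> [refusals, total]
for group, refused in history:
    counts[group][0] += int(refused)
    counts[group][1] += 1

def predicted_risk(group):
    """Return the learned probability that bail is refused for this group."""
    refusals, total = counts[group]
    return refusals / total

for group in ("group_a", "group_b"):
    print(group, "predicted refusal risk:", round(predicted_risk(group), 2))
# Prints 0.25 for group_a and 0.75 for group_b: the historical skew,
# now presented as an apparently objective 'risk score'.

Real systems use many more variables and far more sophisticated statistics, but the underlying logic is the same: past decisions, with all their flaws, are projected forward as predictions.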


Such AI tools are not ‘thinking’ beings, and the approach to problem-solving that they rely upon is very different to how humans think. They are undoubtedly effective in some specific contexts, but while we might tolerate an AI recommender that suggests a streaming show we don’t enjoy, we should be much more cautious about an algorithm that guides a judge to deny an individual their liberty. We should prohibit them altogether if it transpires that those recommendations simply repeat historical patterns of bias against particular ethnic or social groups. It is also important to remember that experiments have shown that when the same set of rules is given to different teams of software developers to automate, the resulting systems differ from each other in ways that matter.

If lawyers, and particularly judges, rely more and more on software, and particularly AI, to assist them in their daily work, this (often invisibly) shifts the power in decision-making away from the legal profession and towards the programmer and the systems architect. New professions are developing and may need regulation to ensure that ethical standards are met. The public should be confident that the website chatbot which advises them on routine matters such as planning permission, income tax, or family law has been developed by individuals who have appropriate education and training, who carry proper liability insurance, and who understand how their services might cause harm. They should also be able to trust that their lawyer is not relying solely on computer advice (which may have significant blind spots), particularly if failure to argue a particular line of defence could result in their losing money, going to jail, or being deported.

The European Union is developing an ‘AI Regulation’, which will limit the use of certain ‘high-risk’ AI systems, but we may also need to impose training, certification, and ongoing membership requirements on those designing and building AI systems, as we already do for architects and engineers, for example. However, regulation should not prevent innovation where it can bring clear benefits to consumers and businesses, and justified concerns about unscrupulous or naïve entrepreneurs should not be an excuse for lawyers to stifle competition. Nor should the very real difficulties in developing AI that does not reproduce existing social problems halt experiments by courts in online dispute resolution, particularly for small claims.

Discussion of lawtech often predicts ‘robot lawyers’ and ‘robot judges’. (Sometimes it seems that if lawyers were all replaced by machines, we would not be missed.) These predictions are mistaken in at least two ways. First, judges and lawyers do a great deal more than simply give legal advice or hand down judgments – they also manage the courts, provide commercial guidance, and support their clients in many ways. Second, and more importantly, AI tools are not ‘general intelligence’ and probably never will be. They can be very effective and efficient in certain limited domains, but they are not sentient, they will often fail in ways that are very different to human mistakes, and they do not recover from failure as humans can.

The lawyer of the future will probably rely on these tools to analyse and draft documents, locate relevant legal authorities, and advise clients on the likelihood of positive or negative outcomes if they go to court. A good lawyer, however, will have a keen understanding of the strengths and weaknesses of AI (in the same way that they will know the abilities and limits of the people on their team) and will know how to marshal these tools in a way that works well and makes sense in a particular context, rather than applying them blindly in a ‘one size fits all’ manner.

The future of the law is not automation but augmentation – using machines to supplement and assist human thinking rather than to replace it: for example, software tools that can search vast databases of legal text or predict the outcome of litigation from past experience more accurately than a person can. This is in many ways no different to the ways that lawyers have used technologies to expand the limits of the mind in the past: writing down legal texts rather than relying on memory, printing them so that they can be quickly distributed, and indexing them so that relevant material can be easily located. Properly managing this transition will require the addition of critical digital literacy to the essential skills that we expect of a law student. NUI Galway is including this and other modern requirements in ground-breaking modules such as Understanding the Law and Law and Innovation, but there is still a great deal more to do in order to fully engage with the new potential and dangers of so-called AI.

This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number 19/PSF/7665. It is based on Oireachtas Library & Research Service, 2021, L&RS Spotlight: Algorithms, Big Data and Artificial Intelligence in the Irish Legal Services Market.

Profiles

Rónán Kennedy
