Machine arbitrators: science-fiction or imminent reality?

December 2018  |  SPECIAL REPORT: INTERNATIONAL DISPUTE RESOLUTION

Financier Worldwide Magazine



Fifty years ago law firms looked nothing like they do today. Arbitration, however, has not experienced the same changes. Is it time for a calculated overhaul?

The technological revolution that gradually seeped into legal practice is now moving towards arbitration, despite conservative practitioners and the perception that arbitration lies beyond the reach of artificial intelligence (AI).

Some AI is already outperforming humans. In a challenge against 100 lawyers, Case Cruncher Alpha predicted the outcomes of financial ombudsman cases 20.3 percent more accurately. Kira analyses contracts 20 to 60 percent faster than the average lawyer. IBM's Watson diagnoses cancers 40 percent more accurately than doctors. Built on Watson technology, Ross is billed as the first 'artificially intelligent lawyer': it digests vast databases, registers queries, conducts research, generates hypotheses and responses, and backs up its reasoning with references and citations.

Frey and Osborne of Oxford University listed judges and magistrates at medium risk (40 percent) of automation. However, we are unlikely to have passed the lag phase of technological penetration of legal practice. Historically, decades lapse and sharp decreases in price occur before nascent technology penetrates social consciousness and permeates the workplace – e.g., videoconferencing technology, email, mobile phones and personal computers.

The question remains as to the extent to which machines are capable of replacing arbitrators. It should be borne in mind that the advent of arbitration spurred opposition on public policy grounds. Today, it is a judicially recognised means of alternative dispute resolution (ADR). Are familiarity and acceptance the next steps awaiting machine arbitrators? We allow AI-powered vehicles to drive us around. Why not let AI adjudicate for us?

The extent to which the arbitral framework permits new technologies in proceedings is undoubtedly on practitioners' minds. More important is the extent to which the AI-induced disruption of the 21st century job market will revolutionise the arbitrator's role. Are their jobs safe?

Article 19(1) of the UNCITRAL Model Law on International Commercial Arbitration (Model Law) states that: “subject to the provisions of this Law, the parties are free to agree on the procedure to be followed […]”. Furthermore, Article 19(2) of the Model Law states that “failing such agreement, the arbitral tribunal may […] conduct the arbitration in such a manner as it considers appropriate”. Article 14(4)(ii) of the London Court of International Arbitration (LCIA) Rules provides that “the Tribunal has a general duty to adopt procedures suitable to the circumstances of the arbitration, avoiding unnecessary delay and expense, so as to provide a fair, efficient and expeditious means for the final resolution of the parties’ dispute”.

Therefore, the contractual basis of arbitration and institutional rules confer operational freedom, from the decision to arbitrate to procedural matters, that does not impede the proliferation of new technologies in proceedings.

Cue the disruption.

According to Lucas Bento, chair of the AI & International Law Subcommittee at the New York City Bar Association, “both international arbitration and AI are leading alternatives to the status quo – IA to traditional dispute resolution, AI to traditional methods of production. The former promotes freedom from the judiciary, the latter freedom from cognitive limitations”. AI stands to enhance arbitral proceedings in myriad ways, provided practitioners adequately strike the balance between machine and human input and harness the added value conferred by AI to potentiate the arbitrator role.

In a 2015 survey conducted by White & Case in partnership with Queen Mary University of London's School of International Arbitration, 46 percent of respondents felt practitioners should make better use of technology to save time and costs. That percentage is likely to be even greater today.

Two words summarise the impact of digital technologies on arbitration: efficiency and accuracy. Exploiting the added value of technology would yield a more favourable monetary, temporal and labour ratio, thereby addressing the inefficiencies reported. In short, technology could enhance representation and adjudication services, as well as provide further institutional services and insights.

Subsets of AI such as natural language processing (NLP) enhance cognitive capabilities and perform automatable tasks in a fraction of the time. They assist in analysing submissions, documents, agreements and lengthy awards with negligible margins of error. Moreover, NLP can extract meaning from written and oral materials, and text-mining can produce summaries of documentation and material findings. Furthermore, machine learning and Big Data technology can predict costs, duration and merits of arbitrating, and propose settlement ranges. Effectively, AI is touted to assist pervasively from drafting suggestions and safeguarding against exposure, to appointing arbitrators and preparing awards.
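By way of illustration only, the predictive capability described above can be sketched as a toy model, here assuming Python's scikit-learn library. The one-line case summaries and outcome labels below are invented placeholders, not real case data, and a production system would train on thousands of documents rather than a handful:

```python
# Illustrative sketch: predicting the likely prevailing party from a short
# case summary, using TF-IDF text features and a linear classifier.
# All texts and labels here are invented for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical one-line case summaries, each labelled with its outcome.
texts = [
    "claimant alleges breach of delivery deadline under supply contract",
    "respondent failed to pay agreed licence fees despite repeated notice",
    "claimant seeks damages for defective goods rejected on inspection",
    "tribunal finds no evidence of breach and contract performed as agreed",
    "respondent shows force majeure excused the delay and claim dismissed",
    "claim time-barred under the agreed limitation clause and dismissed",
]
labels = ["claimant", "claimant", "claimant",
          "respondent", "respondent", "respondent"]

# Pipeline: convert each summary to a TF-IDF vector, then fit a
# logistic regression classifier on the labelled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict the likely prevailing party for a new, unseen summary.
prediction = model.predict(
    ["claimant alleges breach of contract for unpaid licence fees"]
)[0]
print(prediction)
```

The same pipeline shape underlies many of the prediction tools marketed to practitioners; what differs in practice is the scale and quality of the training data and the richness of the features extracted from submissions and awards.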

So far, so good. But is the usurpation of the arbitrator role an imminent consequence? At what point does automation hinder the intuitive administration of justice? Many national laws impliedly exclude machine arbitrators by referring to arbitrators as 'people'. For instance, the Peruvian Arbitration Act and the French Code of Civil Procedure provide that an arbitrator must be a natural person in full capacity to exercise his or her civil rights. Moreover, the Brazilian Arbitration Act refers to the name, profession and domicile of the arbitrator, and the UK Arbitration Act provides for an arbitrator's removal on grounds of physical or mental incapacity.

The same is not true of all jurisdictions. Ecuador, Mexico and Chile are among those that do not refer to arbitrators as natural persons. Importantly, the UNCITRAL Model Law does not address the issue, leaving numerous jurisdictions within the loophole.

However, in addition to confidentiality, due process, bias, regulatory and public policy concerns, machines lack the sociological imprint, emotional sensitivity and underlying metacognition intrinsic to decision making and the performance of the arbitrator role. But how important a role do emotions play in decision making?

We all remember one situation or another when we were told to keep a cool head and take emotions out of the equation: that logic should prevail in decision making. We may have been lied to. Compelling scientific evidence elucidates the role of emotions in decision making and the consequences of their absence.

Portuguese-American neuroscientist Antonio Damasio studied patients with injuries to the parts of the limbic system responsible for integrating emotion and cognition. They exhibited an impairment in their ability to feel emotions and, consequently, in their decision making. Curiously, despite their cognitive perception that the decision they were making was not the most advantageous, measures of electrodermal activity demonstrated that a lack of emotional signals prevented them from altering their behaviour to their benefit. The absence of emotion-induced signals prevented an adaptive response. There exists a link in the brain between emotion and reason that is paramount to decision making.

Therefore, Herbert Simon correctly postulated that: "to have anything like a complete theory of human rationality [which we now purport to be able to delegate to machines], we have to understand what role emotion plays in it". But to what extent, if any, are we able to impart emotional perception to machine software? Scholars such as David Hume have theorised that emotion is a precursor of reason and that "reason is, and ought only to be, the slave of the passions, and can never pretend to any other office than to serve and obey them". Perhaps in order to trust machines to make reasoned decisions, we must be capable of programming emotional consciousness. If we share in this conceptualisation, the appointment of machines as arbitrators seems distant.

Moreover, there exists a school of thought that it takes emotions to recognise emotions, and that particular emotions motivate specific responsive actions. It flows from the work of Frijda, Solomon and Maroney that anger, for example, is instrumental to adjudicators: it motivates one to engage, signalling that something significant is afoot, and it fosters the desire to redress injustice and a predisposition to alter the state of affairs.

Nappert and Flader emphasise the need for adjudicators to withdraw their own emotions but employ emotional sensitivity to perceive the parties’ emotions and comprehend the situation prior to the dispute.

Conclusion

Institutional rules confer operational freedom and do not bar AI in arbitral proceedings. There are myriad advantages of incorporating AI selectively in arbitration. Nevertheless, while technically feasible to appoint a tribunal of machines, considerable downsides cannot be overlooked.

The greatest disadvantage is the lack of emotional sensitivity and perception which, from a neurobiological standpoint, is instrumental to efficient decision making and the tasks entrusted to arbitrators. Emotions are inextricably linked to information processing, motivation, memory and judgment; their absence impedes the adoption of machine arbitrators.

While a potential usurpation of the arbitrator role seems distant, it is advisable to address the possibility now, to minimise the damage that often follows consequential change.

Arguably, this disruptive turmoil warrants convening think tanks and roundtables to adapt institutional rules and legislation, effectively barring the usurpation of the arbitrator role and the dismantling of proceedings. Moreover, they should endeavour to ensure the seamless integration of AI into arbitral proceedings. The nascent nature of these technologies means that practitioners are seldom well versed in their use and potential. Overreliance can amount to counter-productivity, and dependence could signify a choice of efficiency and accuracy over the quality and intuitive administration of justice.

We should instruct practitioners on the advantages and drawbacks of AI and collectively move in a direction whereby technology is neither shunned nor dominates, but is correctly harnessed to potentiate the quality of both the proceedings and awards rendered.

 

Francisco Uríbarri Soares is an associate at FRORIEP. He can be contacted on +41 79 823 33 65 or by email: furibarrisoares@froriep.ch.

© Financier Worldwide
