Battling Deepfakes in Arbitration
Images, videos and documents created by artificial intelligence are coming to the ADR world. While courts have been wrestling with this problem, so, too, have arbitrators. Unfortunately, given the limited discovery procedures inherent to arbitration, sussing out fakes, and the fraudsters who make them, is more difficult.
On April 30th, the Silicon Valley Arbitration and Mediation Center (SVAMC) published its “Guidelines on the Use of Artificial Intelligence in Arbitration.” The seven guidelines aim to strengthen the integrity of the arbitration process and ward off fraudulent uses of AI.
The guidelines caution against using AI to select arbitrators, call for verifying data protection protocols and, perhaps most importantly, prohibit falsifying evidence or attempting to mislead the arbitrators. They also make clear that humans are ultimately responsible for anything AI generates. Interestingly, though, the guidelines stop short of mandating disclosure when AI is used.
The guidelines also apply to arbitrators. Under the guidelines, arbitrators may not use AI to generate anything outside the record without disclosing that they have done so. Arbitrators are also required to verify anything AI generates, such as a draft award.
AI is here to stay. It is incumbent upon us to get control of its usage in ADR.