The term metaverse was first coined by Neal Stephenson in his 1992 science fiction novel Snow Crash. In the novel, the main character moves in and out of a three-dimensional virtual urban landscape called the metaverse, populated by user-controlled digital avatars and system bots, to have a lifelike experience.
The ensuing technological advances have turned what was once a science fiction dream into a reality. Popular massively multiplayer online games (MMOGs) such as World of Warcraft, Minecraft, and Fortnite have created metaverses where users can play with each other through avatars. Advances in virtual, augmented, and mixed reality (VR/AR/MR) technologies allow users to put on a VR/AR/MR headset and experience the metaverse from a first-person perspective, much like the characters in Snow Crash.
Now, photorealistic avatars are coming. Scanning a photorealistic avatar for virtual reality used to take more than 100 cameras. With advances in technology, it can be done with just one iPhone, according to researchers at Meta. If photorealistic avatars become readily available for general use, the metaverse could look virtually indistinguishable from the real world, with users interacting with each other through avatars that look and act just like them in the real world.
The brave new world of the metaverse with photorealistic avatars, however, will not be without problems. One particular problem has to do with the old adage, “[o]n the Internet, nobody knows you’re a dog.” The anonymity of the Internet allows one to completely reinvent oneself as whomever one wants to be. It can also be used to deceive others.
The story of Manti Te’o, a former University of Notre Dame football player who became the victim of a catfishing incident in 2012, is a good reminder. He met a young woman online who claimed to be a student at a distant university and began an online relationship with her. It turned out that she was a fake online persona created by someone who used a photo taken from another woman’s social media without her knowledge.
The technological advances in artificial intelligence, audio, and graphics since then have made deepfake photos, videos, and audio available to those who want to deceive others. For instance, during the 2022 Russian invasion of Ukraine, a video of Ukrainian President Zelensky purportedly telling the Ukrainian soldiers to surrender surfaced on social media, only to be debunked as a deepfake.
It is only a matter of time before the same technology is used to create a fake avatar from someone else’s facial image and take over that person’s identity. In fact, a pair of fraudsters was recently arrested in China after they purchased facial images on the black market and created fake identities as part of a tax fraud scheme.
Identity theft of one’s facial image, fingerprints, iris scan, or voiceprint, so-called “biometric identifiers,” can have even more pernicious consequences than traditional identity theft. In the United States, for instance, the social security number is the number one target of identity thieves because, once stolen, it can be used to borrow money, open checking and credit card accounts, claim a tax refund from the government, apply for government benefits, and more. Yet a compromised social security number can be changed by the administrative agency that issued it. If biometric identifiers are stolen, on the other hand, how is one supposed to change one’s facial image, fingerprints, iris, or voice?
In recognition of this dilemma, several US states have passed laws to protect individuals from biometric identity theft. Illinois first passed the Biometric Information Privacy Act (“BIPA”) in 2008. Texas, Washington, and New York followed suit and passed their own versions of the BIPA. Many other states are considering similar laws. Undoubtedly, these are positive developments. However, these efforts fall short in several ways. First, most states do not yet have their own version of the BIPA. Second, the existing BIPAs bind only private entities conducting business in those states. Third, the BIPAs regulate only the collection, use, and handling of biometric identifiers by individuals and private entities. They are silent on, and provide no remedy for, the theft and use of one’s biometric identifiers in the metaverse.
Here is the issue. The metaverse transcends state and national borders. A photorealistic avatar wearing your facial image may be pretending to be you on a metaverse run by a company a few continents away. A US state law, even if it provides a remedy for biometric identity theft, is not going to be of much help. A borderless problem demands a borderless solution.
We can perhaps take a cue from cybersquatting. The use of trademarks as Internet domain names without the trademark owner’s consent was problematic and often misled consumers. Because the disputants were spread around the world, the Internet Corporation for Assigned Names and Numbers (ICANN) established the Uniform Domain-Name Dispute-Resolution Policy (UDRP) in 1999, which requires all registrants of domain names worldwide to (i) represent and warrant that the registration will not infringe upon the rights of any third party and (ii) agree to participate in an arbitration-like proceeding in the event of any third-party claim. The UDRP has proved to be an effective way of resolving domain name disputes, in part because ICANN and the registrars have the power to take down domain names found to infringe third-party trademarks.
The metaverse needs an equivalent of the UDRP to deal with digital identity theft. Each time a user creates an avatar on any metaverse, the user should be required to (i) represent and warrant that he or she has the right to use the biometric identifiers associated with the avatar and (ii) agree to an arbitration-like proceeding to resolve any third-party claim concerning those identifiers. Combined with each metaverse platform’s power to remove an avatar found to be using unauthorized biometric data, this approach should protect victims of digital identity theft.
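To make the proposed policy concrete, the registration-and-takedown workflow could be sketched in code. The following is a minimal, purely illustrative Python sketch (the class and method names are my own assumptions, not any existing platform's API): registration is refused unless the user attests to the right to use the associated biometric identifiers, and an avatar is deactivated when a third-party claimant prevails in the dispute proceeding.

```python
from dataclasses import dataclass


@dataclass
class Avatar:
    owner: str
    biometric_hash: str  # hash of the biometric identifiers bound to the avatar
    attested: bool       # owner represented/warranted the right to use them
    active: bool = True


class AvatarRegistry:
    """Toy registry enforcing a UDRP-style policy for photorealistic avatars."""

    def __init__(self) -> None:
        self._avatars: dict[str, Avatar] = {}

    def register(self, avatar_id: str, owner: str,
                 biometric_hash: str, attested: bool) -> None:
        # Mirror UDRP step (i): no registration without a representation
        # and warranty that the biometric identifiers belong to the user.
        if not attested:
            raise ValueError("registration requires attestation of rights")
        self._avatars[avatar_id] = Avatar(owner, biometric_hash, attested)

    def resolve_dispute(self, avatar_id: str, claimant_prevails: bool) -> bool:
        # Mirror UDRP step (ii) plus takedown: after the arbitration-like
        # proceeding, the platform deactivates the avatar if the third-party
        # claimant prevails. Returns whether the avatar remains active.
        avatar = self._avatars[avatar_id]
        if claimant_prevails:
            avatar.active = False
        return avatar.active


registry = AvatarRegistry()
registry.register("av-1", "alice", "hash-of-face-scan", attested=True)
still_active = registry.resolve_dispute("av-1", claimant_prevails=True)
```

As with the UDRP, the enforcement teeth come not from the attestation itself but from the platform's unilateral power to deactivate infringing avatars after an adverse decision.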