Extended Privacy for Extended Reality: XR Technology Has 99 Problems and Privacy Is Several of Them
Suchismita Pahi & Calli Schroeder
Americans are rapidly adopting innovative technologies that are pushing the frontiers of reality. But when they look at how their privacy is protected within the new extended reality (XR), they will find that U.S. privacy laws fall short. The privacy risks inherent in XR are inadequately addressed by current U.S. data privacy laws or by the court-created frameworks that purport to protect the constitutional right to be free from unreasonable searches. Many scholars, including Ryan Calo, Danielle Citron, Sherry Colb, Margaret Hu, Orin Kerr, Kirsten Martin, Paul Ohm, Daniel Solove, Rebecca Wexler, Shoshana Zuboff, and others, have highlighted the gaps in U.S. privacy protections stemming from big data, artificial intelligence, and increased surveillance technologies.
However, the depth and breadth of what XR technology reveals about a person, the risks it poses to bystanders, and the imminent paradigm shift in what separates public space from private space are new problems. This paper provides three central contributions for technologists, legislators, and anyone interested in privacy rights: first, a brief guide to understanding XR technology; second, a survey of the current U.S. privacy landscape and the gaps in U.S. privacy protections for XR; and third, an easily digestible list of solutions that legislators and technologists can pursue to better protect privacy in XR.
“I forgot I was in virtual reality and I got grounded, and now I’m grounded in real life.”
– Leopold “Butters” Stotch
Introduction
Augmented Reality, Virtual Reality, and Mixed Reality (collectively, “extended reality” or “XR”) are poised to explode in use in the United States (“U.S.”). XR technologies present unique risks to privacy by enmeshing the real world with the imagined. XR technologies exacerbate existing privacy concerns related to artificial intelligence and big data and introduce new privacy risks for bystanders. On top of these risks, existing privacy regulations that address virtual or real-world privacy issues fail to adequately address the convergence of realities that exists in XR. These privacy risks heighten the urgency of developing substantive protections for both users and bystanders from privacy intrusions previously only imagined in cyber dystopian fiction.
XR technologies typically involve one or more wearable devices equipped with cameras, microphones, and sensors that collect a vast array of information about the user and their environment. And XR data collection and use do not stop at external data, solely physical data, or even inferences from that data. XR technology also includes neural-activity technology, such as brain-computer interfaces (BCIs), which companies are developing to make the XR experience less clumsy and more intuitive. As the technology advances, these devices will inevitably become more ubiquitous. They can collect information about not just the user but also bystanders—who could be children, strangers, intimate partners, or anyone else. And their portability means that they collect information not just within the intimacy of the user’s own home (which itself raises several potential privacy and safety concerns) but also in a wide range of public and private places—including hospitals, shelters, restrooms, places of worship, and more.
Current U.S. privacy regulation has failed to evolve with technology, leaving Americans at the mercy of a personal privacy trade-off that is often made without the individual’s full knowledge. XR technologies are making inroads into businesses, healthcare, schools, marketing, and leisure, generating millions of data points that can be used to extrapolate, infer, and create profiles on users and bystanders alike—profiles that may subsequently be used to manipulate or target those individuals and to provide or deny them services, with limited or no meaningful choices for those users and bystanders. This paper enumerates the privacy risks present in and unique to XR and the regulatory gaps in privacy protections from this technology. Please note that the terms “XR,” “XR technology,” and “XR technologies” are used interchangeably throughout this paper to refer to the devices and systems used to create and support extended reality.
Potential privacy risks from XR include legal and real-world harms ranging from expanded surveillance and data collection methods for law enforcement and intelligence agencies to long-term harms stemming from corporate black-box decision-making affecting users, bystanders, and households. Our analysis explores the limits of existing U.S. privacy doctrines and of Fourth Amendment protections against unreasonable searches. Current U.S. privacy regulation largely fails to recognize privacy harms grounded in the loss of data, or in impacts from data, unless they are directly tied to a financial, physical, or otherwise calculable loss or to a historically recognized harm, such as intrusion or unlawful disclosure. This failure is magnified in the big data analytics context and proves particularly insufficient to meaningfully protect individuals in the XR context.
Various technologists recognize that there are privacy problems with big data, including big data processed in XR, and attempt to mitigate these problems through technical measures. However, these attempts are not a substitute for substantive legal privacy protections that fully address XR technologies themselves. Existing regulations are likely to exclude XR because their scope is narrowly tailored to a different technology space. For example, the types of biometrics collected in XR may not trigger regulations targeted at biometrics used specifically as identifiers in existing technologies (e.g., iPhone FaceID), even though the data itself is directly related to biological measurements (e.g., height, gait, heart rate).
In addition to the risks XR poses to user privacy, XR also creates significant, and in some ways greater, risks to bystander privacy. Processing of bystander data poses a crucial, unaddressed privacy risk because bystanders are not aware that their information is being collected and have no way to opt out of that collection. This is especially problematic in the case of biometric data, since neither users nor bystanders can change that information without surgical intervention or other highly invasive measures. You can’t change your faceprint.
Facebook recently revealed a partnership with Ray-Ban to create eyeglasses that can be used for XR purposes. The glasses are unobtrusive and must be linked to the user’s Facebook account. The only indication to bystanders of the glasses’ XR capability is a small red light on the frames. While the Ray-Ban glasses’ capabilities are currently relatively limited, they represent a foray into XR that can only grow, and they immediately implicate bystander privacy by allowing recordings that are not easily detectable by the bystander. These recordings are not necessarily secret, but they are also not easily detected and are unexpected by the general U.S. public. Facebook’s repeated overtures into the “metaverse,” including rebranding as “Meta Platforms, Inc.” to demonstrate its commitment to XR, add to already existing concerns about the massive data repository that will be available to Facebook to use at will if it moves virtually unregulated into the space.
Setting aside legislative approaches and judicial norms, we also explore industry standards as a risk-mitigation measure. Users are unlikely to be able to rely on industry self-regulation, as industry expectations can, and often do, diverge from user expectations and may be changed with little notice to or input from users. Industry often makes decisions about data processing activities that the public is uncomfortable with, highlighting the disconnect between public expectations and industry norms. As a real-world example, Facebook decided to collect data from and keep shadow profiles about non-users. Notably, no state or federal regulations prevent companies from creating “shadow” profiles of people who have never engaged with a product. From a legal perspective, Facebook could assume that creating profiles in this manner was a reasonable choice. But from a transparency and user-expectations perspective, Facebook clearly missed the mark, as many non-Facebook users expressed discomfort with the concept of profiles being created for them without any affirmative action on their part. This conflict demonstrates the misalignment between permitted uses within self-regulatory systems and individual expectations. Further, this example could easily expand in the XR space to detailed profiles being created on bystanders, including sensitive information such as biometric information, location information, and more.
As another example of the unreliability of industry self-regulation, Facebook reassured Oculus users that they would not be required to tie their devices to a Facebook account. This provided some assurance to users who were interested in the gaming environment but did not want to include personal information in a Facebook account for other Facebook uses. Facebook later pivoted and announced that Oculus users would require a Facebook account to log in to and use new headsets, leaving users no recourse but to tie their Facebook account identities (including the identities that Facebook had previously built for users without a formal account) to an XR device. The only other option was to stop using Oculus, a device they had purchased based on Facebook’s prior representations. These examples demonstrate the potential harms of leaving XR solely to self-regulation without representation of user and bystander interests. Not only is there the risk of a disconnect between public expectations and company decisions, but individuals are often left with few options to mitigate or control any exposure or damage to themselves and their personal information. Increasing forays into XR carry correspondingly increasing privacy risks and must be addressed with privacy protections before XR becomes irrevocably ingrained in our society.
Current privacy protections in the U.S. have proven unable to adapt to changing privacy risks, including those raised by XR. Similarly, in the context of the Fourth Amendment, existing legal protections from government intrusion are stretched thin when applied to new technologies. Between the U.S. Supreme Court’s discomfort with the third-party doctrine, which removes privacy protections from information provided to a third party, and its decision in Carpenter, it appears that the judiciary is catching on to the threats that newer technologies pose to constitutional rights. However, applying Fourth Amendment law as it stands today would still allow the government to ask for and receive a company’s records of a user’s interactions with XR technologies. This could include not just standard data points but also telemetry, metadata, and derived or inferential information—sleeping habits, travel patterns, social interactions, the content of communications with other users, emotional state, behavioral or cognitive patterns, and more. Any restrictions on this type of data sharing would rely on both the discretion of the third-party company and whether a court chose to apply the framework in Carpenter, as we discuss in more depth later in this paper.
In Part I, we explain XR technologies, the scale of data collection within XR, and the personal data collection and use that these systems enable. Once we have established the technology and some of the privacy risks therein, Part II supplies a summary of existing privacy regulation and case law—both in the private sector and within government—and identifies privacy risks inherent in XR technologies that are currently unaddressed in the U.S. regulatory framework. Finally, we propose possible approaches to bridge these privacy gaps and ensure privacy protections for both users and bystanders in XR.
Acknowledgments
This paper is the result of two years of virtual collaboration during the chaos of the pandemic(s). We would like to express our deep gratitude to the fellow practitioners who took the time to read and comment, provide thoughtful feedback, challenge assumptions, and offer assessments and encouragement throughout this endeavor: Alyssa Feola, Madaleine Gray, Mike Hintze, Joel Scharlat, and Ben Steinberger. We also thank our families for their support, with apologies to anyone whom we might have omitted. The views in this paper do not reflect the views of our employers, Databricks, Inc. and the Electronic Privacy Information Center (EPIC).