
No Escape From Reality

The immersiveness of virtual reality fuels a fantasy of richer data collection


In 2019, a tech startup called TaleSpin exhibited a demo of its “virtual human technology”: a managerial training module that used virtual reality to put a trainee “in the shoes of an HR manager tasked with terminating a fellow employee.” The employee is a simulation (or “virtual human”) rendered as a man in his 60s named Barry. Using voice recognition and “AI-enabled software,” Barry responds to the behavior of the trainee. If the trainee’s tone is assessed as too aggressive, Barry will express dismay at his chances of finding employment elsewhere. If their tone is too soft, Barry might get angry at the indignity of being fired by someone so much younger than he is.

In a press release (since taken down), TaleSpin claimed that this module would allow trainees to “gain virtual experience that feels real enough to create emotional muscle memory, and get real-time guidance on how to empathetically and effectively terminate an employee.” TaleSpin co-founder Kyle Jackson boasted about how real interacting with Barry feels: some users’ hands sweat, some start crying, and others take off the headset altogether. The more real the simulation feels, the company claims, the more applicable the trainees’ increased capacity for “managerial empathy” will be. But for TaleSpin, and presumably for its prospective clients, the point of this realness is not simply to make the emotional rehearsal more compelling and edifying. It is to “make training for ‘soft skills’ measurable.”


Elsewhere, Walmart is already using virtual reality in its hiring and promotion process “to simulate everyday obstacles” a potential employee might face. While the company emphasizes that VR assessment is only one of the “data points” used in hiring decisions, its tech partner STRIVR touts how much it can extract from a VR session, claiming to provide “objective” and “automated” predictions of a trainee’s capability to deal with an emotional customer. In a webinar, STRIVR’s chief science officer Michael Casale says that the data it collects — in this case, on decision-making, performance, attention, and engagement — predict “in almost 80 percent or more than 80 percent of the cases how people would actually perform in the real world.” He asserts that with as little as 20 minutes of VR, companies can “actually start to make predictions of real-world performance just based on what’s going on in the headset.”

VR tools are generally presented as a means to a more immersive simulation for users, but as these examples suggest, they are also a means of quantification and data collection. When they are explained to institutional clients rather than consumers, they are described as means for capturing more of a user’s physiological data, derived from a simulated scenario that is “real” enough to inspire confidence in the data’s veracity and broader applicability. VR companies are quick to laud the thoroughness of the data they collect about users, as we’ve found in our research project into the ethical implications of emerging mixed-reality technologies: TaleSpin’s co-founder Kyle Jackson asserts that “we can measure anything, from your sentiment to your gaze to what you said and how you said it.” Immerse.io co-founder Justin Parry describes VR as “fundamentally different” from other learning mediums, because it can “record absolutely everything that user did” with “30 data points per second.” STRIVR describes VR as providing the “next generation of data” that will provide “insights about proficiency never before captured by traditional learning methods.” Mursion, a U.S.-based VR simulation company, claims that its “simulations achieve the realism needed to deliver measurable, high-impact results,” and STRIVR’s “science resources” page claims that VR simulations “activate the same neural pathways in the brain” as real scenarios.

As Jeremy Bailenson, the founding director of Stanford University’s Virtual Human Interaction Lab, explains, current VR systems “typically track body movements 90 times per second to display the scene appropriately, and high-end systems record 18 types of movements across the head and hands. Consequently, spending 20 minutes in a VR simulation leaves just under 2 million unique recordings of body language.” Next-generation VR devices, with more inward-oriented sensors — eye-gaze tracking, physiological sensors to measure heart rate, facial expression monitoring, and brain-computer interfaces — will allow companies to extend claims like Immerse.io’s that VR data “capture every detail,” offering a seemingly frameless record of the learning experience. Through all this monitoring, VR systems aspire to quantify qualitative training more extensively, standardize this quantification across employees and institutions, establish benchmarks for normative evaluation, and provide lakes of data for artificial intelligence, machine learning, and automated decision-making.
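(For a back-of-envelope sense of where Bailenson’s figure comes from, using only the numbers he quotes: 90 samples per second × 18 tracked movement types × 20 minutes × 60 seconds comes to 1,944,000 recordings — just under 2 million.)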

This fantasy about the quality of VR-procured data catalyzes the speculative multibillion-dollar valuations of companies like Magic Leap. It has also led to the acquisition of VR-related firms by massive tech companies like Facebook, which has incorporated the VR hardware manufacturer Oculus and a range of smaller startups into its Facebook Reality Labs research and development division. A recent update to Facebook’s Oculus user license agreement permits the company to capture and retain biometric data (such as hand size and movement data) and share it with its subsidiaries to enhance marketing.

But VR data is no more “perfect” or accurate than any other measurement scheme: like any quantification of behavior, it is based on normative and exclusionary assumptions that are often gendered, classed, and raced in their origin and outlook. As Rob Kitchin has argued, data is often erroneously framed as “being objective, neutral, and free of bias,” as though it were “simply natural and essential elements that are abstracted from the world in neutral and objective ways.” But data is never a neutral representation. It is always collected for a specific purpose, making visible something that was previously concealed and constructing a new view of the world. In other words, as Lisa Gitelman and Virginia Jackson put it, data is never raw. It is always “cooked” — collected, stored, and circulated with particular aims and logics in mind. Despite this, predictive analytics are increasingly employed to shape our lives on the basis of the pervasive belief in data’s inherent objectivity.


As consumers, we’ve been expected to adopt and adapt to extractive sensors and data collection devices before: the internet cookie, the mobile phone, the smart speaker, and so on. But with virtual reality, which draws on an incomparable intimacy with the body to render its simulations as “realistic,” the potential for abuse is much greater. VR adds yet another layer to the existing dangers of data science, which is already being mobilized as a tool for surveillance and discrimination. When data is captured and classified according to certain parameters (constructed by people, with their own biases and assumptions) and then employed within institutional settings that have their own troubling histories, it works to obscure the underlying, often discriminatory logics of decision making. In police forces, for instance, data systems do more than just reflect social attitudes. They “reinforce and amplify them,” typically causing most harm to marginalized populations who fall outside the universalized and historically couched data view of what people are (i.e. what makes people machine-readable) — a process Anna Lauren Hoffmann has called “data violence.”


Elaborating on Hoffmann’s argument, Os Keyes argues that data science is inherently “the inhumane reduction of humanity down to what can be counted” and, as currently constituted, “responds to critique only by expanding the degree to which it surveils us.” That is, data science sees the solution to problematic data collection as more and better data collection, aspiring ultimately to what theorist Mark Andrejevic calls “framelessness”: a complete picture of the world “in machine-readable form,” positing a “post-subjective perspective of the view from everywhere.”

Such a picture is not merely far off; it is impossible. Data is always partial and always carries the aims and biases of those tasked with facilitating its capture. But that hasn’t stopped tech companies from trying. Wired founding editor Kevin Kelly’s account of the “mirrorworld” as the next big tech platform typifies this ambition, which can also be seen in Facebook’s recently announced Project Aria. This effort, in which employees wear data-collecting glasses, seeks to “build the software — including a live map of 3-D spaces — and hardware necessary for future AR [augmented reality] devices,” with the explicit aspiration of creating a 1:1 copy of the world.

Augmented-reality-style data overlays will offer users a familiar bargain: convenience (a reminder that they’ve forgotten their keys) or insight (information about a plant they see in the park) in exchange for a vast amount of data about them and their environment. But VR’s immersive simulations pursue similar ends, despite seeming to transport users to unreal spaces. Facebook’s Oculus Insight computer vision system constantly maps the environment around its devices. Within Reality Labs’ research, Facebook has used similar computer-vision techniques to build 3-D maps of real-world environments (such as one’s home). Altogether, this means that Facebook’s VR can capture data about the dimensions, layout, and contents of your actual lounge room even as it tracks your physiological responses to the simulated space. VR may be invited into the home as an entertainment technology, but it might be better understood as another surveillant “smart” device.

As with AR’s promise of timely data when it’s needed, VR companies promise to put the data they collect to use: personalized learning systems that can track and adapt to a user’s engagement with educational content, or personalized fitness simulations that adapt to our bodies and push us just hard enough to keep us going. But one can also imagine these next-generation forms of surveillance powering new forms of targeted advertising. Under Facebook’s updated user license agreement, the user’s body and environment may be tracked and fed into the systems that power Facebook’s advertising arm.


Perhaps more concerning is the incorporation of VR data into forms of automated decision-making, whose often inaccessible or inexplicable correlations could become even more complex. Proponents claim that VR data could be used to avoid some of the problems with algorithmic bias by offering rich data about what people are thinking and doing that can be retrieved by no other means — a direct and unmediated pathway from the brain to the algorithm. But the fantasy of perfect data — that more intensive sensors can capture a mirror-like reflection of experience for objective analysis — rests on normative and exclusionary assumptions. Systems built on such data, likely trained on data sets of neurotypical, able-bodied male engineers, enact a form of what Shea Swauger (borrowing from disability scholar Lennard J. Davis) calls a eugenic gaze, codifying xenophobia, ableism, and white supremacy behind the black box of algorithmic bias “while avoiding equity-based critiques because of our belief in the neutrality of data and technology.” For instance, STRIVR uses what it describes as “verbal analytics” to provide an “objective” assessment of “verbal fluency,” which ostensibly translates to insight about a trainee’s ability to deal with customers. Verizon is using such a system to train its call center employees. But speech recognition software works best for white, highly educated, upper-middle-class Americans; deploying it in VR scenarios merely extends the application of bias.

Our concerns don’t lie just in the data itself but in the motives for its collection. Thanks to its partnership with Walmart, STRIVR, according to CEO Derek Belch, has “probably a hundred to a thousand times more data than anybody else. And so our models will be that much further along when they start to become more refined and more specific.” This makes plain how platform capitalism, with its prerogative of data extraction for profit, is shaping the future of VR.

VR isn’t the unquestionably emancipatory experience that many evangelists frame it as. Its future is being shaped less by the aesthetic or educational possibilities of the medium itself and more by the speculative interest in the economic value of the data it will yield. As Jathan Sadowski writes, “The capitalist is not concerned with the immediate use of a data point or with any single collection, but rather the unceasing flow of data-creating.” Uses can be invented for it all down the line. More data, however, doesn’t mean better data. It means further extraction and privacy invasion, more thorough efforts to impose predictive analytics, and more biased outcomes for the people trapped within data-driven automated systems. Virtual reality — whether in fiction or in the accounts of industry boosters — is commonly framed as an imaginative escape from the limits of reality. But in practice, it is a reiteration of surveillance: of extraction and profit from data and the reinscription of the inequalities and biases reified within it. This is a reality from which there is currently no escape.

Marcus Carter is a Senior Lecturer at the University of Sydney, where he researches games, play and mixed reality.

Ben Egliston is a postdoctoral research fellow in the Digital Media Research Centre at Queensland University of Technology.