We’ve Not Thought Through the Legal and Ethical Disruption of Augmented Reality

By Matt Ranen

Let’s not repeat the mistakes we made with social media

Yelp Monocle, an early hint of change to come.

With recent product and SDK announcements by Apple and Facebook, we have officially entered the 2017 edition of the Augmented Reality hype cycle. Even news sites like The New York Times and Quartz have gotten into the game with their own apps.

As a futurist and scenario planner who helps organizations understand the long-term social, economic, and political impacts of disruptive technology, I feel the timing is right for all stakeholders in this technology — policy makers, technology producers, consumers, and even the average citizen who might simply be in the path of emerging applications — to understand and get ahead of the ethical, legal, and regulatory issues that will accompany AR applications.

The simplest description of Augmented Reality is that it is the overlay of digital information and images on top of physical places and things, typically viewed through a smartphone or a glasses-based display. (More detail about what it is and how it works technically is available on a number of sites, such as Wikipedia.) As with other new technologies, AR will change how people interact with each other and their surroundings and bring both benefits and challenges to the status quo. However, given the tighter integration between "online" and "real world" experience, AR will have a more immediate effect on people's actions and perceptions of the world around them, leading to both greater upside value and more controversial and damaging effects.

How society resolves these tensions between the upside and downside of these new applications (through legal, regulatory, or other social normalizing means) will affect the rate of adoption for AR, the structure of the AR marketplace, and most importantly, how much AR impacts daily life. For NewCos looking to provide products and services, it could mean the difference between a market of open competition — with equal access to infrastructure and forms of expression — and one controlled by a few gatekeepers. And for users, it could mean the difference between truly helpful tools — ones that save time, guide us through daily tasks, and enable new art forms — and highly distracting, confusing experiences that create greater tensions and barriers within communities.

Here are a few specific examples that demonstrate the range of questions, from practical to more existential, that are bound to arise.

What kind of property rights exist in mixed reality?

A new batch of apps that allow users to create hidden graffiti using AR raises an important question about who is legally allowed to "tag" a place. Physical graffiti is illegal without the permission of a property owner, but what about virtual graffiti? If it is not immediately viewable by the public, how big a crime is it? If it is visible only to a group of friends, or patrons, or some other group, is it an intrusion on the property? At some point, the answer is probably yes. When a space becomes "public" is in fact defined by law (though not yet in mixed reality). However, Yelp reviews of a particular "place" have been perfectly acceptable up to now, suggesting some general societal tolerance for open commenting on private places.

In practice, the point at which a line of illegality, or mere impropriety, is crossed will always be fuzzy, given all the variations of visibility that could exist. This line will be drawn, in part, from a purely ethical standpoint: one might argue that a work shared with just a handful of friends is harmless, while one that is generally available and surfaced to anyone who walks by is more of a public statement. But it will also be dictated by more practical considerations, such as how anyone (the owner, law enforcement, etc.) would even know that a tag existed at all. And it will likely matter what type of statement is being made and the degree to which the owner might be affected by it. The latter takes us into the quagmire of speech rights and libel law, beyond the pure "who owns the virtual space" question. The debate might even lead to arguments that all virtual space is essentially public and cannot be controlled by private owners. In many ways, this is an age-old debate in need of revisiting.

Of course, graffiti is just one example of integrating virtual information or images with a particular geolocation. Already, large data companies have created data-rich maps of cities, and it is inevitable that more and more types of digital content will populate these databases — for instance, information from Wikipedia, LinkedIn, or Facebook about a person who lives at a particular address, or an organization based there. Will property owners be able to decide what information can and cannot be represented in AR around their property?

This will likely open up a wide range of debates about what is public versus private information more generally. For instance, we have standards and norms in the US about publishing personal information of sex offenders, such as their names and addresses. But what about pictures from someone's vacation, or family profile information? In many ways, the automation and immediacy of AR and geotagging surface a more visceral form of the "right to be forgotten" regulatory question being debated about search.

Much of this will depend on who is providing the AR system in the first place and who already has approved access to other data sets (for example, only Facebook friends can see certain types of information about you if you set it up that way). But just as they have done in less accessible ways online, data companies will try to gather as much information about people as they can and look to monetize it. Placing that information on top of a physically owned space changes the essence of the ethical rationale or legal right, by making the content even more accessible and physically actionable. Most people will see that as a more threatening use, particularly in an atmosphere where doxxing and targeted harassment campaigns are common.
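To make the permissioning question concrete, here is a minimal sketch of how an AR platform might model geo-anchored content and decide who can see it. The names here (ARAnnotation, Visibility, the property_owner_optout flag) are entirely hypothetical illustrations, not any vendor's actual API; real SDKs from Apple, Facebook, or others may structure this very differently.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Visibility(Enum):
    """Hypothetical visibility levels a platform might offer."""
    PRIVATE = auto()   # visible only to the creator
    FRIENDS = auto()   # visible to an approved audience
    PUBLIC = auto()    # visible to anyone pointing a device at the anchor


@dataclass
class ARAnnotation:
    """A piece of virtual content anchored to a physical location."""
    author: str
    lat: float
    lon: float
    payload: str                    # text, image URL, 3D asset reference, etc.
    visibility: Visibility = Visibility.PRIVATE
    audience: set = field(default_factory=set)  # used when visibility == FRIENDS


def can_view(annotation: ARAnnotation, viewer: str,
             property_owner_optout: bool) -> bool:
    """Decide whether a viewer should be shown an annotation.

    `property_owner_optout` stands in for the open policy question above:
    should the owner of the underlying physical property be able to
    suppress virtual content anchored to it?
    """
    if property_owner_optout and annotation.visibility is Visibility.PUBLIC:
        return False  # one possible policy: owners can veto public overlays
    if annotation.visibility is Visibility.PUBLIC:
        return True
    if annotation.visibility is Visibility.FRIENDS:
        return viewer == annotation.author or viewer in annotation.audience
    return viewer == annotation.author
```

Even in this toy model, the contested questions are not technical ones: whether something like a property-owner opt-out should exist at all, and who gets to set it, is precisely the legal and ethical debate described above.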

The rights of use of public space

So far, I have talked mostly about private tagging. But more questions emerge when we talk about public spaces. Again, we have some existing laws and norms for public labeling, and there are natural limits to how many representations — murals, statues, plaques — can fit in a limited physical space. Often there are vetting mechanisms for deciding on them, via town councils or public votes. But what about a world where we could have unlimited virtual representations? Will cities require a similar permit or approval process for content that is tied to public locations? Which histories or cultural perspectives will be prioritized or denied, if any? Figuring out the right number of representations and providing equal access to all participants will be a challenging political task, one as hard — or harder — to build consensus around as the recent debates over Confederate statues. The "vandalization" of a virtual, public Koons exhibit enabled by Snapchat provides an early sign of the complexity of the debate to come.

Who gets to vet the data?

Resolving the questions above inevitably leads to the next big question. As with the fake-news management problem, who gets to decide what is appropriate and/or true information about a place? The aforementioned Yelp is already testing these boundaries with its Monocle feature, which layers reviews on top of a map and pointer to the physical location. At some point, there will need to be clear definitions of who gets to decide what is true and not true in these reviews. At the very least, we will need some legal guidelines as to what is or is not a First Amendment right and when such decision rights need to exist.

AR provides a new, unlimited descriptive space for every location. As with online-only spaces like search engines and social media platforms, the ethical and legal boundaries of AR have not been fully defined. We are only now seeing online giants like Facebook start to rethink what "open" means on their platforms. We will need to do something very similar for AR, whose emotional immediacy and connection to physical places don't exist in the same way in mobile or online content.

What qualifies as exclusion / discrimination?

Lastly, AR promises a number of features that can easily be used to conduct behavior many would consider, at best, shady and, at worst, immoral. With advances in machine vision, AR has the potential to recognize people and things in your view. This will essentially render the idea of anonymity in a crowd obsolete — including for socially vulnerable participants such as those in a public protest.

However, with overlay technology, AR can also take the next step and, depending on a given set of rules, re-render or even remove people and things from view. An example might be erasing signs of urban blight while walking down the street. Or perhaps erasing poor people altogether. More generally, AR is ripe for further implementation of the kind of racially discriminatory applications now being discovered in AI-based systems.

For some, this might just represent a new reality, a "better" way to do what many people do anyway — ignore things they don't want to confront or people they don't care about — and a natural evolution from the last wave of disruptive technologies. For others, though, it's a more extreme version of redlining and discrimination, requiring even greater oversight than the technologies that have come before. As with other information systems, we will see differing opinions about who is responsible for watching out for this — the users, the providers of the technology, a third party, or government.

Once we figure that out, it will be a huge design challenge to figure out enforcement. Given the amount of time it will take to sort through these questions, it would benefit all involved to start talking about what they want to see from this technology today, even as the first experiences are being rolled out.

As with so many new technologies — social, AI, drones, sensing and imaging — for every promising application that will help make the world a better place, there is an equal number of uses that, at best, raise "grey area" questions about what we want as a society and, at worst, threaten to harm parts of society in inequitable ways. Figuring out how we want to manage and regulate AR will be another important entry point into this much larger, ongoing debate. Better to start the dialogue now and help shape adoption than to have to react to some unthinkable event in the future.


Thanks to Tony Liao, whose paper on debating augmented futures lays out a broad framework for thinking about possible areas of contention with AR; and to Adam Flynn, who was an invaluable thought partner on this topic.

Matt Ranen is a scenario planning and strategy consultant, helping organizations navigate future change and uncertainty. More about futures and scenario planning may be found at www.ranenconsulting.com, or follow Matt Ranen on Twitter at @MattRanen.
