Draper Blog

Should We Own Our Data? The Limits of Privacy as Information Control

By Nora A. Draper

 

This week in The Economist, the musician and tech entrepreneur will.i.am reflected on a prediction he had made five years ago at the World Economic Forum in Davos, Switzerland. When asked what people would be thinking about in 2019, he said “idatity,” a combination of “data” and “identity.” Arguing that personal data, and control over it, constitute a human right, will.i.am wrote that “data itself should be treated like property and people should be fairly compensated for it.”

The timing of the original prediction, the winter of 2014, makes sense. The previous year, 2013, had been marked by several incidents that sparked new privacy concerns: TSA body scanners that revealed naked images of travelers; the introduction of Google Glass, which raised fears about the intrusions of wearable technology; and Edward Snowden’s revelations about the United States government’s extensive domestic surveillance. The result was that Dictionary.com named “privacy” its 2013 word of the year.

And will.i.am was correct in predicting that information privacy would continue to capture public attention. If 2013 was a watershed year for privacy, 2018 might well have been the year the levees broke. From news that Americans’ Facebook data had been used by the political marketing firm Cambridge Analytica to target divisive advertising in the 2016 presidential election, to details about companies’ pervasive use of geolocation information, to cautions about vulnerabilities introduced by the Internet of Things, the past year has been a hailstorm of warnings that privacy is on the brink of collapse.

If privacy crises have become an evergreen issue, so too have calls to use tools and services that help individuals control, own, and leverage their personal information. The model will.i.am suggested, in which a personal assistant acts as a data agent to optimize an individual’s information, is not new. More than twenty years ago, in a 1997 Harvard Business Review article, John Hagel and Jeffrey Rayport imagined the rise of companies they called infomediaries that would act as personal data brokers. By 2001, several companies had built tools on this model. Few of them lasted through the first years of the 21st century, but the idea behind them survived: that people were concerned they were not being fairly compensated for the use of their personal data.

By 2013, aided by the global rise of smartphones, personal trackers, and other connected devices, individuals had become engines of data production. To address concerns about the inability to manage this plethora of data, several companies reimagined the infomediary model for a public that seemed at once concerned about privacy and unwilling to give up its various devices. Companies like the Locker Project, Personal.com, Datacoup, and Enliken developed data vaults or data stores that allowed individuals to consolidate the information they were producing as part of their daily activities. Some of these services even worked with advertisers to determine how these data streams could yield a profit for the data subject.

Several companies are still working on perfecting and selling this model, but so far, the mass market for the privacy services envisioned in 1997 by Rayport and Hagel has not materialized. Part of this might be due to a general sense that property and ownership rights are insufficient for ensuring the autonomy that privacy provides. What benefits do individuals receive if they own their personal data, but not the algorithmic tools that analyze that information and assign it value? Moreover, the application of market logics to privacy is likely to disproportionately benefit those who are already privileged under capitalist systems.

Putting aside the ideological arguments, there is a practical reason people might not be buying what these companies are selling: widespread skepticism about the very notion that control over personal information is possible. For years, Facebook, which is currently facing intense public and regulatory scrutiny for its handling of personal data, has responded to privacy concerns by insisting that users are in control. Similarly, transparency programs rolled out by data brokers have claimed to provide individuals with opportunities to manage how their information is collected and used. These efforts, however, do little to slow the systems of surveillance that pervade our everyday lives.

The public is concerned about privacy, as it was in 1997 and in 2013. Today, few Americans believe they have much control over the information that is collected about them. A 2015 study I conducted with colleagues from the University of Pennsylvania found a gap between people’s desire to control the information companies collect and their sense that they were able to do so: 58% of respondents indicated they wanted control but felt it was out of reach. Given this sense of futility about the possibilities for controlling personal information, these individuals, whom we described as resigned about their information privacy, may not bother to pay for tools and services that claim to give them agency.

With the daily deluge of stories about the use and misuse of personal information, the cynicism people feel about their ability to make informed and actionable choices is understandable. This is particularly true when we consider that privacy is networked and cannot be protected by individuals acting alone. Cambridge Analytica relied on thousands of Facebook users to give it access to the data of millions of their friends. As law enforcement integrates consumer genetic testing services into its arsenal, one person’s choice to trace their ancestry using an at-home DNA kit can affect hundreds of family members.

And while individual tech companies reiterate their respect for personal information, it is important to remember that these entities are part of a larger ecosystem that traffics in such data. Efforts by individuals to opt out by severing ties with Amazon, Facebook, or Google, even if possible, are unlikely to result in comprehensive protection. More promising is the movement toward regulation taking place in the European Union and in cities and states across the US, which tackles the networks and systems that erode privacy protections.

Thinking about privacy as a product of individual control is seductive in part because it fits with market models that sell opportunities for information ownership and regulatory approaches that focus on personal choice. It is time, however, to think about new models for privacy protection that more accurately reflect a heavily connected world.

 

Nora A. Draper is Assistant Professor of Communication at the University of New Hampshire. She is the author of The Identity Trade: Selling Privacy and Reputation Online (NYU Press, 2019), part of the Critical Cultural Communication series.

 

Feature image from pxhere, used under the CC0 Public Domain dedication.

 
