Discussions about digital privacy often evoke images of whistle-blowers, journalists, and intelligence agencies. Beyond these dramatic scenarios, though, the business model of corporate data mining can seem to carry few negative consequences for our daily lives. The omniscient machinery of state surveillance rarely visits itself upon us personally. Amazon, Google, and Facebook are overwhelmingly convenient, well-designed platforms that can be enjoyable to use. It is perfectly possible to worry about the surveillance state in the abstract while thinking of ourselves as having little to hide personally and, therefore, not much to worry about.
For these reasons, it is easy to tolerate technologies of surveillance in their various forms as a fact of twenty-first-century life. Just as some people experience the effects of climate change as pleasantly warmer weather, the insidious potential of mass surveillance often manifests as convenience and improved consumer experiences. The importance of systemic and collective privacy can start to fade from view — but at what cost?
Digital Oppression
While we struggle to articulate compelling defenses of privacy, those in power have had little difficulty understanding its significance. The structures of class society are being encoded into our experience of online life at a rapid pace, something that is only possible because of our political ambivalence about privacy’s value. Most recently, this disconnect was made clear in a report by the United Nations special rapporteur on extreme poverty, which highlighted how technology is being used by governments in various oppressive ways in the digitization of welfare services.
This phenomenon takes many forms. Algorithmic decision-making is being applied in all sorts of government programs in the United States — from identifying children at risk, to allocating housing, to assisting with parole applications. In Canada, the government has automated processes associated with immigration and refugee systems.
One-third of councils in the UK use algorithmic technology to help determine benefit claims, identify fraud, and manage social services. Governments in India, Kenya, and South Africa have set up national identity and welfare schemes that incorporate biometric data like fingerprints and retina scans. The Australian government recently issued hundreds of thousands of incorrect debt notices to welfare recipients, the result of a flawed data-matching algorithm that used tax returns to identify supposed overpayments.
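To make the failure mode concrete, here is a minimal sketch, in Python, of the income-averaging logic widely reported to underlie those notices: annual income from a tax return is smoothed evenly across the year's fortnights and compared against what recipients actually declared while on benefits. The names and figures here are hypothetical, not the system's actual code.

```python
# Hypothetical sketch of income averaging; not the real system's code.
FORTNIGHTS_PER_YEAR = 26

def fortnightly_gaps(annual_income, declared_by_fortnight):
    """Spread annual tax-return income evenly across the year, then
    report the gap against each fortnight's actual declaration.
    A positive gap is treated as evidence of an overpayment."""
    averaged = annual_income / FORTNIGHTS_PER_YEAR
    return [averaged - declared for declared in declared_by_fortnight]

# A casual worker earns $13,000 across five fortnights of seasonal
# work and truthfully declares nothing for the remaining twenty-one.
declared = [2600.0] * 5 + [0.0] * 21
gaps = fortnightly_gaps(13000.0, declared)

# Averaging assigns $500 to every fortnight, so each fortnight of
# genuine unemployment shows a spurious $500 "overpayment".
print([round(g) for g in gaps])
```

The arithmetic is trivial, which is precisely the point: a crude statistical assumption, applied at scale and without human review, becomes hundreds of thousands of debt letters.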
The digital upgrade of welfare delivery is often presented by governments as a neutral, even benign, phenomenon that allows services to be optimized and resources to be spent efficiently. The reality, according to the special rapporteur’s report, is that these transformations are “revolutionary [and] politically-driven.” It is a reality in which “citizens become ever more visible to their governments, but not the other way around.” This rebalancing of power is the mundane, insidious consequence of living in a society without any respect for privacy.
Critics of such programs have faced retaliation from governments and harassment by state agencies. Disputing automated decisions can be virtually impossible, at times comically so, as individuals get caught up in bureaucratic loops. Virginia Eubanks’s book Automating Inequality illustrates how these programs, rather than ameliorating inequality, exacerbate it.
Such programs do not seek to eradicate poverty. Rather, they aim to manage the poor, confining them to a cycle of stigmatization and entrenched disadvantage. “Technologies of poverty management are not neutral,” she writes. “They are shaped by our nation’s fear of economic insecurity and hatred of the poor; they in turn shape the politics and experience of poverty.”
This is not just about government agencies — industry has been quick to capitalize on state investments in digital infrastructure. Palantir, the notorious data-mining company started by Peter Thiel, has supplied its products for predictive policing operations by authorities in the United States and for cost-reduction strategies by local councils in the UK. Palantir’s products generate analytics from disparate data sets — including social media and government sources. The market for biometric technology is estimated to be worth up to $50 billion by 2024, with the security and government sectors of North America representing a significant share.
It’s not just specialized tech companies that are set to profit from this. Major platforms like Amazon and Microsoft are working variously with prisons, ICE, and a Chinese military-run university on facial recognition software. Other popular platforms sell information to governments directly — think of Uber selling routing and logistical data, or Toronto outsourcing its city planning to Google. The business of surveillance capitalism is now about more than just selling advertising: it’s about finding new markets for data. Increasingly, purchasers in this marketplace will include governments, which are developing their own methods for using this data in ways that are unaccountable and often deeply worrying.
The development of such markets is the logical extension of the web economy, which has long found ways to monetize class division. The entire business model of surveillance capitalism operates by making judgments about us based on collective traits and stereotypes drawn from our membership in particular social groups. Joseph Turow has written about how our lives in the digital age are framed by consumption, which produces and reproduces our sense of identity. Online advertising “has embarked on a fundamental and systematic process of social discrimination,” according to Turow, and the effect is to create what he calls “reputation silos,” which can “accelerate the distance people feel between one another.”
Solidarity in a Digital Age
The cultural effects of these trends have real-world consequences for political organizing. Barbara Ehrenreich recently observed that “class is always aspirational in the sense of trying to connect people who haven’t been connected before.” If we understand our political project as one that does not seek to flatten diversity, but rather seeks to draw out the commonalities that arise as a function of class, what are the implications of the fact that our online world does just the opposite?
In his report on the digitization of welfare, the special rapporteur noted how advocates working in the field have “tended to see the technological dimensions as separate from the policy developments, rather than as being integrally linked.” Meanwhile, human rights activists in the digital space have understandably been focused on issues like the surveillance state, discriminatory algorithms, and surveillance capitalism. The growing chasm between these worlds favors those seeking to entrench social division in the digital age. We cannot afford to leave it unbridged. In many ways, all community organizing is digital organizing, and all political issues are technological issues.
A key part of the problem is that privacy as a right has been defined too narrowly, framed as the right to be left alone and little more. Part of our job, then, is to open up the more radical possibilities of this concept, to show that privacy is about the capacity to explore our personal faculties without judgment, to experiment in community-building on our own terms.
The right to privacy is the right to exist in a world in which data generated about you cannot be used as an indelible record of your identity. Privacy is not just a technical approach to information management delegated to individual responsibility. More substantively, it is about the capacity to determine our own sense of self as part of a collective.
This struggle shares commonalities with countless other struggles under capitalism that aim to resist oppressive identities that have been allocated to us at a systemic level, and it aspires to connect a diversity of people in this common purpose.
Frantz Fanon was a psychiatrist, political philosopher, and revolutionary born in the French colony of Martinique, most closely associated with his support for the Algerian war of independence. He wrote about how, growing up in a colonial society, his identity as a black man was curated by the colonial system, and how this process served those in power. His sense of self was defined by white supremacy:
I discovered my blackness, my ethnic characteristics; and I was battered down by tom-toms, cannibalism, intellectual deficiency, fetishism, racial defects, slave-ships, and above all else, above all: “sho’ good eatin’.”
For Fanon, his identity was not afforded the dignity of uniqueness or autonomy; the system of white supremacy had “woven me out of a thousand details, anecdotes, stories.” His account sounds a lot like how our datafied selves are generated and used against us in our everyday experience of online life. This centuries-old practice of oppression is being imported into the digital age.
A more expansive way to think about privacy, then, is to see it as a right to digital self-determination. It is about self-governance, the right to determine our own destiny and be free to write a history of our own sense of self. Self-determination has a long history in legal and philosophical thinking, but it gained new meaning in the latter half of the twentieth century during the explosion of postcolonial struggles, including in the struggle for Algerian independence that Fanon was involved in. There are good reasons to see the struggle for digital self-determination as a successor of these movements.
Advocating for digital rights is an essential part of any class-based movement. We deserve a digital life characterized by autonomy and empowerment. To achieve this, we will need to commit to finding ways, small and large, to break the technologies of surveillance used by both governments and corporations.
This echoes another of Fanon’s arguments, namely the importance of struggle: “At the same time that the colonized man braces himself to reject oppression, a radical transformation takes place within him which makes any attempt to maintain the colonial system impossible and shocking.” This ought to be the aspiration of digital self-determination today.
First published in Jacobin: https://www.jacobinmag.com/2019/12/digital-privacy-data-surveillance-google-amazon-facebook