Automated threats of a digital dystopia are closer than you think

February 21, 2020

One of the common tropes of the digital age is that we should be worried about the AI robots coming to kill our jobs, kill our relationships, and ultimately even kill us. The problem is often articulated as a dystopian imagining that we need to arrest before it becomes reality.

Such discussions are in many ways justified, given the immense potential of digital technology and its rapid expansion in a largely unregulated context. But however valid, the framing of these concerns also reveals something deeper. The people leading these conversations are too often rich white men — and, as researcher Kate Crawford points out, “perhaps for them, the biggest threat is the rise of an artificially intelligent apex predator. But for those who already face marginalisation or bias, the threats are here.”

The threats may not look like a villain from the latest comic-book franchise blockbuster, but they certainly already exist. The United Nations Special Rapporteur on Extreme Poverty recently released a report about the digitisation of welfare services, which lays out some of them in troubling detail. Governments in places like Australia, India, Europe and the United States are all experimenting (in real time, on real people) with digital technologies designed to streamline the delivery of welfare services.

Numerous countries are also using or exploring digital identity systems, including Argentina, Bangladesh, Chile, Ireland, Jamaica, Malaysia and the Philippines. These systems often use biometric information from individuals as a way to establish and authenticate their identity for government programs.

Such programs are often associated with the delivery of services that vulnerable people rely on to protect them from serious harm. But when they are designed poorly, the consequences can be grave. Losing access to welfare payments can mean destitution, being incorrectly ruled fit to work can put a person’s health at risk, and wrongly removing a child can have devastating consequences for both the child and the family.

When these decisions are made badly, infected with bias and lacking in accountability, the outcomes can be damaging and even deadly, causing immense human suffering. Too often, as the Special Rapporteur points out, this has been the outcome of such digitisation.

The fully automated, artificially intelligent predator of our networked nightmares may be a government department, irresistibly attracted to cheap technologies that provide a veneer of objectivity.

An associated threat comes from the private sector. The Special Rapporteur observed that many large technology corporations operate in ‘an almost human rights free-zone’, a problem exacerbated by the fact that ‘the private sector is taking a leading role in designing, constructing, and even operating significant parts of the digital welfare state.’ Outsourcing to the private sector limits our capacity to hold decision makers to account for design decisions. But the potential threats here are much greater than mere deflection of responsibility.

When companies are involved in the design of public service delivery, they import a whole different set of values and associated risks. Consider that a provider of electronic health records software was recently prosecuted in the United States for running a scheme that encouraged doctors to prescribe opioids in exchange for kickbacks from pharmaceutical companies. Almost everyone would classify the over-prescribing of opioids as a key contributor to a costly and devastating health crisis, but for that company, it was a lucrative business opportunity.

The great systems theorist Stafford Beer reminded us that the purpose of a system is what it does. The central problem with these systems is that they do not appear to be designed around empowerment, public participation, dignity for vulnerable people, or forms of accountability for decision makers – all ideas that are the lifeblood of human rights thinking. Rather, these programs are about making government service delivery more efficient. Which, in case it is not obvious, is very much not the same thing.

Too often, discussions about human rights in the digital age bring to mind small groups at specific risk, like journalists or whistleblowers. The threat model we usually think about is understandably focused on those rare few who have been subjected to the full force of the surveillance state. But the datafied self, gradually and sometimes subtly assembled via government and corporate systems of surveillance, has very material, very ubiquitous consequences for the much larger cohorts of people in many societies.

If you are poor, the threats you face from the surveillance apparatus and automated decision making are just as real. ‘America’s poor and working-class people have long been subject to invasive surveillance,’ writes Virginia Eubanks in Automating Inequality. In the digital age, she argues, this exercise serves to allow societies to ‘manage the individual poor in order to escape our shared responsibility for eradicating poverty.’ So while one purpose of digital surveillance is undoubtedly to chill dissent, another is to establish a system for treating poor people not as rights holders but as a cohort to be managed.

It is possible to turn this around. We need to advocate for transparency in the automation of government decision-making, and for an active policy of non-discrimination in design, including slowing the pace of rollout so that proper testing and public engagement can take place.

We need to minimise reliance on the private sector, and find ways to build capacity within civil society and the civil service to shape the process of design, ensuring that those affected are involved not as objects to be tested on, but as subjects with dignity, agency and experience that can inform inclusive and empowering design. Getting there will require human rights regulators to devote resources to mapping the problem, civil society organisations to take such questions seriously, and digital rights organisations to lead efforts at collaboration across diverse fields.

Left unchecked, this phenomenon will solidify practices of oppression and prejudice that underpin poverty and many violations of human rights in daily life.

Any civil society organisation that works on human rights issues for vulnerable people has a stake in debates about automated government decision-making and the digital identity systems that support it. Advancing human rights in the digital environment cannot and should not be a task that is left to digital rights activists alone, but requires a broad coalition of voices representing the diverse and widespread body of people affected.

First published in International Observatory for Human Rights: https://observatoryihr.org/blog/automated-threats-of-a-digital-dystopia-are-closer-than-you-think/