By Philip Di Salvo
The COVID-19 pandemic has been both a prism and an amplifier for all things data. It has exposed underlying issues that demand the attention of academics, activists, journalists, and policy makers. Health emergencies are enormous stress tests for civil rights and freedoms, and for the platforms through which societies come together. With most of the world’s population under lockdown or subjected to monitoring, digital platforms and internet infrastructures have become the leading spaces where social life takes place. This may sound obvious now, but as Franco “Bifo” Berardi wrote in his pandemic-influenced book Fenomenologia della fine, COVID-19 has globally recodified the assumptions of our societies, so we must also consider their datafied sides. While we live on the internet more than ever, access to tools, basic services, and social environments is becoming increasingly unequal. Such inequalities have grown through the uneven distribution of opportunities and resources, and through the exclusionary design of socially impactful technologies.
In a piece for Open Democracy written from the Dutch and Italian lockdowns last spring, Stefania Milan and I tried to identify “four enemies” emerging from the pandemic in the context of the “datafied society.” Back then, we argued that the pandemic was accelerating “potentially dangerous dynamics” capable of causing huge collective damage. In the fall of 2020, those dynamics exploded in plain sight, exacerbated by the long-anticipated “second wave” of the virus and by political interventions worldwide. As we expected, the pandemic reformulated the relationships between tech, power, and justice, as Linnet Taylor, Gargi Sharma, Aaron Martin, and Shazade Jameson claim in their book Data Justice and COVID-19: Global Perspectives. The outcomes of this reformulation have not yet fully manifested, but they are already visible in some domains, especially among the most marginalised communities. In this essay, I will discuss four keywords: solutionism, surveillance, borders, and infrastructures.
The pandemic has been accompanied by a new wave of solutionism in policy making, healthcare, and beyond. Solutionism has been described by Evgeny Morozov as the “idea that given the right code, algorithms and robots, technology can solve all of mankind’s problems.” We heard many such calls during the pandemic, especially when the release of contact-tracing apps was heralded as the “silver bullet” against the spread of the virus. In Italy, the government adopted privacy-respectful solutions and frameworks for its national app Immuni (“the immune ones”). However, that sensitivity came only after weeks of pressure from privacy activists, academics, and journalists to avoid more invasive software solutions. Even in established democracies, China was frequently described as a model to follow, especially with regard to the tracking of citizens during the pandemic. Although the pressure in Italy led to better decisions and an improved app, privacy and surveillance are not the only potential problems with these apps. While they can help trace cases and are one more tool that states can adopt in the battle against COVID-19, they are far from a fundamental solution.
Even when privacy-respectful, contact-tracing apps may exclude enormous segments of the population. Singapore has been an interesting and dramatic case study in this regard, not least because the city-state has frequently been held up as an exemplary responder to the pandemic, especially in its use of technology. As the BBC reports, though, its “success crumbled when the virus reached its many foreign worker dormitories,” which are home to over 300,000 low-wage foreign workers living in inadequate conditions, where social distancing is impossible and contact-tracing apps fail in their mission. As the number of cases in the dorms skyrocketed, Singaporean authorities started releasing two different sets of statistics about the contagion: one for the city community, and one for the population in the dorms. Excluded from assistance and prevention, foreign workers were even hidden from the main data, ending up in dedicated statistics that highlight a clear pattern of inequality. Stories of exclusion and blatant inequality related to technological responses to the pandemic have emerged from all over the world, including in developed and fully democratic countries. In Canada, for instance, it has been reported that the national contact-tracing app was released in French and English only, a further sign of exclusion for the four million Canadians who do not command those languages. In the UK, an expert board reporting to the government highlighted that some 21% of UK adults do not use a smartphone, de facto excluding them from access to contact-tracing apps. In Italy, the national contact-tracing app does not run on an array of older Android and Apple phones (and has shown bugs on more recent models too), making income and familiarity with consumer electronics decisive factors in the app’s spread among the Italian population.
The predominance of older smartphones in Italy has been indicated as a driver of the app’s low adoption, as Wired reports. Although the Singaporean and Western stories cannot be put on the same level in terms of severity, it is clear that at every latitude technological determinism, pushed with excessive emphasis on “smart” and “shiny” digital technologies, may lead to forms of inequality and exclusion. Furthermore, evidence for the effectiveness of contact-tracing apps remains limited, as The Lancet reported in August.
Whereas much of the debate about privacy in the context of the COVID-19 pandemic has focused on contact-tracing apps, they are certainly not the only potentially harmful technology revitalized in recent months. Surveillance studies scholars Martin French and Torin Monahan have pointed out that there is “evidence of surveillance dynamics at play with how bodies and pathogens are being measured, tracked, predicted, and regulated.” Controlling the spread of a pandemic, in other words, involves forms of surveillance. According to an AlgorithmWatch report, the pandemic has seen an acceleration in the adoption of various monitoring technologies and automated decision-making systems, including bracelets, selfie apps, thermal scanners, facial recognition systems, and programs for digital data collection and analysis. As AlgorithmWatch asks, are these technologies becoming the “new normal?” Their implementation has frequently been supported by a deterministic approach, raising critical questions about informed consent and about the impact of such technologies on our fundamental rights. As noted at the beginning of this essay, global emergencies are also stress tests for societies and democracies at large, since they are forced to cope with extraordinary situations. As Elise Racine, a research associate at A Path for Europe (PfEU), argues, the “risk for function creep means that these tools may be co-opted by other security initiatives.” In this way, data-driven technologies may endanger the fundamental rights of the most vulnerable, who are more exposed to abusive forms of monitoring and surveillance.
The pandemic has revitalized the appetite for surveillance all around the world, with facial recognition and other controversial technologies leading the way. As the Centre for Security Studies at ETH Zürich reports, the market for surveillance cameras is expected to grow substantially in 2021, reaching 300,000 new cameras installed every day globally and a billion cameras in place by the end of the year. Democratic institutions are at stake, since such intrusive technologies undermine democratic values and have been shown to be disproportionately used to target minorities and exacerbate racial biases.
Examples of facial recognition being used to enforce COVID-19 restrictions have already emerged from Russia, where Moscow’s enormous network of cameras has been used to control residents during the lockdown. Even in democratic contexts like Italy, facial recognition is making its way into public spaces, often pushed as a migration-containment strategy, as happened in the Italian city of Como: another sign that the most vulnerable communities of our societies are also the most exposed to constant monitoring. Crises set new standards. Are we slowly moving into a surveillance state, where immediate health measures pave the way for overreaching forms of surveillance that are here to stay? Without proper testing, clear frameworks, and guidelines, we risk endorsing a normalization of surveillance whose effects could be difficult to assess and take years to roll back.
Borders have traditionally been surveilled. Unsurprisingly, technologies for monitoring borders are also being adopted at an accelerating pace across the world, riding on promises of making life easier and safer during the pandemic. Whereas boarding a plane without touching any surface may sound like a viable way to prevent the further spread of the virus, boarding a plane only through facial recognition raises obvious privacy concerns. The datafied “immunity passports” now being discussed in various countries pose serious threats to various segments of the population. They have been sold as another “crisis-response that depends on technology, as we saw with contact-tracing apps,” writes Privacy International. As the London-based NGO argues, these technical solutions are currently being hyped and pushed by private actors involved in travel and border services, but their adoption may have serious impacts on citizens’ right to movement and on the lives of those most discriminated against. These tools may also become useful for profiling, as they may give “the police and security services more powers to not only know information about our health, but also to stop people and demand proof of immunity in certain situations,” as Privacy International again argues. The global lockdown has also deeply changed the nature of geographical borders and their political meanings, as migrants have been disproportionately victimized by this new status quo. Frequently, migrants and refugees failed to be included in COVID-19 statistics and figures, given their invisibility. Refugees are usually the first targets of the datafied surveillance practices discussed here. In April, the Bureau of Investigative Journalism reported that the digital monitoring and surveillance practices now being adopted during the pandemic were originally tested on refugees and migrants during the 2015 migration crisis in Europe.
In Singapore, migrant workers have been forced to download a contact-tracing app, while Russia is reportedly considering following suit. Vulnerable communities, like migrants on the move, who already suffer from weaker safeguards for their rights and freedoms, are increasingly becoming a testing ground for datafied monitoring practices that may end up standardized in a post-pandemic world.
Digital infrastructures and platforms have gained new centrality in our daily lives because of the pandemic. Remote work, remote teaching, and public services were forced to migrate online and still rely on digital tools to function. This evolution has profound implications in a society pushing for ever more datafication. It is time to ask: what are the long-term implications of making private services the de facto infrastructure of social life, citizenship, and agency? Returning to contact-tracing apps as an example, there is little doubt that the framework provided by the Apple-Google alliance made a privacy-respectful structure readily available. Yet we should demand greater transparency when such powerful companies become official suppliers of digital infrastructures used for health services. The power balance between nation states and private entities is at stake. Most urgently, as David Lyon urges, the pandemic should be the moment when we start considering the implications of surveillance beyond the privacy issue alone. More is at stake, because surveillance has become a structural element of today’s societies. With most of our lives moving online, we are also moving into spaces where what Shoshana Zuboff calls “surveillance capitalism” is the ruling economic, political, and social structure. Surveillance capitalism increasingly exposes all of society’s activities to extended datafication: the constant monitoring, sorting, and profiling of people for profit. It is time to build exit strategies and new forms of resistance; the datafied society is now an established reality, and one already shaped by global crises such as a pandemic. The view from inside this crisis has shown that, in its current shape, the datafied society increasingly works against its own citizens.
Philip Di Salvo is a post-doc and professor at the Media and Journalism Institute of Università della Svizzera italiana in Lugano, Switzerland. His areas of research include leaks, the relationship between journalism and hacking, and internet surveillance. His latest books are Leaks. Whistleblowing e hacking nell’età senza segreti (LUISS University Press, 2019) and Digital Whistleblowing Platforms in Journalism. Encrypting Leaks (Palgrave Macmillan, 2020). He tweets at @philipdisalvo.