Author: Stefania

Stefania at the presentation of the book ‘Lives of Data. Essays on Computational Cultures from India’

On February 19th, 5pm Indian time (12.30 CET) Stefania will join the presentation of the book ‘Lives of Data. Essays on Computational Cultures from India’, edited by Sandeep Mertia and published by the Institute of Network Cultures (2020). The volume is open access and can be downloaded from this link.

Lives of Data is based on research projects and workshops at the Sarai programme of the CSDS. The book brings together fifteen interdisciplinary scholars and practitioners to open up inquiries into computational cultures in India. Encompassing history, anthropology, science and technology studies (STS), media studies, civic technology, data science, digital humanities and journalism, the essays create possibilities for a cross-disciplinary dialogue on data. Lives of Data is an open access publication of the Institute of Network Cultures, Amsterdam, in collaboration with the Sarai programme of the CSDS.

Sandeep Mertia is a PhD Candidate at the Department of Media, Culture, and Communication, and Urban Doctoral Fellow, New York City.

Jahnavi Phalkey is Founding Director of Science Gallery, Bengaluru.

Stefania Milan is Associate Professor of New Media, University of Amsterdam.

Nimmi Rangaswamy is Associate Professor at IIIT and Adjunct Professor at IIT, both at Hyderabad.

Ravi Sundaram is Professor at Centre for the Study of Developing Societies, Delhi.

The discussion will be held on Zoom.

Link: http://bit.ly/3qjKnEo

Meeting ID: 991 2507 4788

Passcode: csdsdelhi

The full invite can be found here.

 

[BigDataSur-COVID19] How biometric surveillance is creeping into public transport

During the pandemic, essential workers have been among the most vulnerable. This article discusses how the surveillance introduced to contain COVID-19 is very likely to be normalized in the post-pandemic context.

by Laura Carrer and Riccardo Coluccini

 

COVID-19 has shown how essential workers, while fundamental to our societies, are constantly exploited and marginalized. This is all the more true if we consider how remote working has fundamentally changed our perception of public space: working from home is a privilege for the few, while public space becomes something to be monitored. Many essential workers are still forced to commute to their workplaces using public transport, and tech companies are taking advantage of the pandemic to introduce anti-COVID solutions that further push the datafication of our lives. We see the deployment of video surveillance systems enhanced by algorithms to monitor the distance between people on public transport, and software that can detect a person's face and temperature and check whether they are wearing a face mask. Forced to move through our public spaces, essential workers become guinea pigs for technological experiments that risk further normalizing biometric surveillance.

 

The COVID-19 pandemic has created a watershed in how we inhabit public space: while some privileged segments of the world's population have benefited from remote work, millions of people in healthcare, education, food service, logistics and manufacturing have not had the same privilege. They have often worked without adequate personal protective equipment, continuing to commute, when possible, by public transport. Very often these essential workers also belong to minorities, and have therefore borne the heavy toll of the COVID-19 pandemic twice over, paying a very high price.

While there seemed to be a new awareness of these essential workers, the only ones still moving about and guaranteeing a degree of normality in our daily lives during the pandemic, these same people now risk ending up at the center of a disturbing new technological experiment that could normalize the use of surveillance within our cities.

Public transport has become the testing ground for anti-COVID technological solutions based on video surveillance: from algorithms that monitor the distance between passengers on board, to software able to recognize whether or not a person is wearing a face mask.

Technological innovation seems to be driving the pandemic response worldwide, not only in the form of contact tracing apps but also, and above all, by exploiting already widespread video surveillance infrastructure. In Mexico City, the city's video surveillance system was quickly repurposed to monitor mask use. In Moscow, a capillary network of more than 100,000 cameras was used to monitor, in real time, citizens who had tested positive for the coronavirus and left their homes for various reasons. In Mexico, the first national facial recognition system (in the state of Coahuila), implemented in 2019, added thermal detection in April 2020, one month after the start of the pandemic. Pre-existing infrastructure inevitably makes normalization and state control of citizens easier.

All of this often happens at the expense of a proper assessment of the risks to human rights, and it is expanding, with little transparency, onto public transport as well.

Last May, cameras capable of monitoring the number of passengers and actual mask use were introduced on the Paris metro lines. The same technology was introduced in some open-air markets and on buses in the city of Cannes. Similar technologies have been introduced in India on long-distance buses and in some railway stations.

In January 2021, New Jersey's transit system announced it would test a series of technologies to detect temperature, identify mask use, and employ artificial intelligence algorithms to monitor the flow of people. In China, the transport company Shanghai Sunwin Bus has already introduced what it calls "Healthcare Buses", equipped with biometric technologies.

Companies in the sector have been quick to exploit this opening to advertise their technologies, for example Hikvision, a global manufacturer of surveillance cameras. In Italy, RECO3.26, the company that supplies the facial recognition system used by the Italian forensic police, immediately took advantage of the situation by offering a suite of anti-COVID products: these include DPI Check, which verifies whether people within the monitored area are wearing surgical masks; Crowd Detection and People Counting, which monitor gatherings; and functions for measuring, in real time, the safety distance between people under video surveillance and for detecting body temperature. In Italy, some of these technologies were promptly purchased by Milan's public transport company ATM. It is not clear whether the Italian Data Protection Authority was informed.

The use of these technologies, besides being invoked as the primary and most efficient answer to a far more complex and intricate emergency, is problematic from another standpoint as well. The US National Institute of Standards and Technology (NIST) recently published a report analyzing the facial recognition software currently on the market, highlighting that its accuracy is very low, especially now that mask-wearing is mandatory in many countries around the world. Given how biometric technology is used today, it is a price that many people, above all those belonging to groups already subject to widespread discrimination, will be forced to pay dearly.

In today's techno-solutionist and techno-optimist narrative, surveilling bodies to counter a fast-spreading virus can seem like the only way out. In many cases essential workers are already subject to workplace surveillance, as with the technologies developed by Amazon to monitor its warehouses, but this surveillance now risks expanding and taking over even more of our public spaces. The Commission nationale de l'informatique et des libertés (CNIL), the French data protection authority, has already stressed that this technology "carries the risk of normalizing the feeling of surveillance among citizens, of creating a phenomenon of habituation and trivialization of intrusive technologies." In the case of Cannes, the CNIL's intervention led to the shutdown of the mask-monitoring system.

The pan-European campaign Reclaim Your Face is trying to warn about the effects that control delegated to technology can have on our lives, and about how our public spaces risk being turned into dehumanizing places: the false perception of security and the chilling effect, the change in our behavior when we know we are being watched, are its most concrete examples. Does having cameras trained on us wherever we go really make us feel safer? And when this assumption is repeatedly disproven by studies and news reports, what will the next solution be? How, then, will we cope with the growing possibility of no longer being truly able to move freely in public space for fear of being judged? The gaze of algorithms strips us of every form of humanity and reduces us to empty categories and digital data.

In this way, people forced to leave home to go to work become guinea pigs for technological experiments, effectively normalizing surveillance. Public space is reduced to a laboratory, and all essential workers risk being turned into lifeless digital data.

 

About the authors

Laura Carrer is head of FOI at Transparency International Italy and researcher at the Hermes Center for Transparency and Digital Human Rights. She is also a freelance journalist writing on facial recognition, digital rights and gender issues.

Riccardo Coluccini is one of the Vice Presidents of the Italian NGO Hermes Center for Transparency and Digital Human Rights. He is also a freelance journalist writing about hacking, surveillance and digital rights.

 

 

[BigDataSur-COVID] Consent Design Flaws in Aarogya Setu and The Health Stack

by Gyan Tripathi and Setu Bandh Upadhyay

“The use of a person’s body or space without his consent to obtain information about him invades an area of personal privacy essential to the maintenance of his human dignity,” observed the Supreme Court of Canada in Her Majesty The Queen v. Brandon Roy Dyment, [1988] 2 SCR 417.

The Government of India released its digital contact tracing application “Aarogya Setu” (the app) on April 2, 2020, following a wave of similar digital contact tracing (DCT) applications worldwide. Some DCT apps, like Singapore’s, have been largely successful, while others, like Norway’s, had to be pulled following an assessment by the country’s data protection authority, which found that the application posed a disproportionate threat to user privacy, including by continuously uploading people’s location. Notably, Aarogya Setu not only continuously collects people’s location, but also binds it to other Personally Identifiable Information (PII).

While India has more than 17 other similar apps at various state levels, Aarogya Setu is perhaps the most ambitious digital contact tracing tool in the world. However, the app has been at the center of heavy public backlash for posing a grave threat to the constitutionally guaranteed right to privacy.

According to the much-celebrated judgment in K. S. Puttaswamy v. Union of India (the judgment), any restriction on the fundamental right to privacy must pass a three-prong test: legality, which postulates the existence of a law; need, defined in terms of a legitimate state aim; and proportionality, which ensures a rational nexus between the objects and the means adopted to achieve them. Aarogya Setu fails on all three counts: it lacks any legislative backing; the objectives the state sought to achieve through the application were unclear and kept shifting; and it is disproportionate owing to the huge amount of Personally Identifiable Information (PII) it collects, the near-opaque team of researchers who ‘volunteered’ to build it, the faulty technology used, the absence of clear guidelines on usage and data storage, and the lack of any data protection authority oversight.

Following a slew of legal challenges and public outcry, the government released the Aarogya Setu Data Sharing and Storage Protocol (the protocol), intended to govern the sharing of data collected by the app between governments (Central and State), administrative bodies and medical institutions. However, the protocol continued to lack an effective mechanism to check its practicality and execution. Subsequent responses to Right to Information queries revealed that the data management and sharing protocols envisaged in the document were never realized. Earlier, various activists and security experts had criticized the government for releasing an incomplete source code while claiming that it was making the application ‘open source’. In the case of Aarogya Setu, therefore, there was a systematic breakdown of established laws and reasonable expectations of privacy.

While the judgment also talks about granting citizens more practical ways of controlling their information, as does Section 11 of the proposed Personal Data Protection Bill, 2019 by way of specific consent, the very architecture of the application does not allow users to exercise control over their data. In the event that a person tests positive for the novel coronavirus, the application uploads not only their data but also the data of everyone they came in contact with over the previous fourteen days.

The 9th Empowered Group, constituted by the Union Government for ‘Technology & Data Management’, opted for two-way communication between the application and the dashboard of Ayushman Bharat, the umbrella healthcare scheme in India, in order to do away with discrepancies and/or duplication of data on individuals who had tested positive. This was revealed by the minutes of the meeting, obtained under the Right to Information by the Internet Freedom Foundation. The minutes show that the data collected through Aarogya Setu was not only integrated with Ayushman Bharat but also in communication with Aarogya Rekha, the geo-fencing surveillance system employed by governments to enforce quarantine measures and track those placed under mandatory quarantine, whether institutional or at home.

Fears of scope creep are already materializing in the Aarogya Setu development team’s plans to integrate telemedicine, e-pharmacies, and home diagnostics into the app in a separate section called AarogyaSetu Mitr.

On 7 August, the National Digital Health Mission (NDHM) released its strategy document detailing the requirement to digitize all medical registries and thereby create a National Health Stack (the health stack), based on a June 2018 white paper by NITI Aayog, the policy think tank of the Government of India. The National Health Authority, the nodal agency for Ayushman Bharat, indicated that it would migrate all data collected by the Aarogya Setu application and integrate it with the health stack. Various media reports and occasional public statements have confirmed that the data collected by the Aarogya Setu app would be the starting point for the health stack.

Herein lies a grave concern: owing to the faulty data collection mechanism of the application, the lack of express consent for data sharing with the health stack, and inherent flaws within the health stack itself, millions will be put at risk of algorithmic or systematic exclusion. There is a massive deficit in the competence and effort of public and private providers of healthcare services in India. Healthcare workers are often absent for much of their working time, and even when they are present, conditions such as the lack of proper equipment and facilities are a major obstacle. As algorithms and artificial intelligence systems become commonplace in the healthcare sector, on the pretext of being more cost-effective and accurate, equal attention should be paid to the lack of records, the already stretched health infrastructure, outdated research, and overburdened medical institutions and personnel. The subsequent use of the data collected, and the use of automated tools for decision-making, might also exacerbate existing problems such as the underrepresentation of minorities, women, and non-cis males.

There is no specific legislation concerning the disclosure of medical records in India. However, under the regulations notified by the Indian Medical Council, every medical professional is obligated to maintain physician-patient confidentiality. This obligation does not extend, however, to other entities, third parties, and data processors responsible for processing patient data, whether under the mandate of a state body or a body corporate.

At present, India has the outdated Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 in force, but the rules fail to provide a comprehensive framework in line with internationally accepted practices. On matters of health information security, India currently has a draft Digital Information Security in Healthcare Act, which provides for the establishment of eHealth Authorities and Health Information Exchanges at the central as well as the state level.

Computational systems are mostly data-driven and ultimately rest on the brute force of complex statistical calculations. Since the technical architecture of the proposed National Health Stack is unknown at the moment, this adds to the uncertainty about how the shared data would be used. It raises, as Prof. Hildebrandt points out, the question of the extent to which such design should support legal requirements, thus contributing to interactions that fit the system of checks and balances typical of a society that demands that all of its human and institutional agents be “under the rule of law”. The issue of consent is inherent to the rule of law, as in the digital social contract it ensures the individual right to self-determination.

The need for informed consent overlaps with the ‘purpose limitation’ and ‘collection limitation’ principles, part of the core Fair Information Principles (FIPs) set out in the OECD Guidelines governing the protection of privacy and transborder flows of personal data, first published in 1980. The principles stipulate that “there should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject”, all while ensuring that “the purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose”.

Privacy, like other abstract and subjective freedoms, cannot be reduced to the fulfillment of certain conditions, nor can it be given a delineated shape. However, we must endeavor to give users at least some level of control, so that they can better understand and balance privacy considerations against countervailing interests.

 

About the authors

Gyan Tripathi is a student of law at Symbiosis International (Deemed University), Pune, and a Research Associate with Scriboard [Advocates and Legal Consultants]. He particularly loves to research the intersection of technology and law and its impact on society. He tweets at @tripathi_gy.

Setu Bandh Upadhyay is a lawyer and policy analyst working on Technology Policy issues in the global south. Along with a law degree, he holds a graduate Public Policy degree from the Central European University. He has a diverse set of experiences working with different stakeholders in India, East Africa, and Europe. Currently, he is also serving as the Country Expert for India for the Varieties of Democracy (V-Dem project). He tweets at @setubupadhyay.

NOW OUT: COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society (free download)

We are thrilled to announce the publication of the collection “COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society”, edited by Stefania Milan, Emiliano Treré (Cardiff University) and Silvia Masiero (University of Oslo) for the Theory on Demand series of the Institute of Network Cultures!

The book explores pandemic invisibilities and datafied policies, but also forms of resistance and creativity of communities at the margins as they try to negotiate survival during the COVID-19 crisis. It features 75 authors writing in 5 languages in 282 pages that amplify the silenced voices of the first pandemic of the datafied society. In so doing, it seeks to de-center dominant ways of being and knowing while contributing a decolonial approach to the narration of the COVID-19 emergency. It brings researchers, activists, practitioners, and communities on the ground into dialogue to offer critical reflections in near-real time and in an accessible language, from indigenous groups in New Zealand to impoverished families in Spain, from data activists in South Africa to gig workers in India, from feminicidios in Mexico to North/South stereotypes in Europe, from astronomers in Brazil to questions of infrastructure in Russia and GitHub activism in China—and much more!

The book is **open access**. You can download the .pdf and .epub versions from this page.
While supplies last, we are also distributing printed copies for free (use the same link to order yours).

“COVID-19 from the Margins caringly and thoughtfully demonstrates why the multiplicity we call “the poor” is more than ever at the receiving end of the worst effects of globalized, patriarchal/colonial racist capitalism. But they are not passive victims, for their everyday forms of activism and re-existence, including their daily tweaking of the digital for purposes of community, care, and survival, has incredible insights about design and digital justice that this book takes to heart as we strive to undo the lethal effects of ‘the first pandemic of the datafied society’ “, wrote about the book Colombian anthropologist Arturo Escobar, author of ‘Designs for the Pluriverse. Radical Interdependence, Autonomy, and the Making of Worlds’ (Duke UP, 2018).

A number of book launch events will follow in the coming weeks. Visit this website to stay tuned, or follow the project on Twitter (@BigDataSur).

We wish to thank a number of sponsors without whom this project and the blog where it all started would not have been possible. In order of appearance, the Amsterdam School of Cultural Analysis, the School of Journalism, Media and Culture at Cardiff University, the European Research Council, and the Research Priority Areas of the University of Amsterdam Global Digital Cultures and Amsterdam Center for European Studies. Finally, a big heartfelt thanks goes to Geert Lovink and his INC team, for believing in this project from the start and giving us the chance to experiment with multilingualism and knowledge sharing.

Stefania on ‘Tech-Based States of Emergency’ (PRIO, January 27)

On January the 27th, Stefania will contribute to the event ‘Tech-Based States of Emergency: Public Responses and Societal Implications’ organised by the Peace Research Institute Oslo (PRIO). She will join Brenda Jimris-Rekve (Basic Internet Foundation) and Sean Boots (Canadian Digital Services) as speakers, with Maria Gabrielsen Jumbert and Kristoffer Lidén from PRIO as moderators, and an introduction by Bruno Oliveira Martins (PRIO), project leader of “States of Emergency as Disruptive Pandemic Politics”. As this is a virtual event, you can register to attend following this link.

Event description: Tech-Based States of Emergency: Public Responses and Societal implications

A once-in-a-century pandemic challenges global stability, threatening the lives of millions and the economic well-being of most countries on earth. Many states have invoked states of emergency as the world collectively faces the challenges posed by the COVID-19 pandemic. States have relied on technologies to help mitigate the spread of the disease, deploying metadata analysis, geolocation tracking, facial recognition screening and drones. But the resort to tech-based solutions to a complex social problem raises new questions that demand public and societal scrutiny.

just out: ‘Latin American Visions for a Digital New Deal: Towards Buen Vivir with Data’

Stefania Milan and Emiliano Treré, co-founders of the Big Data from the South Research Initiative, have contributed a piece entitled ‘Latin American Visions for a Digital New Deal: Towards Buen Vivir with Data‘ to the essay collection ‘A Digital New Deal. Visions of Justice in a Post-Covid World‘, edited by the Just Net Coalition and IT for Change (India). Their piece is accompanied by a beautiful illustration by Mansi Thakkar.

Read the project description, and download the full collection as pdf from this link.

The Just Net Coalition and IT for Change invite you to explore and engage with our Digital New Deal essay series, a thoughtfully curated set of long reads authored by passionate and committed scholars, activists and visionaries from around the world. In these essays, authors reflect on the current global Covid moment and its challenges from various standpoints, and on how the digital fits into this equation. From activists steeped in long-standing battles against corporate capture of our resources and pushing for food sovereignty, labor rights, climate justice and equitable development, to scholars pondering the new questions of the internet, data, AI and the state of our public sphere, to practitioners seeking to address the disenfranchisement of countless communities and people from digital systems, the Digital New Deal captures the current anxieties, challenges, hopes and visions for the future. Beyond calling out what ails the world, our authors set for themselves, in these poignant, informative, and radical pieces, the difficult challenge of outlining progressive solutions: to future-gaze, imagine new possibilities and reclaim the digital for justice.

 

 

Niels and Stefania at Privacy Camp

On January the 26th, Niels ten Oever and Stefania Milan will take part in the annual Privacy Camp, this year held in virtual format only.

Both will feature in the panel “Wiring digital justice: Embedding rights in Internet governance ‘by infrastructure’” (12.05-13), where Niels is a speaker and Stefania co-moderates together with Francesca Musiani (CNRS Paris). Find out more about the topic and the speaker line-up. Niels and Francesca are also the organisers of the session.

Later in the day, Stefania will contribute to the panel “Reclaim Your Face, Reclaim Your Space: resisting the criminalisation of public spaces under biometric mass surveillance” (14.05-15), organised by Ella Jakubowska (European Digital Rights). Further details can be found here.

 

[BigDataSur-COVID] Digital Social Protection during COVID-19: The Shifted Meaning of Data during the Pandemic

Silvia Masiero reflects on changes in digital social protection during the pandemic, outlines the implications of such changes for data justice, and co-proposes an initiative to discuss them.

by Silvia Masiero

One year ago today I left a field site for the last time. Saying goodbye to friends and colleagues in India, promising to return as usual for the Easter break, it was hard to imagine being suddenly plunged into the world we live in today. As a researcher of social protection schemes, little did I know that my research universe – digital anti-poverty programmes across the Global South – would change as much as it has over the last 12 months. As I have recently noted in an open commentary, COVID-19 has had manifold implications for social protection systems, implications that require reflection as conditions of pandemic exceptionalism persist over time and across regions.

The Shifted Meaning of Beneficiary Data

My latest study was on a farmer subsidy programme based on the datafication of recipients – a term that indicates, building on previous work, the conversion of human beneficiaries into machine-readable data. The programme epitomises the larger global trend of matching demographic and, increasingly, biometric credentials of individuals with data on eligibility for anti-poverty schemes, such as poverty status, family size and membership of protected groups. Seeding social protection databases with biometric details, a practice exemplified by India’s Aadhaar, is supposed to combat exclusion and inclusion errors alike, assigning benefits to all entitled subjects while weeding out the non-entitled. At the same time, quantitative and qualitative research has shown the limits of datafication, especially its role in reinforcing the exclusion of entitled subjects whose ability to authenticate is reduced by failures in recognition, sometimes resulting in the denial of vital schemes.

During the pandemic, as numerous contributions to this blog have illustrated, existing vulnerabilities have deepened and new ones have emerged, expanding the pool of people in need of social protection. Instances of the former are daily-wage and gig workers, who have seen their existing subalternities deepened during the pandemic through loss of income or severely heightened risks at work. Instances of new vulnerabilities, instead, are associated with the “new poor” of the pandemic, affected in many ways by the backlash of economic paralysis across the globe. The result is a heightened global need for social protection that works smoothly, making the affordance of inclusiveness – being able to cover the (old and new) needful – arguably take priority over that of exclusiveness, aimed at “curbing fraud” through secure biometric identification.

Since its launch in May 2020, this blog has hosted contributions on social protection schemes from countries including Colombia, Peru, India, Brazil and Spain, all highlighting the heightened need for social protection under COVID-19. While describing different realities, all contributions remark how the vulnerabilities brought by COVID-19 call for means to combat wrongful exclusions, for example using excess stocks of commodities to expand scheme coverage. Against the backdrop of a world in which the priority was “curbing fraud” through the most up-to-date biometrics, the pandemic threw us into a world in which inclusion of the needful takes priority among the purposes of anti-poverty scheme datafication. The first implication, for researchers of digital social protection, is the need to devise ways to learn from examples of expanded coverage in social protection, of which India’s National Food Security Act has offered an important instantiation over the last decade.

Social Protection in the Pandemic: New Data Injustices

As the edited book “Data Justice and COVID-19: Global Perspectives” notes, the hybrid public-private architectures that emerged during COVID-19 have generated new forms of data injustice, detailed in the volume through 33 country cases. Along with important debates on the meaning of data in a post-pandemic world, the book opens the question of the data justice implications of COVID-19 for digital social protection. Drawing on contributions published in this blog, as well as reports of social protection initiatives taken during the pandemic, I have recently highlighted three forms of data injustice – legal, informational and design-related – that need monitoring as the pandemic scenario persists.

From a legal perspective, injustice arises from the subordination of entitlements to users’ registration in biometric databases, which becomes a condition for access – leading to scenarios of forced trading of data for entitlements, widely explored in the literature before COVID-19. The heightened need for social protection in the pandemic deepens the adverse implications of exclusion, exacerbating the consequences of injustice for those absent from the biometric datasets. Stories from urban poor contexts ranging from Nebraska, US to São Paulo, Brazil, underscore the same point: the legal data injustice of exclusion was problematic before, and it has become even more acute amid the economic backlash of the pandemic on the poor.

From an informational perspective, the way entitlements are determined – specifically, the use of citizens’ information across databases to determine entitlements – has become crucial during the pandemic. Two cases from this blog detail this point especially well. In Colombia, information to determine eligibility for the Ingreso Solidario (Solidarity Income) programme was combined from existing data repositories, but without detail on how the algorithm combined information and thus on how eligibility was determined. In Peru, subsidies have leveraged information gathered through databases such as the Census, the property registry and electricity consumption records, again without clarity on how the information was combined. Uncertainty about eligibility criteria, beyond deepening pandemic distress, arguably limits beneficiaries’ ability to contest eligibility decisions, due to lack of clarity on the very grounds on which they are taken.

Finally, design-related data injustices arise from the misalignment of datafied social protection schemes with the effective needs of beneficiaries. In the pandemic, the trade-off inherent in biometric social protection – increased accuracy of identification at the cost of greater exclusion – has been pushed to its extreme consequences, as extreme are the implications of denial of subsidy for households left out of social protection schemes. This brings to light a trade-off whose problems were well known before the pandemic started, and further highlighted by studies questioning the effective ability of biometrics to increase the accuracy of targeting. As a result, this third, design-related form of data injustice also needs monitoring as we trace the evolution of social protection systems through COVID-19.

Ways Forward: A Roundtable to Discuss

As the pandemic and its consequences persist, new ways are needed to appraise the shifts in datafied social protection that the crisis has brought. Not surprisingly, my promise to return to the field by Easter 2020 could not be kept, and established ways of conducting research on global social protection needed reinvention. It is against this backdrop that a current initiative, launched in the context of the TILTing Perspectives Conference 2021, may make a substantial contribution to knowledge on the theme.

The initiative, a Roundtable on COVID-19 and Biometric ID: Implications for Social Protection, invites presentations on how social protection systems have transformed during the pandemic, with a focus on biometric social protection and the evolution of its roles and systems. Abstracts (150-200 words) are invited as submissions to the TILTing Perspectives Conference, with the objective of gathering presentations from diverse world regions and drawing conclusions together. Proposals to act as discussants – taking part in the roundtable and formulating questions for debate – are also invited through the system. At a time when established ways of doing fieldwork are no longer practicable, we want the roundtable to be an occasion to advance collective knowledge, deepening together our awareness of how social protection has changed in the first pandemic of the datafied society.

Submissions to the Roundtable are invited at: https://easychair.org/cfp/Tilting2021

DATACTIVE 2020 year-in-review

2020 has been an intense year in many respects. As you know, the DATACTIVE project was supposed to end in August 2020, but due to COVID-19 we negotiated a so-called no-cost extension with our funder, the European Research Council, extending the life span of the project until June 30th, 2021.

Over these months, we have kept busy despite the many uncertainties and logistical problems imposed by the pandemic. We would love to share the good news and our main accomplishments, together with our best wishes for the new year.

What are we most proud of? The first of the four DATACTIVE PhD students successfully defended his PhD in October 2020! The dissertation, entitled ‘Wired Norms: Inscription, resistance, and subversion in the governance of the Internet infrastructure’, can be found here [0]. Watch out for the other PhD candidates…

We gave many talks, mostly on Zoom! We also managed to host the workshop ‘Contentious Data: The Social Movement Society in the Age of Datafication’, organized by Davide Beraldo and Stefania on November 12-13, 2020, which contributes to the Special Issue of the same title, in preparation for the journal Social Movement Studies.

We completed the collection and analysis of over 250 interviews with civil society actors from all corners of the globe. Our developer Christo has finalized (and will soon release on GitHub) an open-source infrastructure for collaboratively analyzing and managing qualitative data outside the corporate environment of mainstream data analysis software, while safeguarding the privacy and safety of our informants. We are now busy making sense of all these beautiful ‘thick’ data and writing up articles and chapters.

Stefania has been particularly busy with the spin-off blog of the Big Data from the South Research Initiative, dedicated to exploring the first pandemic of the datafied society as seen from communities and individuals left at the margins of media coverage, public concern and government response. You can access the many contributions published since May in COVID-19 from the Margins [1]. We are happy to announce that the blog has resulted in an open-access multilingual book edited by Stefania, Emiliano Treré and Silvia Masiero for the Amsterdam-based Institute of Network Cultures. The book will be released in both digital and printed form in January 2021. Book your copy if you want to receive one!

Collectively, we published four articles and three book chapters, listed below. Four more articles (for New Media & Society, Globalizations, and Big Data & Society) will be released in early 2021, alongside three book chapters. A co-edited special issue on media innovation and social change was released in early 2020, while three co-edited special issues, respectively for the peer-reviewed international journals Internet Policy Review, Social Movement Studies and Palabra Clave, are in the works and will be released in the course of 2021.

Many people worked in the background alongside PI Stefania, in particular our tireless project manager Jeroen de Vos, our developer Christo, our PhD candidates Guillén Torres and Niels ten Oever, and our postdoc Davide Beraldo. To them goes our gratitude.

We wish you happy holidays and a peaceful and healthy 2021!

Best regards, Stefania for the DATACTIVE team

[0] https://nielstenoever.net/wp-content/uploads/2020/09/WiredNorms-NielstenOever.pdf
[1] https://data-activism.net/blog-covid-19-from-the-margins/

OUR PUBLICATIONS IN 2020

PhD DISSERTATION

ten Oever, N. (2020). Wired Norms: Inscription, resistance, and subversion in the governance of the Internet infrastructure. PhD thesis, University of Amsterdam.

ARTICLES

Milan, S., & Treré, E. (2020). The rise of the data poor: The COVID-19 pandemic seen from the margins. Social Media + Society, July. https://doi.org/10.1177/2056305120948233

Milan, S., & Barbosa, S. (2020). Enter the WhatsApper: Reinventing digital activism at the time of chat apps. First Monday, 25(1). https://doi.org/10.5210/fm.v25i12.10414

Tanczer, L. M., Deibert, R. J., Bigo, D., Franklin, M. I., Melgaço, L., Lyon, D., Kazansky, B., & Milan, S. (2020). Online Surveillance, Censorship, and Encryption in Academia. International Studies Perspectives, 21(1), 1–36. https://doi.org/10.1093/isp/ekz016

Milan, S. (2020). Techno-solutionism and the standard human in the making of the COVID-19 pandemic. Big Data & Society. https://doi.org/10.1177/2053951720966781

SPECIAL ISSUES

Ni Bhroin, N., & Milan, S. (Eds.). (2020). Special issue: Media Innovation and Social Change. Journal of Media Innovations, 6(1). https://journals.uio.no/TJMI

BOOK CHAPTERS

ten Oever, N., Milan, S., & Beraldo, D. (2020). Studying Discourse in Internet Governance through Mailing-list Analysis. In D. L. Cogburn, L. DeNardis, N. S. Levinson, & F. Musiani (Eds.), Research Methods in Internet Governance (pp. 213–229). MIT Press. https://doi.org/10.7551/mitpress/12400.003.0011

Milan, S. (2020a). Big Data. In B. Blaagaard, L. Pérez-González, & M. Baker (Eds.), Routledge Encyclopedia of Citizen Media (pp. 37–42). Routledge.

Milan, S., & Treré, E. (2020b). Una brecha de datos cada vez mayor: La Covid-19 y el Sur Global. In B. M. Bringel & G. Pleyers (Eds.), Alerta global. Políticas, movimientos sociales y futuros en disputa en tiempos de pandemia (pp. 95–100). CLACSO and ALAS. http://biblioteca.clacso.edu.ar/clacso/se/20200826014541/Alerta-global.pdf

OTHER

Milan, S., & Treré, E. (2020c, April 3). A widening data divide: COVID-19 and the Global South. OpenDemocracy. https://www.opendemocracy.net/en/openmovements/widening-data-divide-covid-19-and-global-south/

ten Oever, Niels. (2020). ‘Cybernetica, dataficatie en surveillance in de polder‘ in: Ni Dieu, Ni Maitre. Festschrift for Ruud Kaulingfrek. Waardenwerk, Journal for Humanistic Studies, SWP.

Di Salvo, P., & Milan, S. (2020, April 24). I quattro nemici (quasi) invisibili nella prima pandemia dell’era della società dei dati. Il Manifesto. https://ilmanifesto.it/i-quattro-nemici-quasi-invisibili-nella-prima-pandemia-dellera-della-societa-dei-dati/

Milan, S., & Di Salvo, P. (2020, June 8). Four invisible enemies in the first pandemic of a “datafied society.” Open Democracy. https://www.opendemocracy.net/en/can-europe-make-it/four-invisible-enemies-in-the-first-pandemic-of-a-datafied-society/

Milan, S., Pelizza, A., & Lausberg, Y. (2020, April 28). Making migrants visible to COVID-19 counting: The dilemma. OpenDemocracy. https://www.opendemocracy.net/en/can-europe-make-it/making-migrants-visible-covid-19-counting-dilemma/

Pelizza, A., Lausberg, Y., & Milan, S. (2020, May). Come e perché rendere visibili i migranti nei dati della pandemia. Internazionale. https://www.internazionale.it/opinione/annalisa-pelizza/2020/05/14/migranti-dati-pandemia

IN PRESS

BOOKS

Milan, S., Treré, E., & Masiero, S. (2021). COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society. Institute of Network Cultures.

ARTICLES

Kazansky B (2021). “It depends on your threat model”: Understanding the anticipatory dimensions of resistance to datafication harms. Big Data & Society.

Kazansky, B., & Milan, S. (2021). Bodies Not Templates: Contesting Mainstream Algorithmic Imaginaries. New Media & Society.

ten Oever, N. (2021). ‘This is not how we imagined it’ – Technological Affordances, Economic Drivers and the Internet Architecture Imaginary. New Media & Society.

ten Oever, N. (2021). Norm conflict in the governance of transnational and distributed infrastructures: the case of Internet routing. Globalizations.

CHAPTERS

Milan, S., & Treré, E. (2021). Big Data From the South(s): An Analytical Matrix to Investigate Data at the Margins. In D. Rohlinger & S. Sobieraj (Eds.), The Oxford Handbook of Sociology and Digital Media. Oxford University Press.

Milan, S., & Treré, E. (2021). Latin American visions for a Digital New Deal: Learning from critical ecology, liberation pedagogy and autonomous design. In IT for Change (Ed.), Digital New Deal. IT for Change.

ten Oever, N. (2021). ‘The metagovernance of internet governance’. In B. Haggart, N. Tusikov, & J. A. Scholte (Eds.), Power and Authority in Internet Governance: Return of the State? Routledge Global Cooperation Series.

SPECIAL ISSUES IN THE WORKS

Three special issues we are very excited about

Milan, S., Beraldo, D., & Flesher Fominaya, C. Contentious Data: The Social Movement Society in the Age of Datafication, Social Movement Studies

Treré, E., & Milan, S., Latin American Perspectives on Datafication and Artificial Intelligence, Palabra Clave

Burri, M., Irion, K., Milan, S., & Kolk, A. Governing European values inside data flows, Internet Policy Review

ALSO FROM THE TEAM….

Beraldo, D. (2020). Movements as multiplicities and contentious branding: lessons from the digital exploration of #Occupy and #Anonymous. Information, Communication & Society. https://doi.org/10.1080/1369118X.2020.1847164

Grover, G., & ten Oever, N. (2021). Guidelines for Human Rights Protocol and Architecture Considerations, RFC-series, Internet Research Taskforce.

Knodel, M., Uhlig, U., ten Oever, N., & Cath, C. (2020). How the Internet Really Works: An Illustrated Guide to Protocols, Privacy, Censorship, and Governance. No Starch Press, San Francisco, United States.

Milan, C., & Milan, S. (2020). Fighting Gentrification from the Boxing Ring: How Community Gyms reclaim the Right to the City. Social Movement Studies. https://doi.org/10.1080/14742837.2020.1839406.

BigDataSur 2020 year-in-review

by Stefania Milan, Emiliano Treré and Silvia Masiero

December 18, 2020

2020 has been a tough year for many reasons. The COVID-19 global health emergency has claimed lives, exposed our dependence on digital infrastructure, and impoverished many communities even further. We were forced to change plans, subvert our lifestyles, and distance ourselves from our loved ones. The first pandemic of the datafied society has exposed the weakness of people and communities at the margins. Not only has the Global South been severely hit, but gig workers, impoverished families, domestic violence survivors, and LGBTQ+, indigenous, migrant, racialized and rural communities have also paid an even higher price in terms of lowered income, loneliness, violence and death. If anything, this pandemic has made clear the need for an initiative like Big Data from the South, tasked with interrogating and exposing impending forms of inequality, injustice and poverty as they intersect with the datafied society.

Against this backdrop, Big Data Sur has not remained quiet. Our network has produced a number of critical, cutting-edge reflections on the main challenges of the pandemic. The thematic, multilingual blog ‘COVID-19 from the margins’, launched in May 2020, has given voice to the many fringes left in the dark by mainstream coverage of the pandemic. It has offered, and continues to offer, precious food for thought on the challenges of the pandemic for the disempowered.

To date, we have published contributions from over 80 authors, in five languages, reporting from some 25 countries. We covered all continents – from Indonesia to Mexico, from Peru to New Zealand, from Namibia to China to Spain. Among others, we ran a special on Brazil when the controversial president Jair Bolsonaro dismissed the pandemic as a mere ‘gripezinha’ (light flu). Lately, a group of astronomers contributed their experience of working with indigenous communities in rural areas of Brazil.

We worked in the shadows (we even designed the logo ourselves!), and we worked nights. We fundraised to be able to pay a small contributor fee to authors in need, and provided editorial support in several languages to empower less experienced writers to share their stories with a global audience. This was only possible thanks to the new team members who joined us. Silvia Masiero, Associate Professor of Information Systems at the University of Oslo, has joined Emiliano Treré and Stefania Milan in the editorial team. Nicolás Fuster, Guillén Torres, Zhen Ye, Jeroen de Vos, and Yiran Zhao provided key support in the background. Volunteer proof-readers like Sergio Barbosa (Portuguese) and Giulia Polettini (Chinese) helped us occasionally. To this splendid team goes our gratitude and appreciation: without their precious help, we would not have been able to publish in so many languages and with such high frequency.

Unfortunately, our project is chronically underfunded. But some enlightened organizations believed in the urgency of the BigDataSur agenda. In particular, the COVID-19 blog was supported by the Amsterdam School of Cultural Analysis at the University of Amsterdam (The Netherlands), the School of Journalism, Media and Culture at Cardiff University (UK), and the European Research Council via the DATACTIVE project: thank you!

Besides the blog, in 2020 BigDataSur’s work and values have also been featured in public talks and lectures, and in a sizable number of academic writings. An analytical matrix to study ‘data from the margins’ will soon appear as part of the Oxford Handbook of Sociology and Digital Media edited by Rohlinger and Sobieraj for Oxford University Press. A special issue of the multilingual journal Palabra Clave, exploring ‘Latin American perspectives on datafication and Artificial Intelligence’, will be released in August 2021. And more is in store, including plans for a course on ‘Decolonizing Datafication’ to be added to the teaching curriculum at the University of Amsterdam—for a start.

What’s next?

Due to lack of funding, the COVID-19 blog will progressively wind down. So hurry up and send us your posts if you want to join the conversation! 

But we also have great news in store for you: the blog has given birth to the multi-vocal book COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society. The book—proudly multilingual and rigorously open access—will be released in early January by the Amsterdam-based Institute of Network Cultures, as part of their edgy series ‘Theory on Demand’. As some of you know, the December release date had to be postponed because COVID-19 hit the publisher, too. We wish to extend our heartfelt thanks to our amazing copy-editor Andrew Schrock from Indelible Voice, who worked against the clock to deliver the final manuscript. Thanks to funding by two of the University of Amsterdam’s Research Priority Areas, namely ’Global Digital Culture’ and ‘Amsterdam Center for European Studies’, as well as the DATACTIVE project, we will print a sizable number of copies for free distribution. Let us know if we should reserve a copy for you! We can mail it anywhere.