Category: Big Data from the South

[BigDataSur-COVID] The Battle for the At-Risk Group: The Impact of Covid-19 on Elderly People and People with Disabilities in Digital Germany

People with disabilities and elderly people are not a homogeneous group – neither in how they experience datafication in a wealthy country such as Germany nor in how Covid-19 affects their lives. What unites them, however, is the old ambivalent struggle over classifications: Who counts as being at high risk? Who gets vaccinated? Who has to stay at home until fall?

 

by Ute Kalender

What is the situation of people with disabilities and elderly people in a country like Germany, which underwent an enormous push towards digitalization because of Corona over the last year?

If we believe new and old cyberfeminisms, then people with disabilities and elderly people should be surfing at the very top of the corona-induced wave of datafication. For Donna Haraway, for instance, people with disabilities were the quintessential cyborgs – the ideal subjects of a technological world. She suspected that, because of their intimate connections with prostheses and technologies, “[p]erhaps paraplegics and other severely handicapped people can (and sometimes do) have the most intense experiences of complex hybridization”. People with disabilities also appear frequently in the current texts of computer-friendly xenofeminism. Through the self-determined appropriation of digital technologies, they reject discrimination carried out in the name of a natural order.

It is not hard to guess that everyday technological life under Corona is more complicated than that. And if a certain unease is now spreading among those who fear the authenticity cudgel – good. I do not intend to invoke the real experiences of people with disabilities in a datafied corona world in order to expose digital-feminist notions of people with disabilities as idealized or ideological. Experience, as we know, is too sticky a thing to seriously rely on. No, I will stay with the synthetic: with the Instagram videos of crip activists on #ZeroCovid, a movement advocating a solidary, Europe-wide shutdown; with impressions from my parents’ lives; and with the projections that I, as a non-disabled woman, have about the people with disabilities I meet in the streets of Berlin.

Let us begin with my West German underclass parents, around 80 years old and in part severely handicapped. In 2020 they were forcibly digitalized. A large telecommunications company switched from analog to digital, cancelled their inexpensive, decades-old contract and made them sign a new, more expensive one. After the initial annoyance, I bought my parents a tablet. My father had to go to hospital at ever shorter intervals and, because of Corona, could not be visited there. Perhaps video calls would make his stays more bearable. My mother was soon eagerly sending messages via messenger services, complaining about my hairstyle in photos of me she found on the internet, and drawing a number of baffled acquaintances in distant cities into video calls. Whenever my father saw my face and my sister’s on the tablet, he was delighted, close to tears, and enthusiastically kissed the tablet’s surface – yet he never found his way into it and would probably never manage to start the device in hospital. The print too small, the interface too busy, the steps into the online room impossible to remember. Quite unlike friends of mine in Brazil: the same age, but digitally more competent. Still: before another hospital stay loomed, we managed to snag my father an early vaccination appointment at the end of February – despite the crashing appointment-booking servers in North Rhine-Westphalia, a federal state in the west of Germany.

Other people with disabilities and chronic illnesses will presumably have to wait for a vaccination until the end of summer. The younger student with spina bifida who does not live in a care home, just like the 55-year-old woman with lung cancer who, freshly operated on, now sits isolated at home. The German corona vaccination ordinance excludes them from early vaccination. The disability activist Raul Krauthausen sees in this one of the greatest misunderstandings in Germany – that the at-risk group consists exclusively of the very old and lives in care homes. There are, he says, hundreds of thousands of younger people with chronic illnesses. They employ assistants, have children, partners and friends. Since the beginning of the protective measures, all these people have fallen through the cracks: they receive no masks, no protective clothing and no rapid tests. They get no care bonuses for their assistants or caring relatives, and there are no vaccinations for them.

A withdrawal of solidarity from those sitting in the death trap of the care home, or from the over-80s, is far from the critics’ minds. Still, the statements show: the battle over the double-edged sword of risk classification – over inclusion in the at-risk group with high vaccination priority – has broken out. And the interventions also remind us that every act of classification carries a moral agenda. Classifications value some lives and blank out others. Classifications grant one group access to resources and deny it to another.

But the interventions are also led by those who have access to digital devices and infrastructures and who move skillfully through social media such as Instagram and Facebook. By the privileged disabled, the noble crips, as the disability activist Matthias Vernaldi, who sadly passed away last year, used to say. And not by those who, in a rich country like Germany, live in zones of the Global South. The interventions show that people with disabilities and elderly people are not a homogeneous group, not a passive one, and certainly not an always-suffering one. Still, I also ask myself what standpoint those people have with whom I speak far too rarely, usually not at all, but whom I encounter several times a day. Betty, the ranting princess of Kottbusser Tor, or Scream-Stubi, who lives in a tent by the S-Bahn ring in Neukölln. With them, it is impossible to say whether their disability, their mental matters, arose in the course of their life on the street, or whether, conversely, their disabilities led to a life on the street. Betty and Stubi have no mobile phone, nor are they currently being written to and invited to get vaccinated. They fall through the German data grid – and perhaps they want to fall through, not to be registered at all. And perhaps it is as an acquaintance with a disability, a professor of rehabilitation sciences, once said in one of our night-long discussions about accessible apps: the problem is not called digitalization, the problem is called poverty.

 

About the author

Dr. Ute Kalender is a cultural scientist based in Berlin. As a qualitative researcher, she works in a research project on intersexuality at Charité University Medicine and in Digitale Akademie Pflege 4.0, a BMBF-funded research project on the digitalisation of the care sector.

Stefania at the presentation of the book ‘Lives of Data. Essays on Computational Cultures from India’

On February 19th, 5pm Indian time (12.30 CET) Stefania will join the presentation of the book ‘Lives of Data. Essays on Computational Cultures from India’, edited by Sandeep Mertia and published by the Institute of Network Cultures (2020). The volume is open access and can be downloaded from this link.

Lives of Data is based on research projects and workshops at the Sarai programme of CSDS. The book brings together fifteen interdisciplinary scholars and practitioners to open up inquiries into computational cultures in India. Encompassing history, anthropology, science and technology studies (STS), media studies, civic technology, data science, digital humanities and journalism, the essays open up possibilities for a cross-disciplinary dialogue on data. Lives of Data is an open access publication from the Institute of Network Cultures Amsterdam in collaboration with the Sarai programme of the CSDS.

Sandeep Mertia is a PhD Candidate at the Department of Media, Culture, and Communication, and Urban Doctoral Fellow, New York City.

Jahnavi Phalkey is Founding Director of Science Gallery, Bengaluru.

Stefania Milan is Associate Professor of New Media, University of Amsterdam.

Nimmi Rangaswamy is Associate Professor at IIIT and Adjunct Professor at IIT, both at Hyderabad.

Ravi Sundaram is Professor at Centre for the Study of Developing Societies, Delhi.

The discussion will be held on Zoom

Link: http://bit.ly/3qjKnEo

Meeting ID: 991 2507 4788

Passcode: csdsdelhi

The full invite can be found here.

 

[BigDataSur-COVID19] How biometric surveillance is creeping into public transport

During the pandemic, essential workers have been the most vulnerable. This article discusses how the surveillance introduced to contain COVID-19 will most likely be normalized in the post-pandemic context.

by Laura Carrer and Riccardo Coluccini

 

COVID-19 has shown how essential workers, while fundamental to our societies, are constantly being exploited and marginalized. This is even more true if we consider how smart working has fundamentally changed our perception of public spaces: working from home is a privilege for few people, and the public space is something to be monitored. Many essential workers are still forced to commute to their workplaces using public transport, and tech companies are taking advantage of the pandemic to introduce anti-COVID solutions that further push for the datafication of our lives. We see the deployment of video surveillance systems enhanced by algorithms to monitor the distance between people on public transport, and software that can detect a person’s face and temperature and check if they are wearing a face mask. Forced to move through our public spaces, essential workers become guinea pigs for technological experiments that risk further normalizing biometric surveillance.

 

The COVID-19 pandemic has created a watershed in how we inhabit our public space: while some privileged segments of the world’s population have benefited from remote work, millions of people in healthcare, education, catering, logistics and production have not enjoyed the same privileges, often working without adequate personal protective equipment and continuing to commute, where possible, by public transport. Very often these categories of essential workers also belong to minorities and have therefore experienced the heavy toll of the COVID-19 pandemic twice over, paying a very high price.

If, on the one hand, there seems to have been a growing awareness of these essential workers – the only ones moving around and continuing to guarantee a certain degree of normality in our daily lives during the pandemic – on the other hand these people risk ending up at the center of a disturbing new technological experiment that could normalize the use of surveillance within our cities.

Public transport has become the testing ground for anti-COVID technological solutions based on video surveillance: from algorithms that monitor the distance between passengers on board to software capable of recognizing whether a person is wearing a mask.

Technological innovation seems to be driving the pandemic response all over the world, not only in the form of contact tracing apps, but also and above all by exploiting the already widespread video surveillance infrastructure. In Mexico City, the city’s video surveillance system was quickly repurposed to monitor mask use. In Moscow, the capillary network of cameras (more than 100,000) was used to monitor, in real time, citizens who had tested positive for the coronavirus and left their homes for various reasons. In Mexico, the first national facial recognition system (in the state of Coahuila), implemented in 2019, added thermal detection in April 2020, one month after the start of the pandemic. A pre-existing infrastructure inevitably makes it easier for the state to normalize the monitoring and control of citizens.

All of this often happens at the expense of a proper assessment of the risks to human rights, and it is expanding, with little transparency, onto public transport as well.

Last May, in Paris, cameras capable of monitoring the number of passengers and the actual use of masks were introduced on metro lines. The same technology was introduced in some open-air markets and on buses in the city of Cannes. Similar technologies have been introduced in India on long-distance buses and in some railway stations.

In January 2021, the transit system of the state of New Jersey announced the testing of a series of technologies to detect temperature, identify mask use, and deploy artificial intelligence algorithms to monitor the flow of people. In China, the transport company Shanghai Sunwin Bus has already introduced what it calls “Healthcare Buses” equipped with biometric technologies.

Companies in the sector have immediately exploited this opening to advertise their technologies – for example Hikvision, a global manufacturer of surveillance cameras. In Italy, RECO3.26, the company that supplies the facial recognition system used by the Italian forensic police, immediately took advantage of the situation by offering a suite of anti-COVID products: among them DPI Check, to verify the use of surgical masks by people within the video-surveilled area; Crowd Detection and People Counting, to monitor gatherings; plus functions for measuring, in real time, the safety distance between the people under video surveillance and for detecting body temperature. In Italy, some of these technologies were promptly purchased by the Milan public transport company ATM (Azienda Trasporti Milanesi). And it is not clear whether the Italian data protection authority was informed.

The use of these technologies, besides being invoked as the primary and most efficient solution to a far more complex and intricate emergency, is also problematic from another point of view. The US government agency National Institute of Standards and Technology (NIST) recently published a report analyzing the facial recognition software currently on the market, highlighting how its accuracy is very low, especially now that mask-wearing is mandatory in many countries around the world. A price that, given how biometric technology is used today, many people – especially those belonging to already widely discriminated groups – will be forced to pay dearly.

In today’s techno-solutionist and techno-optimist narrative, the surveillance of bodies to counter a fast-spreading virus may seem the only way out. In many cases, essential workers are already victims of surveillance in the workplace, as with the technologies developed by Amazon to monitor its warehouses, but this surveillance now risks expanding and taking over even more of our public spaces. The Commission nationale de l’informatique et des libertés (CNIL), the French data protection authority, has already pointed out that this technology “carries the risk of normalizing the feeling of surveillance among citizens, of creating a phenomenon of habituation and trivialization of intrusive technologies.” In the case of the city of Cannes, the CNIL’s intervention led to the shutdown of the mask-monitoring system.

The pan-European campaign Reclaim Your Face is trying to warn of the effects that control delegated to technology can have on our lives, and of how our public spaces risk being transformed into a dehumanizing place: the false perception of security and the chilling effect – the change in our behavior when we know we are being watched – are its most concrete examples. Does having cameras trained on us during every movement really make us feel safer? And when this assumption is regularly disproved by studies and news reports, what will be the next solution to deploy? And how will we then deal with the growing possibility of no longer being truly able to move freely in public space for fear of being judged? The gaze of algorithms strips away every form of humanity and reduces us to empty categories and digital data.

In this way, the people forced to leave home to go to work become guinea pigs for technological experiments – effectively normalizing surveillance. Public space is reduced to a laboratory, and all essential workers risk being transformed into lifeless digital data.

 

About the authors

Laura Carrer is head of FOI at Transparency International Italy and researcher at the Hermes Center for Transparency and Digital Human Rights. She is also a freelance journalist writing on facial recognition, digital rights and gender issues.

Riccardo Coluccini is one of the Vice Presidents of the Italian NGO Hermes Center for Transparency and Digital Human Rights. He is also a freelance journalist writing about hacking, surveillance and digital rights.

 

 

[BigDataSur-COVID] Consent Design Flaws in Aarogya Setu and The Health Stack

by Gyan Tripathi and Setu Bandh Upadhyay

“The use of a person’s body or space without his consent to obtain information about him invades an area of personal privacy essential to the maintenance of his human dignity,” observed the Canadian Supreme Court in the matter of Her Majesty, The Queen v. Brandon Roy Dyment, (1988) 2 SCR 417 (1988).

The Government of India released its digital contact tracing application “Aarogya Setu” (the app) on April 2, 2020, following a wave of similar digital contact tracing (DCT) applications worldwide. Some DCTs, like the one in Singapore, have been largely successful, while others, like Norway’s, had to be pulled following an assessment by the country’s data protection authority, which raised concerns that the application posed a disproportionate threat to user privacy – including by continuously uploading people’s location. Interestingly, Aarogya Setu not only continuously collects people’s location but also binds it to other Personally Identifiable Information (PII).

While India has more than 17 other similar apps at various state levels, Aarogya Setu is perhaps the most ambitious digital contact tracing tool in the world. However, the app has been at the center of heavy public backlash for posing a grave threat to the constitutionally guaranteed right to privacy.

According to the much-celebrated judgment in K. S. Puttaswamy v. Union of India (the judgment), any restriction on the fundamental right to privacy must pass a three-prong test: legality, which postulates the existence of a law; need, defined in terms of a legitimate state aim; and proportionality, which ensures a rational nexus between the objects and the means adopted to achieve them. Aarogya Setu fails on all three counts: it lacks any legislative backing; the objectives the state sought to achieve by deploying the application were unclear and kept shifting; and it collects a huge amount of Personally Identifiable Information (PII) while relying on a near-opaque team of researchers who ‘volunteered’ to build it, faulty technology, an absence of clear guidelines on usage and data storage, and no oversight by any data protection authority.

Following a slew of legal challenges and public outcry, the government released the Aarogya Setu Data Sharing and Storage Protocol (the protocol), intended to govern how the data collected by the app is shared between governments (Central and State), administrative bodies and medical institutions. However, the protocol continued to lack an effective mechanism to check its practical execution. Responses subsequently obtained under Right to Information queries revealed that the data management and sharing protocols envisaged in the document were never realized. Earlier, various activists and security experts had criticized the government for releasing an incomplete source code while claiming that it was making the application ‘open source’. In the case of Aarogya Setu, therefore, there was a systematic breakdown of established laws and of reasonable expectations of privacy.

While the judgment also speaks of granting citizens more practical ways to control their information – as does Section 11 of the proposed Personal Data Protection Bill, 2019, by way of specific consent – the very architecture of the application does not allow users to exercise control over their data. In the event that a person tests positive for the novel coronavirus, the application uploads not only their data but also the data of all those with whom they came into contact during the previous fourteen days.

The 9th Empowered Group for ‘Technology & Data Management’, constituted by the Union Government, opted for two-way communication between the application and the Ayushman Bharat dashboard – the umbrella scheme for healthcare in India – in order to do away with discrepancies and/or duplication in the data of individuals who had tested positive. This was revealed by the minutes of the meeting obtained under the Right to Information by the Internet Freedom Foundation. The minutes show that the data collected through Aarogya Setu was not only integrated with Ayushman Bharat but was also in communication with Aarogya Rekha, the geo-fencing surveillance system employed by governments to enforce quarantine measures and track those placed under mandatory quarantine, whether institutional or at home.

Fears of scope creep are already manifesting in the Aarogya Setu development team’s plans to integrate telemedicine, e-pharmacies, and home diagnostics into the app in a separate section called AarogyaSetu Mitr.

On 7 August, the National Digital Health Mission (NDHM) released its strategy document detailing the requirement of digitizing all medical registries and thereby creating a National Health Stack (the health stack), based on a June 2018 white paper by NITI Aayog, the policy think tank of the Government of India. The National Health Authority, the nodal agency for Ayushman Bharat, indicated that it would migrate all data collected by the Aarogya Setu application and integrate it with the health stack. Various media reports and occasional public statements have confirmed that the data collected by the Aarogya Setu app would be the starting point for the health stack.

Herein lies a grave concern: owing to the app’s faulty data collection mechanism, the lack of express consent for data sharing with the health stack, and the inherent flaws within the health stack itself, millions will be put at risk of algorithmic or systematic exclusion. There is a massive deficit in the competence and effort of public and private healthcare providers in India. Healthcare workers are often absent for much of their working time, and even when they are present, conditions such as the lack of proper equipment and facilities are a major obstacle. As algorithms and artificial intelligence systems become commonplace in the healthcare sector, on the pretext of being more cost-effective and accurate, equal weight should be given to the lack of records, the already stretched health infrastructure, outdated research, and overburdened medical institutions and personnel. The subsequent use of the data collected, and the use of automated tools for decision-making, might also exacerbate existing problems such as the underrepresentation of minorities, women, and non-cis males.

There is a lack of any specific legislation concerning the disclosure of medical records in India. However, under the regulations notified by the Indian Medical Council, every medical professional is obligated to maintain physician-patient confidentiality. But this obligation does not extend to other entities, third parties, and data processors responsible for processing patient data, either under the mandate of a state body or a body corporate.

At present, India has in force the outdated Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, but the rules fail to provide a comprehensive framework based on other internationally accepted practices. On matters of health information security, India currently has a draft Digital Information Security in Healthcare Act, which provides for the establishment of eHealth Authorities and Health Information Exchanges at the central as well as the state level.

Computational systems are mostly data-driven and ultimately rest on the brute force of complex statistical calculations. Since the technical architecture of the proposed National Health Stack is unknown at the moment, it further adds to the uncertainty about how the shared data will be used. This raises, as Prof. Hildebrandt points out, the question of to what extent such design should support legal requirements, thus contributing to interactions that fit the system of checks and balances typical of a society that demands that all of its human and institutional agents be “under the rule of law”. The issue of consent is inherent to the rule of law, as in the digital social contract it ensures the individual right to self-determination.

The need for informed consent overlaps with the ‘purpose limitation’ and ‘collection limitation’ principles, part of the core Fair Information Principles (FIPs) set out in the OECD Guidelines governing the protection of privacy and transborder flows of personal data, first published in 1980. The principles stipulate that “there should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject”, while ensuring that “the purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose”.

Privacy, like other abstract and subjective freedoms, cannot be reduced to the fulfillment of certain conditions, nor can it be given a delineated shape. However, we must endeavor to give users at least some level of control so that they can better understand and balance privacy considerations against countervailing interests.

 

About the authors

Gyan Tripathi is a student of law at Symbiosis International (Deemed University), Pune; and a Research Associate with Scriboard [Advocates and Legal Consultants]. He particularly loves to research the intersection of technology and laws and its impact on society. He tweets at @tripathi_gy.

Setu Bandh Upadhyay is a lawyer and policy analyst working on Technology Policy issues in the global south. Along with a law degree, he holds a graduate Public Policy degree from the Central European University. He has a diverse set of experiences working with different stakeholders in India, East Africa, and Europe. Currently, he is also serving as the Country Expert for India for the Varieties of Democracy (V-Dem project). He tweets at @setubupadhyay.

NOW OUT: COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society (free download)

We are thrilled to announce the publication of the collection “COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society”, edited by Stefania Milan, Emiliano Treré (Cardiff University) and Silvia Masiero (University of Oslo) for the Theory on Demand series of the Institute of Network Cultures!

The book explores pandemic invisibilities and datafied policies, but also forms of resistance and creativity of communities at the margins as they try to negotiate survival during the COVID-19 crisis. It features 75 authors writing in 5 languages in 282 pages that amplify the silenced voices of the first pandemic of the datafied society. In so doing, it seeks to de-center dominant ways of being and knowing while contributing a decolonial approach to the narration of the COVID-19 emergency. It brings researchers, activists, practitioners, and communities on the ground into dialogue to offer critical reflections in near-real time and in an accessible language, from indigenous groups in New Zealand to impoverished families in Spain, from data activists in South Africa to gig workers in India, from feminicidios in Mexico to North/South stereotypes in Europe, from astronomers in Brazil to questions of infrastructure in Russia and Github activism in China—and much more!

The book is **open access**. You can download the .pdf and .epub versions from this page.
While supplies last, we are also distributing printed copies for free (use the same link to order yours).

“COVID-19 from the Margins caringly and thoughtfully demonstrates why the multiplicity we call “the poor” is more than ever at the receiving end of the worst effects of globalized, patriarchal/colonial racist capitalism. But they are not passive victims, for their everyday forms of activism and re-existence, including their daily tweaking of the digital for purposes of community, care, and survival, has incredible insights about design and digital justice that this book takes to heart as we strive to undo the lethal effects of ‘the first pandemic of the datafied society’”, wrote Colombian anthropologist Arturo Escobar, author of ‘Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds’ (Duke UP, 2018), about the book.

A number of book launch events will follow in the coming weeks. Visit this website to stay tuned, or follow the project on Twitter (@BigDataSur).

We wish to thank a number of sponsors without whom this project and the blog where it all started would not have been possible. In order of appearance, the Amsterdam School of Cultural Analysis, the School of Journalism, Media and Culture at Cardiff University, the European Research Council, and the Research Priority Areas of the University of Amsterdam Global Digital Cultures and Amsterdam Center for European Studies. Finally, a big heartfelt thanks goes to Geert Lovink and his INC team, for believing in this project from the start and giving us the chance to experiment with multilingualism and knowledge sharing.

just out: ‘Latin American Visions for a Digital New Deal: Towards Buen Vivir with Data’

Stefania Milan and Emiliano Treré, co-founders of the Big Data from the South Research Initiative, have contributed a piece entitled ‘Latin American Visions for a Digital New Deal: Towards Buen Vivir with Data’ to the essay collection ‘A Digital New Deal: Visions of Justice in a Post-Covid World’, edited by the Just Net Coalition and IT for Change (India). Their piece is accompanied by a beautiful illustration by Mansi Thakkar.

Read the project description, and download the full collection as pdf from this link.

The Just Net Coalition and IT for Change invite you to explore and engage with our Digital New Deal essay series, a thoughtfully curated set of long reads authored by passionate and committed scholars, activists and visionaries from around the world. In these essays, authors reflect from various standpoints on the current global Covid moment and its challenges, and on how the digital fits into this equation. From activists steeped in long-standing battles against the corporate capture of our resources, pushing for food sovereignty, labor rights, climate justice and equitable development, to scholars pondering new questions of the internet, data, AI and the state of our public sphere, to practitioners seeking to address the disenfranchisement of countless communities and people from digital systems, the Digital New Deal captures current anxieties, challenges, hopes and visions for the future. Beyond calling out what ails the world, our authors set for themselves, in these poignant, informative, and radical pieces, the difficult challenge of outlining progressive solutions: to future-gaze, imagine new possibilities and reclaim the digital for justice.

 

 

[BigDataSur-COVID] Digital Social Protection during COVID-19: The Shifted Meaning of Data during the Pandemic

Silvia Masiero reflects on changes in digital social protection during the pandemic, outlines the implications of such changes for data justice, and co-proposes an initiative to discuss them.

by Silvia Masiero

One year ago today was my last time leaving a field site. Leaving friends and colleagues in India, promising to return as usual for the Easter break, it was hard to imagine being suddenly plunged into the world we live in today. As a researcher of social protection schemes, little did I know that my research universe – digital anti-poverty programmes across the Global South – would change as much as it has over the last 12 months. As I have recently stated in an open commentary, COVID-19 has had manifold implications for social protection systems, implications that require reflection as conditions of pandemic exceptionalism persist over time and across regions.

The Shifted Meaning of Beneficiary Data

My latest study was on a farmer subsidy programme based on the datafication of recipients – a term that indicates, from previous work, the conversion of human beneficiaries into machine-readable data. The programme epitomises the larger global trend of matching demographic and, increasingly, biometric credentials of individuals with data on eligibility for anti-poverty schemes, such as poverty status, family size and membership of protected groups. Seeding social protection databases with biometric details, a practice exemplified by India’s Aadhaar, is supposed to combat exclusion and inclusion errors alike, assigning benefits to all entitled subjects while excluding the non-entitled. At the same time, quantitative and qualitative research has shown the limits of datafication, especially its consequences in reinforcing the exclusion of entitled subjects whose ability to authenticate is reduced by failures in recognition, sometimes resulting in the denial of vital schemes.

During the pandemic, as numerous contributions to this blog have illustrated, existing vulnerabilities have become deeper and new ones have emerged, expanding the pool of people in need of social protection. Instances of the former are daily-wage and gig workers, who have seen their existing subalternities deepened in the pandemic through loss of income or severely heightened risks at work. Instances of new vulnerabilities, instead, are associated with the “new poor” of the pandemic, affected in many ways by the backlash of economic paralysis across the globe. The result is a heightened global need for social protection to work smoothly, making the affordance of inclusiveness – being able to cover the (old and new) needful – arguably take priority over that of exclusiveness, aimed at “curbing fraud” through secure biometric identification.

Since its launch in May 2020, this blog has hosted contributions on social protection schemes from countries including Colombia, Peru, India, Brazil and Spain, all highlighting the heightened need for social protection under COVID-19. While describing different world realities, all contributions remark how the vulnerabilities brought by COVID-19 call for means to combat wrongful exclusions, for example using excess stocks of commodities to expand scheme coverage. Against the backdrop of a world in which the priority was “curbing fraud” through the most up-to-date biometrics, the pandemic threw us into a world in which inclusion of the needful takes priority among the roles of anti-poverty scheme datafication. The first implication, for researchers of digital social protection, is the need to devise ways to learn from examples of expanded coverage in social protection, of which India’s National Food Security Act has offered an important example over the last decade.

Social Protection in the Pandemic: New Data Injustices

As the edited book “Data Justice and COVID-19: Global Perspectives” notes, the hybrid public-private architectures that emerged during COVID-19 have generated new forms of data injustice, detailed in the volume through 33 country cases. Along with important debates on the meaning of data in a post-pandemic world, the book opens the question of the data justice implications of COVID-19 for digital social protection. Drawing on contributions published in this blog, as well as reports of social protection initiatives taken during the pandemic, I have recently highlighted three forms of data injustice – legal, informational and design-related – that need monitoring as the pandemic scenario persists.

From a legal perspective, injustice is afforded by the subordination of entitlements to the registration of users in biometric databases, which becomes a condition for access – leading to scenarios of forced trading of data for entitlements, widely explored in the literature before COVID-19. The heightened need for social protection in the pandemic deepens the adverse implications of exclusion, exacerbating the consequences of injustice for those excluded from the biometric datasets. Stories from urban poor contexts ranging from Nebraska, US to São Paulo, Brazil, underscore the same point: while the legal data injustice of exclusion was problematic before, it has become only more problematic amid the economic backlash of the pandemic on the poor.

From an informational perspective, the way entitlements are determined – specifically, the use of citizens’ information across databases to determine entitlements – has become crucial during the pandemic. Two cases from this blog especially detail this point. In Colombia, information to determine eligibility for the Ingreso Solidario (Solidarity Income) program was combined from existing data repositories, but without detail on how the algorithm combined information and thus, on how eligibility was determined. In Peru, subsidies have leveraged information gathered through databases such as the Census, property registry and electricity consumption, again without further light on how information was combined. Uncertainty on eligibility criteria, beyond deepening pandemic distress, arguably limits beneficiaries’ ability to contest eligibility decisions, due to lack of clarity on the very grounds on which these are taken.
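The opacity described above can be made concrete with a toy sketch. The snippet below is entirely hypothetical (it is not the actual Colombian or Peruvian algorithm, and the field names and weights are invented for illustration): eligibility is computed by combining records from several databases with undisclosed weights, and the applicant only ever sees the final yes/no decision, which is why contesting it is so hard.

```python
# Hypothetical illustration of opaque cross-database eligibility scoring.
# The data sources mirror those mentioned for Peru (census, property
# registry, electricity consumption); weights and threshold are invented.

HIDDEN_WEIGHTS = {
    "census_poverty_index": 0.6,   # from the census database
    "property_value": -0.3,        # from the property registry
    "electricity_use_kwh": -0.1,   # from utility records (normalised)
}
THRESHOLD = 0.5  # never published

def eligible(record):
    """Combine the databases into a score; expose only the decision."""
    score = sum(HIDDEN_WEIGHTS[k] * record[k] for k in HIDDEN_WEIGHTS)
    return score >= THRESHOLD  # the applicant sees only True/False

applicant = {"census_poverty_index": 0.9,
             "property_value": 0.2,
             "electricity_use_kwh": 0.5}
print(eligible(applicant))  # the rejection arrives with no explanation
```

Because neither the weights nor the threshold are disclosed, an excluded applicant cannot reconstruct which input, or which database error, caused the rejection. That is the informational injustice at stake.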

Finally, design-related data injustices arise from the misalignment of datafied social protection schemes with the effective needs of beneficiaries. In the pandemic, the trade-off brought by biometric social protection – increased accuracy of identification at the cost of greater exclusion – has been pushed to its extreme consequences, as extreme are the implications of denial of subsidy for households left out of social protection schemes. This brings to light a trade-off that was already known to be problematic well before the pandemic started, and one further heightened by studies questioning the effective ability of biometrics to increase the accuracy of targeting. As a result, a third, design-related form of data injustice needs monitoring as we trace the evolution of social protection systems through COVID-19.

Ways Forward: A Roundtable to Discuss

As the pandemic and its consequences persist, new ways are needed to appraise the shifts in datafied social protection that the crisis has brought. Not surprisingly, my promise to return to the field for Easter 2020 could not be kept, and established ways of conducting research on global social protection needed reinvention. It is against this backdrop that a current initiative, launched in the context of the TILTing Perspectives Conference 2021, may make a substantial contribution to knowledge on the theme.

The initiative, a Roundtable on COVID-19 and Biometric ID: Implications for Social Protection, invites presentations on how social protection systems have transformed during the pandemic, with a focus on biometric social protection and the evolution of its roles and systems. Abstracts (150-200 words) are invited as submissions to the TILTing Perspectives Conference, with the objective of gathering presentations from diverse world regions and drawing conclusions together. Proposals for the role of discussant – taking part in the roundtable and formulating questions for debate – are also invited through the system. In an epoch where established ways of doing fieldwork are no longer practicable, we want the roundtable to be an occasion to advance collective knowledge, together deepening our awareness of how social protection has changed in the first pandemic of the datafied society.

Submissions to the Roundtable are invited at: https://easychair.org/cfp/Tilting2021

 

 

[BigDataSur-COVID] COVID-19 and the Stripping of Power from the Edges

By Niels ten Oever

At the start of the COVID-19 pandemic, people wondered whether the internet infrastructure would be capable of handling the increase in data traffic. When many people started working, streaming, and following the rapidly unfolding news on social media from home, many expected this would strain the internet infrastructure. Some European politicians were so concerned that they called on Netflix to lower the resolution of its video streams. Why did the internet infrastructure turn out to be able to cope with the increased demand? The answer is that the internet no longer works as most people think it does: an extra layer of control was added to it by Content Delivery Networks. This chapter discusses how pressure on the infrastructural margins of the internet is strengthening the center of the network, and examines how COVID-19 has exacerbated this trend.

In 2011, the Tunisian government started heavily censoring the internet in response to popular uprisings in the country. In response, many internet users engaged in what is commonly called a Distributed Denial of Service (DDoS) attack on the Tunisian government’s websites. In a DDoS attack, hundreds or even thousands of computers try to reach a website at the same time. This can lead to the website’s server, or the connection to the server, being overloaded, thus rendering the website unavailable to internet users. When a website suddenly becomes very popular, many users trying to connect at the same time can produce similar behavior: the traffic effectively renders the site or service unavailable. Eight of the Tunisian government’s websites were forced offline.

In response to the DDoS attacks, and to prevent down-time of servers due to their popularity, Content Delivery Networks (CDNs) came to be used ever more widely. CDNs are globally distributed proxy servers, often placed in data centers close to Internet eXchange Points (IXPs). While a user thinks they are connecting to a popular website far away, they are in fact connecting to a CDN server located near them. While you think you are streaming a video from a jurisdiction you consider safe, the video is more likely stored close to the network controlled by your Internet Service Provider (ISP) or your telecommunications operator.

When the internet was designed, its engineers adopted the end-to-end principle as a central motto. It is reflected in the mission statement of the Internet Engineering Task Force (IETF), the institution responsible for developing and standardizing the internet infrastructure:

The Internet isn’t value-neutral, and neither is the IETF. We want the Internet to be useful for communities that share our commitment to openness and fairness. We embrace technical concepts such as decentralized control, edge-user empowerment and sharing of resources, because those concepts resonate with the core values of the IETF community. These concepts have little to do with the technology that’s possible, and much to do with the technology that we choose to create (RFC3935).

When users connected to the internet during the COVID-19 pandemic, it may have seemed they were edge-users connecting to another endpoint over “dumb pipes”, leveraging the powers of decentralized control. The truth is quite the opposite. The internet infrastructure held up during the COVID-19 pandemic not because people were getting their content from the global internet, but because they were getting it from a data center near them. You may think this is actually a good thing, since it kept the internet from collapsing. Maybe. But CDNs are merely the latest cause and consequence of centralization on the internet. The difference between standalone CDNs and other large players such as Google and Facebook (which run their own CDNs) is that the former remain largely invisible. Some of you might have heard about Cloudflare, but what about Akamai, Fastly, and Limelight?
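This invisibility is not absolute: a CDN's presence can sometimes be guessed from the HTTP response headers it attaches, even when the particular server that answered remains opaque. The sketch below is an illustrative Python example, not a reliable detection method; the header names (CF-RAY, X-Served-By, X-Akamai-Transformed) are assumptions based on commonly documented CDN behaviour, and real deployments can vary or suppress them entirely.

```python
# Illustrative sketch: guess which CDN operator served an HTTP response
# by matching header signatures. The signatures are assumptions based on
# commonly observed headers; their absence proves nothing, which is
# itself part of the opacity discussed above.

CDN_SIGNATURES = {
    "cloudflare": lambda h: "cf-ray" in h or h.get("server", "") == "cloudflare",
    "fastly": lambda h: "cache" in h.get("x-served-by", ""),
    "akamai": lambda h: "x-akamai-transformed" in h,
}

def guess_cdn(headers):
    """Return the name of a matching CDN operator, or None."""
    # Header names are case-insensitive, so normalise keys first.
    normalised = {k.lower(): v for k, v in headers.items()}
    for name, matches in CDN_SIGNATURES.items():
        if matches(normalised):
            return name
    return None

# Simulated response headers (no network access needed):
print(guess_cdn({"Server": "cloudflare", "CF-RAY": "5f1a2b3c4d5e6f70-AMS"}))
```

Even when such a guess succeeds, it identifies at most the operator, never the specific edge server, its location, or the version of the content it chose to serve.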

In 2017, Cloudflare unilaterally removed the neo-nazi forum and website Daily Stormer from its services. In 2019, it similarly removed the imageboard 8chan after two shootings in the United States. The company cited the following reason for the removal: “In the case of the El Paso shooting, the suspected terrorist gunman appears to have been inspired by the forum website known as 8chan. Based on evidence we’ve seen, it appears that he posted a screed to the site immediately before beginning his terrifying attack on the El Paso Walmart killing 20 people”. The interesting point is that no one asked Cloudflare to do this; it removed the content of its own volition, without a clear process in place. Many critical internet scholars, such as Suzanne van Geuns, Corinne Cath, and Kate Klonick, have reported on this. While such decisions show the concrete impact these companies can have, it is perhaps even more telling that one hears very little about them.

CDNs are perhaps the part of the internet infrastructure whose companies benefited most during the COVID-19 pandemic, because traffic increased to the websites they provide services to. But what about the people who requested information from these websites? Technically, they were served by a different server than the one they thought they were connected to. They might have received something other than what they asked for, because CDNs allow for particularly fine-grained, geography-based adaptation of content. The CDN server that served a user in Senegal might hold different data than the one that served a user in Brisbane. And there is almost no way of knowing which particular CDN server served you, or of bypassing the CDN. In this way, the opacity of the internet infrastructure was exacerbated by the COVID-19 pandemic. In other words, the pandemic led to further black-boxing of the internet infrastructure, making it harder for users to understand how it works. While this might make the internet faster and more available, it does not make the internet more reliable. Arguably, it makes the internet a better tool for control, because it increases power asymmetries between users and transnational corporations.

In 2011, Tunisian internet users were able to use the internet infrastructure against their own government. In 2020, it is nearly impossible for users around the world to even know where the websites they are accessing are located, let alone take them down. The internet is no longer a bazaar. The COVID-19 pandemic helped fortify the industrial zone that the internet has now become, one that only allows users to connect at the outside, without a view of, or control over, the inside. The internet has become a smart network, with not-so-smart edges.

 

Niels ten Oever is a post-doctoral researcher at the University of Amsterdam (The Netherlands) and Texas A&M University (USA), associated also with the Centro de Tecnologia e Sociedade at the Fundação Getúlio Vargas, Brazil. His research focuses on how norms such as human rights get inscribed, resisted, and subverted in the Internet infrastructure through transnational governance. Previously, Niels has worked as Head of Digital for ARTICLE19 and served as programme coordinator for Free Press Unlimited. He holds a cum laude MA in Philosophy and a PhD in Media Studies from the University of Amsterdam.

 

[BigDataSur-COVID] Alternative Perspectives on Relationality, People and Technology During a Pandemic: Zenzeleni Networks in South Africa

By Nic Bidwell & Sol Luca de Tena

Many rural communities in Africa have characteristics that are neither represented by data about COVID-19, nor addressed by public health information designed to help people protect themselves. This is not to say that rural inhabitants are unaffected by information designed for different populations, and grassroots initiatives have been vital in countering its impacts. Here, we reflect on the role of community networks in customising information and services for rural inhabitants during the pandemic, and on how they reveal constructs embedded in data representation and aggregation. Community networks (CNs) are telecommunications initiatives that are installed, maintained, and operated by local inhabitants to meet their own communication needs. Rey-Moreno’s 2017 survey identified 37 community networks in 12 African countries, and with the success of four Annual African CN Summits, more are emerging every year. Our account focuses on Zenzeleni Networks in South Africa. We begin by introducing its response to COVID-19 and how it ensured health information suited local circumstances. We end by arguing that such examples of contextualisation reveal logics about personhood that are vital to tackling the disease, but that are not represented by the individualist models embedded in datafication.

Zenzeleni’s Response to COVID-19

Zenzeleni is a community-owned wireless internet service provider that has connected more than 13,000 people and 10 organisations to the internet in South Africa’s Eastern Cape province. The network is owned by amaXhosa inhabitants (including 40% women) and is run by two local cooperatives. A cooperative approach ensures internet access costs are up to 20 times lower than services offered by existing telecommunications operators, and expenditure is retained locally. The non-profit organisation Zenzeleni Networks NPC was established through the cooperatives, and provides vital connections with regulatory authorities and telecommunications expertise. Zenzeleni was seeded in Mankosi, a remote district of 12 villages, by PhD researchers at the University of the Western Cape in Cape Town, following prolonged collaborations on solar electricity and media-sharing technologies. Over the past eight years, the community network has evolved into a social innovation ecosystem in which rural communities own their telecommunication businesses. Like other community networks in the global south, Zenzeleni has created employment and developed technical skills in one of the most disadvantaged areas in South Africa.

As well as providing more affordable and higher quality network services than alternatives, Zenzeleni’s embeddedness directly links technology and media considerations to local life. As the COVID-19 lockdown ensued, inhabitants working, studying or seeking work in cities returned to their rural family homes. Zenzeleni played a vital role in providing continuity to residents’ urban lives, by adding network infrastructure to extend the community access points and ensuring free and open access to education websites, including all of the nation’s universities and further education colleges. Indeed, usage of the access points has tripled since the pandemic began.

Not only are health services difficult to access, but the local populations served by Zenzeleni are particularly vulnerable; they have a high incidence of HIV, tuberculosis, and child and maternal health issues. Thus, Zenzeleni sourced funding to connect the District Hospital. Just as importantly, however, from the pandemic’s onset the network started to address health information needs. Like other groups across Africa, Zenzeleni immediately recognised the mismatch between local circumstances and the health information issued by the WHO and South Africa’s national government. Not only was information initially unavailable in most of Africa’s 2,000 languages; even when advice was in a home language, it was ill-suited to many rural contexts. Recommending regular handwashing, for instance, is inappropriate for Mankosi’s inhabitants, who share a few unreliable taps in their villages because water is not supplied to households. Similarly, guidelines on shared transport are irrelevant when only one bus a day connects villages on a five-hour round trip to the nearest supermarket. Zenzeleni ensured free and open access to official health websites. Its understanding of the local context also launched projects that increased access to relevant information resources and raised awareness of health strategies that matched local circumstances.

My Mask Protects you, and Yours Protects me: Accounting for Personhood in the Datafied Society

Providing health information in home languages, suited to local constraints, is vital, but efficacy in managing a socially-spread disease requires integrating deeper insights about the nuances of local social practices and relations. For instance, people returning to villages from cities bring information of varying legitimacy, from sound recommendations to outright falsehoods. Locally, this information was interpreted through the assumption that information from cities is inherently more credible because cities are highly connected. The valorisation of information associated with electronic media has been discussed elsewhere in rural southern Africa. An implicit part of Zenzeleni’s role has been to foster critical approaches to disinformation by directing inhabitants to legitimate information and ensuring information was properly contextualised. At the same time, however, promoting information access must account for sharing practices: while internet hotspots safely offer socially-distanced access, many inhabitants group around tablets and phones.

Device-sharing practices in Mankosi are not merely about limited access to devices. They also involve a cultural construct of relationality. Devices like smartphones are embedded with the logic that personhood exists prior to interpersonal relationships (Bidwell, 2016). This individualist logic contrasts with the philosophy of Ubuntu, an isiXhosa word often translated as “I am because we are.” This collective logic assumes that neither community nor individual exists prior to the other, and that being human depends on the mutual and dynamic constitution of other humans. As Eze explains:

We create each other and need to sustain this otherness creation. And if we belong to each other, we participate in our creations: we are because you are, and since you are, definitely I am.

The importance of the construct of Ubuntu to effective contextualisation is illustrated by Zenzeleni’s local volunteers’ observations that community members assisted each other in putting on face-masks. Senses of mutual responsibility are straightforward in communities such as Mankosi. However, routinely performing responsibility involves physical help and, since none of the guidelines explicitly combine social distancing with putting on a mask, this represents an ambiguity.

The challenge of translating a guideline such as “wear it for me” reveals an important role for community networks in COVID-19 times, and in datafication more generally. Much like the assumption of a person putting on their masks themselves, prevalent models of data extraction, representation, and personalisation cultivate and amplify an individualist logic. Yet, as many commentators have suggested, the best protection we have against the virus is Ubuntu. Zenzeleni and other community networks around the world offer an alternative perspective on relationality, people, and technology.

 

Nicola Bidwell is an adjunct professor at the International University of Management, Namibia, and a researcher at University College Cork, Ireland. She has applied her expertise in community-based, action research for technology design in the Global South for the past 15 years, and catalysed thought about indigenous-led digital design and decoloniality. Nic is an associate editor of the journal AI & Society: Knowledge, Culture and Communication.

Sol Luca de Tena has over a decade of experience in strategic project management within technology development, capacity building, social impact, and policy, with a focus on utilising technologies to address environmental and social challenges. She is currently the acting CEO of Zenzeleni Networks Not for Profit company, supporting the operation and seeding of community networks in rural communities in South Africa. She also leads various projects that seek to address the digital divide through a human-centred approach, and collaborates with various working groups and forums on Community Networks in Africa and around the world.

BigDataSur 2020 year-in-review

by Stefania Milan, Emiliano Treré and Silvia Masiero

December 18, 2020

2020 has been a tough year for many reasons. The COVID-19 global health emergency has claimed lives, exposed our dependence on the digital infrastructure, and impoverished many communities even further. We were forced to change plans, subvert our lifestyles, distance ourselves from our loved ones. The first pandemic of the datafied society has exposed the weakness of people and communities at the margins. Not only has the Global South been severely hit, but gig workers, impoverished families, domestic violence survivors, and LGBTQ+, indigenous, migrant, racialized and rural communities have also paid an even higher price in terms of lowered income, loneliness, violence, death. If anything, this pandemic has made clear the need for an initiative like Big Data from the South, tasked with interrogating and exposing impending forms of inequality, injustice and poverty as they intersect with the datafied society.

Against this backdrop, Big Data Sur has not remained quiet. Our network has produced a number of critical cutting-edge reflections on the main challenges of the pandemic. The thematic, multilingual blog ‘COVID-19 from the margins’, launched in May 2020, has given voice to the many fringes left in the dark by the mainstream coverage of the pandemic. It has offered, and continues to offer, precious food for thought on the challenges of the pandemic for the disempowered.

To date, we have published contributions from over 80 authors, in five languages, reporting from some 25 countries. We have covered all continents – from Indonesia to Mexico, from Peru to New Zealand, from Namibia to China to Spain. Among others, we ran a special on Brazil when the controversial president Jair Bolsonaro dismissed the pandemic as a mere ‘gripezinha’ (light flu). Lately, a group of astronomers contributed their experience of working with indigenous communities in rural areas of Brazil.

We worked in the shadows (we even designed the logo ourselves!), and we worked nights. We fundraised to be able to pay a small contributor fee to authors in need, and provided editorial support in several languages to empower less experienced writers to share their stories with a global audience. This was only possible thanks to the new team members who joined us. Silvia Masiero, Associate Professor of Information Systems at the University of Oslo, has joined Emiliano Treré and Stefania Milan in the editorial team. Nicolás Fuster, Guillén Torres, Zhen Ye, Jeroen de Vos, and Yiran Zhao provided key support in the background. Volunteer proof-readers like Sergio Barbosa (Portuguese) and Giulia Polettini (Chinese) helped us occasionally. To this splendid team goes our gratitude and appreciation: without their precious help, we would not have been able to publish in so many languages and with such high frequency.

Unfortunately, our project is chronically underfunded. But some illuminated organizations believed in the urgency of the BigDataSur agenda. In particular, the COVID-19 blog was supported by the Amsterdam School of Cultural Analysis at the University of Amsterdam (The Netherlands) and School of Journalism, Media and Culture at Cardiff University (UK), and by the European Research Council via the DATACTIVE project: thank you!

Beyond the blog, in 2020 BigDataSur’s work and values have also been featured in public talks and lectures, and in a sizable number of academic writings. An analytical matrix to study ‘data from the margins’ will soon appear as part of the Oxford Handbook of Sociology and Digital Media, edited by Rohlinger and Sobieraj for Oxford University Press. A special issue of the multilingual journal Palabra Clave, exploring ‘Latin American perspectives on datafication and Artificial Intelligence’, will be released in August 2021. And more is in store, including plans for a course on ‘Decolonizing Datafication’ to be added to the teaching curriculum at the University of Amsterdam, for a start.

What’s next?

Due to lack of funding, the COVID-19 blog will progressively wind down. So hurry up and send us your posts if you want to join the conversation! 

But we also have great news in store for you: the blog has given birth to the multi-vocal book COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society. The book—proudly multilingual and rigorously open access—will be released in early January by the Amsterdam-based Institute of Network Cultures, as part of their edgy series ‘Theory on Demand’. As some of you know, the December release date had to be postponed because COVID-19 hit the publisher, too. We wish to extend our heartfelt thanks to our amazing copy-editor Andrew Schrock from Indelible Voice, who worked against the clock to deliver the final manuscript. Thanks to funding by two of the University of Amsterdam’s Research Priority Areas, namely ‘Global Digital Cultures’ and ‘Amsterdam Center for European Studies’, as well as the DATACTIVE project, we will print a sizable number of copies for free distribution. Let us know if we should reserve a copy for you! We can mail it anywhere.