Category: COVID-19 from the margins

[BigDataSur-COVID] The Battle for the At-Risk Group: The Impact of Covid-19 on Elderly People and People with Disabilities in Digital Germany

People with disabilities and elderly people are not a homogeneous group – neither in how they experience datafication in a wealthy country such as Germany nor in how Covid-19 affects their lives. What unites them, however, is the old, ambivalent struggle over classifications: Who counts as high-risk? Who gets vaccinated? Who has to stay at home until fall?

 

by Ute Kalender

What is the situation of people with disabilities and elderly people in a country like Germany, which over the past year has undergone an enormous digitalisation push because of Corona?

If we believe new and old cyberfeminisms, people with disabilities and elderly people ought to be surfing at the very top of the corona-induced wave of datafication. For Donna Haraway, for instance, people with disabilities were the quintessential cyborgs – ideal subjects of a technological world. Because of their intimate connections with prostheses and technologies, she suspected that “[p]erhaps paraplegics and other severely handicapped people can (and sometimes do) have the most intense experiences of complex hybridization”. People with disabilities also appear frequently in the current texts of computer-friendly xenofeminism, where, through the self-determined appropriation of digital technologies, they reject discrimination carried out in the name of a natural order.

That everyday technological life under Corona is more complicated is not hard to guess. If unease is now spreading among those who fear the authenticity cudgel – good. I do not intend to invoke the real experiences of people with disabilities in a datafied corona world in order to expose digital-feminist notions of people with disabilities as idealised or ideological. Experience, as we know, is too sticky a thing to be seriously relied upon. No, I will stay with the synthetic: with the Instagram videos of crip activists on #ZeroCovid, a movement campaigning for a solidarity-based Europe-wide shutdown; with impressions from my parents' lives; and with the projections that I, as a non-disabled woman, cast onto the people with disabilities I meet in the streets of Berlin.

Let us begin with my West German lower-class parents, around 80 years old and in part severely handicapped. In 2020 they were forcibly digitalised. A large telecommunications company switched from analogue to digital, cancelled their inexpensive, decades-old contract and made them sign a new, more expensive one. After the initial annoyance, I bought my parents a tablet. My father had to go to hospital at ever shorter intervals and, because of Corona, could not be visited there. Perhaps video calls would make his stays more bearable. My mother was soon eagerly sending messages via messenger services, objecting to my hairstyle in photos of me she found on the internet, and drawing a number of baffled acquaintances in distant cities into video calls. Whenever my father saw my face and my sister's on the tablet, he was delighted, close to tears, and enthusiastically kissed the tablet's surface; yet he never found his way in and would probably never manage to start the device in hospital. The type too small, the interface too busy, the steps into the online room impossible to remember. Quite unlike friends of mine in Brazil: the same age, but digitally more competent. Still, before the next hospital stay loomed, we managed to secure an early vaccination appointment for my father at the end of February – despite the collapsing appointment-booking servers in North Rhine-Westphalia, a federal state in the west of Germany.

Other people with disabilities and chronic illnesses will presumably have to wait until the end of summer for a vaccination: the young student with spina bifida who does not live in a care home, as well as the 55-year-old woman with lung cancer who, freshly operated on, now sits isolated at home. The German corona vaccination ordinance excludes them from early vaccination. The disability activist Raul Krauthausen sees in this one of the greatest misunderstandings in Germany – the idea that the at-risk group consists exclusively of the very old and lives in care homes. There are hundreds of thousands of younger people with chronic illnesses. They employ assistants, have children, partners and friends. All these people have fallen through the cracks since the protective measures began: they receive no masks, no protective clothing and no rapid tests. They receive no care bonuses for their assistants or caring relatives, and for these people there are no vaccinations.

The critics are far from withdrawing solidarity from those trapped in the death trap of the care home, or from the over-80s. Still, the statements show that the battle over the double-edged sword of risk classification – over inclusion in the at-risk group with high vaccination priority – has broken out. And the interventions also remind us that every act of classification carries a moral agenda. Classifications value some lives and blank out others. Classifications grant one group access to resources and deny it to another.

Yet these interventions are also led by those who have access to digital devices and infrastructures and who move skilfully through social media such as Instagram and Facebook. By the privileged disabled, the noble crips, as the disability activist Matthias Vernaldi, who sadly died last year, used to say. And not by those who, in a rich country like Germany, live in zones of the Global South. The interventions show that people with disabilities and elderly people are not a homogeneous, not a passive, and certainly not an always-suffering group. Still, I also wonder about the standpoint of those I speak with far too rarely, usually not at all, but whom I encounter several times a day. Betty, the rabble princess of Kottbusser Tor, or Scream-Stubi, who lives in a tent by the S-Bahn ring in Neukölln. With them it is impossible to say whether their disability, their mental matters, arose in the course of their life on the street, or whether, conversely, their disabilities led to a life on the street. Betty and Stubi have no mobile phone, nor are they currently being written to and invited for vaccination. They fall through the German data grid – and perhaps they want to fall through, not to be registered at all. And perhaps it is as a disabled acquaintance, a professor of rehabilitation sciences, once said in one of our night-long discussions about accessible apps: the problem is not digitalisation, the problem is poverty.

 

About the author

Dr. Ute Kalender is a cultural scientist from Berlin. As a qualitative researcher, she works in a research project on intersexuality at Charité University Medicine and in Digitale Akademie Pflege 4.0, a BMBF-funded project on the digitalisation of the care sector.


[BigDataSur-COVID19] How Biometric Surveillance Is Creeping into Public Transport

During the pandemic, essential workers have been the most vulnerable. This article discusses how the surveillance introduced to contain COVID-19 is very likely to be normalised in the post-pandemic context.

by Laura Carrer and Riccardo Coluccini

 

COVID-19 has shown how essential workers, while fundamental to our societies, are constantly exploited and marginalized. This is all the more true if we consider how smart working has fundamentally changed our perception of public space: working from home is a privilege for the few, and public space is something to be monitored. Many essential workers are still forced to commute to their workplaces on public transport, and tech companies are taking advantage of the pandemic to introduce anti-COVID solutions that push the datafication of our lives even further. We see the deployment of video surveillance systems enhanced by algorithms that monitor the distance between people on public transport, and software that can detect a person’s face and temperature and check whether they are wearing a face mask. Forced to move through public space, essential workers become guinea pigs for technological experiments that risk further normalizing biometric surveillance.

 

The COVID-19 pandemic has created a watershed in how we inhabit our public space: while some privileged segments of the world's population have benefited from remote work, millions of people in healthcare, education, food service, logistics and manufacturing have not had the same privileges, often working without adequate personal protective equipment and continuing to commute to work, where possible, on public transport. Very often these categories of essential workers also belong to minorities, and have therefore borne the heavy toll of the COVID-19 pandemic twice over, paying a very high price.

While on the one hand there seems to have been a new awareness of these essential workers – the only ones still moving about and guaranteeing a certain degree of normality in our daily lives during the pandemic – on the other hand these people risk ending up at the centre of a disturbing new technological experiment that could normalise the use of surveillance within our cities.

Public transport has become the testing ground for anti-COVID technological solutions based on video surveillance: from algorithms that monitor the distance between passengers on board, to software able to recognise whether or not a person is wearing a face mask.

Technological innovation seems to be driving the pandemic response worldwide, not only in the form of contact tracing apps, but also and above all by exploiting the video surveillance infrastructure already widely in place. In Mexico City, the city's video surveillance system was quickly repurposed to monitor mask use. In Moscow, the capillary network of more than 100,000 cameras was used to monitor, in real time, citizens who had tested positive for the coronavirus and left home for various reasons. In Mexico, the country's first national facial recognition system (in the state of Coahuila), deployed in 2019, added thermal detection in April 2020, one month after the start of the pandemic. A pre-existing infrastructure inevitably makes the normalisation and control of citizens by the state much easier.

All of this often happens at the expense of a proper assessment of the risks to human rights, and it is expanding, with little transparency, onto public transport as well.

Last May, in Paris, cameras capable of monitoring the number of passengers and the actual use of face masks were introduced on metro lines. The same technology was introduced in some open-air markets and on buses in the city of Cannes. Similar technologies have been introduced in India on long-distance buses and in some railway stations.

In January 2021, the New Jersey transit system announced the testing of a series of technologies to detect temperature, identify mask use, and deploy artificial intelligence algorithms to monitor the flow of people. In China, the transport company Shanghai Sunwin Bus has already introduced what it calls "Healthcare Buses", equipped with biometric technologies.

Companies in the sector immediately seized this opening to advertise their technologies – for example Hikvision, a global manufacturer of surveillance cameras. In Italy, RECO3.26, the company that supplies the facial recognition system used by the Italian forensic police, immediately took advantage of the situation by offering a suite of anti-COVID products: among them DPI Check, which verifies the use of surgical masks by people within the surveilled area; Crowd Detection and People Counting, which monitor gatherings; as well as functions for measuring, in real time, the safety distance between surveilled people and for detecting body temperature. In Italy, some of these technologies were immediately purchased by the Milan public transport company ATM, and it is not clear whether the Italian data protection authority was informed.

The use of these technologies, besides being invoked as the primary and most efficient solution to a far more complex and intricate emergency, is problematic from another point of view as well. The US government agency National Institute of Standards and Technology (NIST) recently published a report analysing the facial recognition software currently on the market, highlighting how its accuracy is very low, especially now that mask-wearing is mandatory in many countries around the world. A price that, given how biometric technology is used today, many people – above all those belonging to already widely discriminated-against groups – will be forced to pay dearly.

In today's techno-solutionist and techno-optimist narrative, surveilling bodies to counter a fast-spreading virus can seem like the only way out. In many cases essential workers are already subject to surveillance in the workplace, as with the technologies developed by Amazon to monitor its warehouses, but this surveillance now risks expanding and taking further possession of our public spaces. The Commission nationale de l’informatique et des libertés (CNIL), the French data protection authority, has already pointed out that this technology "carries the risk of normalising the feeling of surveillance among citizens, of creating a phenomenon of habituation and trivialisation of intrusive technologies." In the case of Cannes, the CNIL's intervention led to the shutdown of the mask-monitoring system.

The pan-European campaign Reclaim Your Face is trying to warn of the effects that control delegated to technology can have on our lives, and of how our public spaces risk being transformed into dehumanising places: the false perception of security and the chilling effect – the change in our behaviour when we know we are being watched – are its most concrete examples. Does having cameras trained on us at every move really make us feel safer? And when this assumption is repeatedly disproved by studies and news reports, what will the next solution be? And how will we deal with the growing possibility of no longer truly being able to move freely in public space for fear of being judged? The gaze of algorithms strips away every form of humanity and reduces us to empty categories and digital data.

In this way, the people forced to leave home to go to work become guinea pigs for technological experiments, de facto normalising surveillance. Public space is reduced to a laboratory, and all essential workers risk being transformed into lifeless digital data.

 

About the authors

Laura Carrer is head of FOI at Transparency International Italy and researcher at the Hermes Center for Transparency and Digital Human Rights. She is also a freelance journalist writing on facial recognition, digital rights and gender issues.

Riccardo Coluccini is one of the Vice Presidents of the Italian NGO Hermes Center for Transparency and Digital Human Rights. He is also a freelance journalist writing about hacking, surveillance and digital rights.

 

 

[BigDataSur-COVID] Consent Design Flaws in Aarogya Setu and The Health Stack

by Gyan Tripathi and Setu Bandh Upadhyay

“The use of a person’s body or space without his consent to obtain information about him invades an area of personal privacy essential to the maintenance of his human dignity,” observed the Canadian Supreme Court in R. v. Dyment, [1988] 2 SCR 417.

The Government of India released its digital contact tracing application “Aarogya Setu” (the app) on April 2, 2020, amid a worldwide wave of similar digital contact tracing (DCT) applications. Some DCTs, like Singapore's, have been largely successful, while others, like Norway's, had to be pulled after an assessment by the country's data protection authority found that the application posed a disproportionate threat to user privacy, including by continuously uploading people's location. Interestingly, Aarogya Setu not only continuously collects people's location, but also binds it to other Personally Identifiable Information (PII).

While India has more than 17 other similar apps at various state levels, Aarogya Setu is perhaps the most ambitious digital contact tracing tool in the world. The app, however, has been at the center of heavy public backlash for posing a grave threat to the constitutionally guaranteed right to privacy.

According to the much-celebrated judgment in K. S. Puttaswamy v. Union of India (the judgment), any restriction on the fundamental right to privacy must pass a three-prong test: legality, which postulates the existence of a law; need, defined in terms of a legitimate state aim; and proportionality, which ensures a rational nexus between the objects and the means adopted to achieve them. Aarogya Setu fails on all three counts: it lacks any legislative backing; the objectives the state sought to achieve through the application were unclear and kept shifting; and it is disproportionate given the huge amount of Personally Identifiable Information (PII) it collects, the near-opaque team of researchers that ‘volunteered’ to build it, the faulty technology used, the absence of clear guidelines on usage and data storage, and the lack of any data protection authority oversight.

Following a slew of legal challenges and public outcry, the government released the Aarogya Setu Data Sharing and Storage Protocol (the protocol), intended to govern the sharing of the data collected by the app between governments (Central and State), administrative bodies and medical institutions. The protocol, however, continued to lack an effective mechanism for checking its practicality and execution. Responses subsequently obtained under Right to Information queries revealed that the data management and sharing arrangements envisaged in the document were never realized. Earlier, various activists and security experts had criticized the government for releasing an incomplete source code while claiming that it was making the application ‘open-source’. In the case of Aarogya Setu, therefore, there was a systematic breakdown of established laws and of reasonable expectations of privacy.

While the judgment also speaks of granting citizens more practical control over their information – as does Section 11 of the proposed Personal Data Protection Bill, 2019, by way of specific consent – the very architecture of the application does not allow users to exercise control over their data. In the event that a person tests positive for the novel coronavirus, the application uploads not only their data but also the data of everyone they came into contact with over the previous fourteen days.
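The upload behaviour described here can be sketched in a few lines of Python. This is a purely hypothetical illustration of the mechanism the paragraph describes – Aarogya Setu's actual code and data model are not public, and every name below (the `Encounter` record, the 14-day window, the payload shape) is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Tuple

WINDOW = timedelta(days=14)  # the fourteen-day interaction window described above

@dataclass
class Encounter:
    contact_id: str       # identifier of the other person's device
    timestamp: datetime
    location: Tuple[float, float]  # the app also binds location to PII

@dataclass
class ContactLog:
    encounters: List[Encounter] = field(default_factory=list)

    def record(self, enc: Encounter) -> None:
        self.encounters.append(enc)

    def recent(self, now: datetime) -> List[Encounter]:
        # keep only encounters from the previous fourteen days
        return [e for e in self.encounters if now - e.timestamp <= WINDOW]

def on_positive_test(user_id: str, log: ContactLog, now: datetime) -> dict:
    # On a positive test, the app uploads the user's own data *and*
    # identifiers of everyone encountered within the window.
    return {
        "user": user_id,
        "contacts": [e.contact_id for e in log.recent(now)],
    }
```

The architectural point the sketch makes is that consent is exercised, if at all, only by the person who tests positive; the contacts whose identifiers and locations are swept into the payload are never asked.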

The 9th Empowered Group, constituted by the Union Government for ‘Technology & Data Management’, opted for two-way communication between the application and the dashboard of Ayushman Bharat, the umbrella scheme for healthcare in India, in order to do away with discrepancies and/or duplication of data on individuals who had tested positive. This was revealed by the minutes of a meeting obtained under the Right to Information by the Internet Freedom Foundation. The minutes show that the data collected through Aarogya Setu was not only integrated with Ayushman Bharat but also in communication with Aarogya Rekha, the geo-fencing surveillance system employed by governments to enforce quarantine measures and track those placed under mandatory quarantine, whether institutional or at home.

Fears of scope creep are already materialising in the Aarogya Setu development team's plans to integrate telemedicine, e-pharmacies, and home diagnostics into the app in a separate section called AarogyaSetu Mitr.

On 7 August, the National Digital Health Mission (NDHM) released its strategy document detailing the requirement of digitizing all medical registries and thereby creating a National Health Stack (the health stack), based on a June 2018 white paper by NITI Aayog, the policy think tank of the Government of India. The National Health Authority, the nodal agency for Ayushman Bharat, indicated that it would migrate all data collected by the Aarogya Setu application and integrate it with the health stack. Various media reports and occasional public statements have confirmed that the data collected by the app would be the starting point for the health stack.

Herein lies a grave concern: owing to the app's faulty data collection mechanism, the lack of express consent for data sharing with the health stack, and flaws inherent in the health stack itself, millions will be put at risk of algorithmic or systemic exclusion. There is a massive deficit in the competence and effort of public and private healthcare providers in India. Healthcare workers are often absent for much of their working hours, and even when present, allied conditions such as the lack of proper equipment and facilities pose major obstacles. As algorithms and artificial intelligence systems become commonplace in the healthcare sector, on the pretext of being more cost-effective and accurate, equal attention should be paid to missing records, already stretched health infrastructure, outdated research, and overburdened medical institutions and personnel. The subsequent use of the data collected, and the use of automated tools for decision-making, may also exacerbate existing problems such as the underrepresentation of minorities, women, and non-cis males.

There is no specific legislation concerning the disclosure of medical records in India. However, under the regulations notified by the Indian Medical Council, every medical professional is obligated to maintain physician-patient confidentiality. This obligation does not, however, extend to other entities, third parties, and data processors responsible for processing patient data, whether under the mandate of a state body or a body corporate.

Presently, India has the outdated Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 in force, but these rules fail to provide a comprehensive framework in line with internationally accepted practices. On matters of health information security, India currently has a draft Digital Information Security in Healthcare Act, which provides for the establishment of eHealth Authorities and Health Information Exchanges at the central as well as state level.

Computational systems are mostly data-driven and ultimately rest on the brute force of complex statistical calculations. Since the technical architecture of the proposed National Health Stack is unknown at the moment, it adds further uncertainty about how the shared data would be used. This raises, as Prof. Hildebrandt points out, the question of to what extent such design should support legal requirements, thus contributing to interactions that fit the system of checks and balances typical of a society that demands that all of its human and institutional agents be “under the rule of law”. The issue of consent is inherent to the rule of law: in the digital social contract, it secures the individual right to self-determination.

The need for informed consent overlaps with the ‘purpose limitation’ and ‘collection limitation’ principles, part of the core Fair Information Principles (FIPs) in the OECD Guidelines governing the protection of privacy and transborder flows of personal data, first published in 1980. The principles stipulate that “there should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject”, while ensuring that “the purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose”.

Privacy, like other abstract and subjective freedoms, cannot be reduced to the fulfillment of certain conditions, nor can it be given a delineated shape. We must nevertheless endeavor to give users at least some level of control, so that they can better understand and balance privacy considerations against countervailing interests.

 

About the authors

Gyan Tripathi is a student of law at Symbiosis International (Deemed University), Pune; and a Research Associate with Scriboard [Advocates and Legal Consultants]. He particularly loves to research the intersection of technology and laws and its impact on society. He tweets at @tripathi_gy.

Setu Bandh Upadhyay is a lawyer and policy analyst working on Technology Policy issues in the global south. Along with a law degree, he holds a graduate Public Policy degree from the Central European University. He has a diverse set of experiences working with different stakeholders in India, East Africa, and Europe. Currently, he is also serving as the Country Expert for India for the Varieties of Democracy (V-Dem project). He tweets at @setubupadhyay.

NOW OUT: COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society (free download)

We are thrilled to announce the publication of the collection “COVID-19 from the Margins: Pandemic Invisibilities, Policies and Resistance in the Datafied Society”, edited by Stefania Milan, Emiliano Treré (Cardiff University) and Silvia Masiero (University of Oslo) for the Theory on Demand series of the Institute of Network Cultures!

The book explores pandemic invisibilities and datafied policies, but also forms of resistance and creativity of communities at the margins as they try to negotiate survival during the COVID-19 crisis. It features 75 authors writing in 5 languages in 282 pages that amplify the silenced voices of the first pandemic of the datafied society. In so doing, it seeks to de-center dominant ways of being and knowing while contributing a decolonial approach to the narration of the COVID-19 emergency. It brings researchers, activists, practitioners, and communities on the ground into dialogue to offer critical reflections in near-real time and in an accessible language, from indigenous groups in New Zealand to impoverished families in Spain, from data activists in South Africa to gig workers in India, from feminicidios in Mexico to North/South stereotypes in Europe, from astronomers in Brazil to questions of infrastructure in Russia and Github activism in China—and much more!

The book is **open access**. You can download the .pdf and .epub versions from this page.
While supplies last, we are also distributing printed copies for free (use the same link to order yours).

“COVID-19 from the Margins caringly and thoughtfully demonstrates why the multiplicity we call “the poor” is more than ever at the receiving end of the worst effects of globalized, patriarchal/colonial racist capitalism. But they are not passive victims, for their everyday forms of activism and re-existence, including their daily tweaking of the digital for purposes of community, care, and survival, has incredible insights about design and digital justice that this book takes to heart as we strive to undo the lethal effects of ‘the first pandemic of the datafied society’,” wrote Colombian anthropologist Arturo Escobar, author of ‘Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds’ (Duke UP, 2018).

A number of book launch events will follow in the coming weeks. Visit this website to stay tuned, or follow the project on Twitter (@BigDataSur).

We wish to thank a number of sponsors without whom this project and the blog where it all started would not have been possible. In order of appearance, the Amsterdam School of Cultural Analysis, the School of Journalism, Media and Culture at Cardiff University, the European Research Council, and the Research Priority Areas of the University of Amsterdam Global Digital Cultures and Amsterdam Center for European Studies. Finally, a big heartfelt thanks goes to Geert Lovink and his INC team, for believing in this project from the start and giving us the chance to experiment with multilingualism and knowledge sharing.

just out: ‘Latin American Visions for a Digital New Deal: Towards Buen Vivir with Data’

Stefania Milan and Emiliano Treré, co-founders of the Big Data from the South Research Initiative, have contributed a piece entitled ‘Latin American Visions for a Digital New Deal: Towards Buen Vivir with Data’ to the essay collection ‘A Digital New Deal: Visions of Justice in a Post-Covid World’, edited by the Just Net Coalition and IT for Change (India). Their piece is accompanied by a beautiful illustration by Mansi Thakkar.

Read the project description, and download the full collection as pdf from this link.

The Just Net Coalition and IT for Change invite you to explore and engage with our Digital New Deal essay series, a thoughtfully curated set of long reads authored by passionate and committed scholars, activists and visionaries from around the world. In these essays, authors reflect on the current global Covid moment and its challenges from various standpoints, and on how the digital fits into this equation. From activists steeped in long-standing battles against corporate capture of our resources and pushing for food sovereignty, labor rights, climate justice and equitable development, to scholars pondering the new questions of the internet, data, AI and the state of our public sphere, to practitioners seeking to address the disenfranchisement of countless communities and people from digital systems, the Digital New Deal captures the current anxieties, challenges, hopes and visions for the future. Beyond calling out what ails the world, our authors set for themselves, in these poignant, informative, and radical pieces, the difficult challenge of outlining progressive solutions… to future-gaze, imagine new possibilities and reclaim the digital for justice.


[BigDataSur-COVID] Digital Social Protection during COVID-19: The Shifted Meaning of Data during the Pandemic

Silvia Masiero reflects on changes in digital social protection during the pandemic, outlines the implications of such changes for data justice, and co-proposes an initiative to discuss them.

by Silvia Masiero

One year ago today was my last time leaving a field site. Leaving friends and colleagues in India, promising to return as usual for the Easter break, it was hard to imagine being suddenly plunged into the world we live in today. As a researcher of social protection schemes, little did I know that my research universe – digital anti-poverty programmes across the Global South – would change as it has over the last 12 months. As I have recently stated in an open commentary, COVID-19 has had manifold implications for social protection systems, implications that require reflection as conditions of pandemic exceptionalism persist over time and across regions.

The Shifted Meaning of Beneficiary Data

My latest study was of a farmer subsidy programme based on the datafication of recipients – a term used in previous work to denote the conversion of human beneficiaries into machine-readable data. The programme epitomises the larger global trend of matching demographic and, increasingly, biometric credentials of individuals with data on eligibility for anti-poverty schemes, such as poverty status, family size and membership of protected groups. Seeding social protection databases with biometric details, a practice exemplified by India’s Aadhaar, is supposed to combat exclusion and inclusion errors alike, assigning benefits to all entitled subjects while screening out all those not entitled. At the same time, quantitative and qualitative research has shown the limits of datafication, especially its consequences in reinforcing the exclusion of entitled subjects whose ability to authenticate is reduced by recognition failures, sometimes resulting in denial of vital schemes.
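To make the exclusion-error mechanism concrete, here is a deliberately toy sketch. Nothing in it describes Aadhaar or any real scheme: the record IDs, fingerprint strings and the `authenticate` function are all invented for illustration.

```python
# Toy model (hypothetical data, not any real system) of biometric "seeding":
# a benefit is released only when a live scan matches the enrolled template,
# so an entitled person whose scan fails to match is wrongly excluded.

ELIGIBLE = {
    "R001": {"name": "A", "biometric": "fp-aaa"},  # entitled, scans reliably
    "R002": {"name": "B", "biometric": "fp-bbb"},  # entitled, worn fingerprints
}

def authenticate(record_id, scanned_biometric):
    """Grant the benefit only if the live scan matches the enrolled template."""
    record = ELIGIBLE.get(record_id)
    return record is not None and record["biometric"] == scanned_biometric

print(authenticate("R001", "fp-aaa"))  # True: benefit disbursed
# B is entitled, but their degraded fingerprints produce a scan that does not
# match the enrolled template, so a vital scheme is denied: an exclusion error.
print(authenticate("R002", "fp-xxx"))  # False
```

The point of the sketch is that the exclusion is produced by the matching step itself, not by any eligibility rule: B is in the database and fully entitled, yet never passes the gate.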

During the pandemic, as numerous contributions to this blog have illustrated, existing vulnerabilities have deepened and new ones have emerged, expanding the pool of people in need of social protection. Instances of the former are daily-wage and gig workers, who have seen their existing subalternities deepened in the pandemic, whether through loss of income or severely heightened risks at work. Instances of new vulnerabilities, instead, are associated with the “new poor” of the pandemic, affected in many ways by the fallout of economic paralysis across the globe. The result is a heightened global need for social protection to work smoothly, making the affordance of inclusiveness – being able to cover those in need, old and new – arguably a higher priority than that of exclusiveness, aimed at “curbing fraud” through secure biometric identification.

Since its launch in May 2020, this blog has hosted contributions on social protection schemes from countries including Colombia, Peru, India, Brazil and Spain, all highlighting the heightened need for social protection under COVID-19. While describing different world realities, all contributions remark how the vulnerabilities brought by COVID-19 call for means to combat wrongful exclusions, for example using excess stocks of commodities to expand scheme coverage. Against the backdrop of a world in which the priority was “curbing fraud” through the most up-to-date biometrics, the pandemic threw us into a world in which inclusion of those in need takes priority among the roles of anti-poverty scheme datafication. The first implication, for researchers of digital social protection, is the need to devise ways to learn from examples of expanded coverage in social protection, of which India’s National Food Security Act has offered an important example over the last decade.

Social Protection in the Pandemic: New Data Injustices

As the edited book “Data Justice and COVID-19: Global Perspectives” notes, the hybrid public-private architectures that emerged during COVID-19 have generated new forms of data injustice, detailed in the volume through 33 country cases. Along with important debates on the meaning of data in a post-pandemic world, the book opens the question of the data justice implications of COVID-19 for digital social protection. Drawing on contributions published in this blog, as well as reports of social protection initiatives taken during the pandemic, I have recently highlighted three forms of data injustice – legal, informational and design-related – that need monitoring as the pandemic scenario persists.

From a legal perspective, injustice arises from the subordination of entitlements to the registration of users in biometric databases, which becomes a condition for access – leading to scenarios of forced trading of data for entitlements, widely explored in the literature before COVID-19. The heightened need for social protection in the pandemic deepens the adverse implications of exclusion, exacerbating the consequences of injustice for those left out of the biometric datasets. Stories from urban poor contexts ranging from Nebraska, US to São Paulo, Brazil, underscore the same point: the legal data injustice of exclusion, problematic before, has become even more so amid the pandemic’s economic backlash on the poor.

From an informational perspective, the way entitlements are determined – specifically, the use of citizens’ information across databases to determine entitlements – has become crucial during the pandemic. Two cases from this blog detail this point especially well. In Colombia, information to determine eligibility for the Ingreso Solidario (Solidarity Income) programme was combined from existing data repositories, but without detail on how the algorithm combined information and thus on how eligibility was determined. In Peru, subsidies have leveraged information gathered through databases such as the Census, the property registry and electricity consumption records, again without further light on how information was combined. Uncertainty about eligibility criteria, beyond deepening pandemic distress, arguably limits beneficiaries’ ability to contest eligibility decisions, due to lack of clarity on the very grounds on which these are taken.

Finally, design-related data injustices arise from the misalignment of datafied social protection schemes with the effective needs of beneficiaries. In the pandemic, the trade-off brought by biometric social protection – increased accuracy of identification at the cost of greater exclusion – has been pushed to its extreme, as the implications of denial of subsidy are now extreme for households left out of social protection schemes. This brings to light a trade-off already known to be problematic well before the pandemic started, and further heightened by studies questioning the effective ability of biometrics to increase the accuracy of targeting. As a result, a third, design-related form of data injustice needs monitoring as we trace the evolution of social protection systems through COVID-19.

Ways Forward: A Roundtable to Discuss

As the pandemic and its consequences persist, new ways are needed to appraise the shifts in datafied social protection that the crisis has brought. Not surprisingly, my promise to return to the field for Easter 2020 could not be kept, and established ways to conduct research on global social protection needed reinvention. It is against this backdrop that a current initiative, launched in the context of the TILTing Perspectives Conference 2021, may make a substantial contribution to knowledge on the theme.

The initiative, a Roundtable on COVID-19 and Biometric ID: Implications for Social Protection, invites presentations on how social protection systems have transformed during the pandemic, with a focus on biometric social protection and the evolution of its roles and systems. Abstracts (150-200 words) are invited as submissions to the TILTing Perspectives Conference, with the objective of gathering presentations from diverse world regions and drawing conclusions together. Proposals for the role of discussant – to take part in the roundtable and formulate questions for debate – are also invited through the system. At a time when established ways of doing fieldwork are no longer practicable, we want the roundtable to be an occasion to advance collective knowledge, together deepening our awareness of how social protection has changed in the first pandemic of the datafied society.

Submissions to the Roundtable are invited at: https://easychair.org/cfp/Tilting2021


[BigDataSur-COVID] COVID-19 and the Stripping of Power from the Edges

By Niels ten Oever

At the start of the COVID-19 pandemic, people wondered whether the internet infrastructure would be capable of handling the increase in data traffic. When many people started working, streaming, and following the rapidly unfolding news on social media from home, many expected this would strain the internet infrastructure. Some European politicians were so concerned that they called on Netflix to lower the resolution of its video streams. Why did it turn out that the internet infrastructure was able to cope with the increasing demand? The answer is that the internet no longer works as most people think it does. An extra layer of control was added to the internet by Content Delivery Networks. This chapter will discuss how pressure on the infrastructural margins of the internet is strengthening the center of the network, and examine how COVID-19 has exacerbated this trend.

In 2011, the Tunisian government started heavily censoring the internet in response to popular uprisings in the country. In response, many internet users engaged in what is commonly called a Distributed Denial of Service (DDoS) attack on the Tunisian government’s websites. In a DDoS attack, hundreds or even thousands of computers try to reach a website at the same time. This can overload the website’s server, or the connection to the server, and thus render the website unavailable to internet users. A sudden surge in a website’s popularity can produce similar behavior: when many users try to connect at the same time, the traffic effectively renders the site or service unavailable. Eight Tunisian government websites were forced offline.
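The overload mechanism can be sketched in a few lines. This is a back-of-the-envelope model, not a measurement: the capacity figure and the `serve` function are invented for illustration.

```python
# Toy model of why simultaneous requests can make a site unavailable:
# a server handles only a fixed number of concurrent requests; everything
# beyond that capacity is dropped, and those users see an outage.

def serve(requests, capacity):
    """Split incoming requests into served and dropped, given server capacity."""
    served = min(requests, capacity)
    return {"served": served, "dropped": requests - served}

# Normal load: well within capacity, everyone gets through.
print(serve(requests=500, capacity=1_000))     # {'served': 500, 'dropped': 0}

# DDoS attack or flash crowd: demand dwarfs capacity, most users are turned away.
print(serve(requests=50_000, capacity=1_000))  # {'served': 1000, 'dropped': 49000}
```

Whether the flood is a deliberate attack or simple popularity makes no difference to the server, which is exactly the ambiguity the paragraph above describes.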

In response to the DDoS attacks, and to prevent down-time of servers due to their popularity, Content Delivery Networks (CDNs) came to be used more and more. CDNs are globally distributed networks of proxy servers, often placed in data centers close to Internet eXchange Points (IXPs). While users think they are connecting to a popular website far away, they are in fact connecting to a CDN server located near them. While you think you are streaming a video from a jurisdiction you consider safe, the video is more likely stored close to the network controlled by your Internet Service Provider (ISP) or your telecommunications operator.
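The core mechanics can be sketched as a simple caching proxy. This is a minimal illustrative model (the `OriginServer` and `EdgeServer` classes are invented for this sketch; real CDNs add DNS-based request routing, cache expiry, geo-targeting, and much more):

```python
# Minimal sketch of what a CDN edge (proxy) server does: serve a cached copy
# if it has one, otherwise fetch from the distant origin once and keep it for
# subsequent nearby users.

class OriginServer:
    """The website's 'real' server, possibly on another continent."""
    def __init__(self, content):
        self.content = content
        self.hits = 0  # counts how often the origin is actually contacted

    def fetch(self, url):
        self.hits += 1
        return self.content[url]

class EdgeServer:
    """A CDN proxy placed close to users, e.g. near an internet exchange point."""
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}

    def get(self, url):
        if url not in self.cache:                 # cache miss: ask the origin once
            self.cache[url] = self.origin.fetch(url)
        return self.cache[url]                    # cache hit: served locally

origin = OriginServer({"/video": "popular video bytes"})
edge = EdgeServer(origin)

# A thousand nearby users request the same video during lockdown...
for _ in range(1000):
    edge.get("/video")

# ...yet the origin was contacted only once; the edge absorbed the load.
print(origin.hits)  # 1
```

The same structure also produces the opacity this piece goes on to describe: the user only ever interacts with the edge, and never learns which server actually answered.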

When the internet was designed, its engineers adopted the end-to-end principle as a central motto. This commitment is reflected in the mission statement of the Internet Engineering Task Force (IETF), the institution responsible for co-developing and standardizing the internet infrastructure:

The Internet isn’t value-neutral, and neither is the IETF. We want the Internet to be useful for communities that share our commitment to openness and fairness. We embrace technical concepts such as decentralized control, edge-user empowerment and sharing of resources, because those concepts resonate with the core values of the IETF community. These concepts have little to do with the technology that’s possible, and much to do with the technology that we choose to create (RFC3935).

When users connected to the internet during the COVID-19 pandemic, it may have seemed that they were edge-users connecting to another endpoint over “dumb pipes”, leveraging the powers of decentralized control. The truth is quite the opposite. The internet infrastructure held up during the COVID-19 pandemic not because people were getting their content from the global internet, but because they were getting it from a data center near them. You may think this is actually a good thing, since it kept the internet from collapsing? Maybe. But CDNs are merely the latest cause and consequence of centralization on the internet. The difference between dedicated CDNs and other large players such as Google and Facebook (which operate their own CDNs) is that the dedicated ones remain largely invisible. Some of you might have heard of Cloudflare, but what about Akamai, Fastly, and Limelight?

In 2017, Cloudflare unilaterally removed the neo-nazi forum and website Daily Stormer from its services. In 2019, it similarly removed the imageboard 8chan after two shootings in the United States. The company cited the following reason for the removal: “In the case of the El Paso shooting, the suspected terrorist gunman appears to have been inspired by the forum website known as 8chan. Based on evidence we’ve seen, it appears that he posted a screed to the site immediately before beginning his terrifying attack on the El Paso Walmart killing 20 people”. The interesting point is that no one asked Cloudflare to do this; the company removed the content of its own volition, without a clear process in place. Many critical internet scholars, such as Suzanne van Geuns, Corinne Cath, and Kate Klonick, have reported on this. While such decisions show the concrete impact these companies can have, it is perhaps even more telling that one hears very little about them.

CDN operators are perhaps the internet infrastructure companies that benefited most from the COVID-19 pandemic, because of the increased traffic to the websites they provide services to. But what about the people who requested information from those websites? Technically, they were served by a different server than the one they thought they were connected to. They might have received something other than what they asked for, because CDNs allow for particularly fine-grained geography-based adaptation of content. The CDN server that served a user in Senegal might have held different data than the one that served a user in Brisbane. And there is almost no way of knowing which particular CDN server served you, or of bypassing the CDN. In this way, the opacity of internet infrastructure was exacerbated by the COVID-19 pandemic. In other words, the COVID-19 pandemic led to further black-boxing of the internet infrastructure, making it harder for users to understand how it works. While this might make the internet faster and more available, it does not make the internet more reliable. Arguably, it makes the internet a better tool for control, because it increases power asymmetries between users and transnational corporations.

In 2011, Tunisian internet users were able to use the internet infrastructure against their own government. In 2020, it is nearly impossible for users around the world to even know where the websites they are accessing are located, let alone take them down. The internet is no longer a bazaar. The COVID-19 pandemic helped fortify the industrial zone that the internet has become, one that allows users to connect only from the outside, without a view of, or control over, the inside. The internet has become a smart network, with not-so-smart edges.

Niels ten Oever is a post-doctoral researcher at the University of Amsterdam (The Netherlands) and Texas A&M University (USA), associated also with the Centro de Tecnologia e Sociedade at the Fundação Getúlio Vargas, Brazil. His research focuses on how norms such as human rights get inscribed, resisted, and subverted in the Internet infrastructure through transnational governance. Previously, Niels worked as Head of Digital for ARTICLE19 and served as programme coordinator for Free Press Unlimited. He holds a cum laude MA in Philosophy and a PhD in Media Studies from the University of Amsterdam.

[BigDataSur-COVID] Alternative Perspectives on Relationality, People and Technology During a Pandemic: Zenzeleni Networks in South Africa

By Nic Bidwell & Sol Luca de Tena

Many rural communities in Africa have characteristics that are neither represented by data about COVID-19 nor addressed by public health information designed to help people protect themselves. This is not to say that rural inhabitants are unaffected by information designed for different populations; grassroots initiatives have been vital in countering the impacts of this mismatch. Here, we reflect on the role of community networks in customising information and services for rural inhabitants during the pandemic, and on how they reveal constructs embedded in data representation and aggregation. Community networks (CNs) are telecommunications initiatives that are installed, maintained, and operated by local inhabitants to meet their own communication needs. Rey-Moreno’s 2017 survey identified 37 community networks in 12 African countries and, with the success of four Annual African CN Summits, more are emerging every year. Our account focuses on Zenzeleni Networks in South Africa. We begin by introducing its response to COVID-19 and its work to ensure that health information suited local circumstances. We end by arguing that such examples of contextualisation reveal logics about personhood that are vital to tackling the disease, but that are not represented by the individualist models embedded in datafication.

Zenzeleni’s Response to COVID-19

Zenzeleni is a community-owned wireless internet service provider that has connected more than 13,000 people and 10 organisations to the internet in South Africa’s Eastern Cape province. The network is owned by amaXhosa inhabitants (including 40% women) and is run by two local cooperatives. The cooperative approach keeps internet access costs up to 20 times lower than the services offered by existing telecommunications operators, and expenditure is retained locally. The non-profit organisation Zenzeleni Networks NPC was established through the cooperatives, and provides vital connections with regulatory authorities and telecommunications expertise. Zenzeleni was seeded in Mankosi, a remote district of 12 villages, by PhD researchers at the University of the Western Cape in Cape Town, following prolonged collaborations on solar electricity and media sharing technologies. Over the past eight years, the community network has evolved into a social innovation ecosystem in which rural communities own their telecommunication businesses. Like other community networks in the global south, Zenzeleni has created employment and developed technical skills in one of the most disadvantaged areas in South Africa.

As well as providing more affordable and higher quality network services than the alternatives, Zenzeleni’s embeddedness directly links technology and media considerations to local life. As the COVID-19 lockdown ensued, inhabitants working, studying or seeking work in cities returned to their rural family homes. Zenzeleni played a vital role in providing continuity to residents’ urban lives, adding network infrastructure to extend the community access points and ensuring free and open access to education websites, including those of all the nation’s universities and further education colleges. Indeed, usage of access points has tripled since the pandemic began.

Not only are health services difficult to access, but the local populations served by Zenzeleni are particularly vulnerable; they have a high incidence of HIV, tuberculosis, and child and maternal health issues. Thus, Zenzeleni sourced funding to connect the District Hospital. Just as importantly, however, from the pandemic’s onset the network started to address health information needs. Like other groups across Africa, Zenzeleni immediately recognised the mismatch between local circumstances and the health information issued by the WHO and South Africa’s national government. Not only was information initially unavailable in most of Africa’s 2,000 languages; even when advice was in a home language, it was ill-suited to many rural contexts. Recommending regular handwashing, for instance, is inappropriate for Mankosi’s inhabitants, who share a few unreliable taps in their villages because water is not supplied to households. Similarly, guidelines on shared transport are irrelevant when only one bus a day connects villages on a five-hour round trip to the nearest supermarket. Zenzeleni ensured free and open access to official health websites. Its understanding of the local context also launched projects that increased access to relevant information resources and raised awareness of health strategies matched to local circumstances.

My Mask Protects You, and Yours Protects Me: Accounting for Personhood in the Datafied Society

Providing health information in home languages, suited to local constraints, is vital, but efficacy in managing a socially-spread disease requires integrating deeper insights about the nuances of local social practices and relations. For instance, people returning to villages from cities bring information of varying legitimacy, from sound recommendations to outright falsehoods. Locally, this information was interpreted through the assumption that information in cities is inherently more credible because cities are highly connected. The valorisation of information associated with electronic media has been discussed elsewhere in rural southern Africa. An implicit part of Zenzeleni’s role has been to foster critical approaches to disinformation by directing inhabitants to legitimate information and ensuring that information was properly contextualised. At the same time, however, promoting information access must account for sharing practices. While internet hotspots safely offer socially-distanced access, many inhabitants group around shared tablets and phones.

Device-sharing practices in Mankosi are not merely about limited access to devices. They also involve a cultural construct of relationality. Devices like smartphones are embedded with the logic that personhood exists prior to interpersonal relationships (Bidwell, 2016). This individualist logic contrasts with the philosophy of Ubuntu, an isiXhosa word often translated as “I am because we are.” This collective logic assumes that neither community nor individual exists prior to the other, and that being human depends on the mutual and dynamic constitution of other humans. As Eze explains:

We create each other and need to sustain this otherness creation. And if we belong to each other, we participate in our creations: we are because you are, and since you are, definitely I am.

The importance of the construct of Ubuntu to effective contextualisation is illustrated by Zenzeleni’s local volunteers’ observations that community members assisted each other in putting on face-masks. Senses of mutual responsibility are straightforward in communities such as Mankosi. However, routinely performing that responsibility involves physical help, and since none of the guidelines explain how to combine social distancing with helping another person put on a mask, this creates an ambiguity.

The challenge of translating a guideline such as “wear it for me” reveals an important role for community networks in COVID-19 times, and in datafication more generally. Much like the assumption that a person puts on their mask themselves, prevalent models of data extraction, representation, and personalisation cultivate and amplify an individualist logic. Yet, as many commentators have suggested, the best protection we have against the virus is Ubuntu. Zenzeleni and other community networks around the world offer an alternative perspective on relationality, people, and technology.

Nicola Bidwell is an adjunct professor at the International University of Management, Namibia, and a researcher at University College Cork, Ireland. She has applied her expertise in community-based action research for technology design in the Global South for the past 15 years, and has catalysed thought about indigenous-led digital design and decoloniality. Nic is an associate editor of the journal AI & Society: Knowledge, Culture and Communication.

Sol Luca de Tena has over a decade of experience in strategic project management within technology development, capacity building, social impact, and policy, with a focus on utilising technologies to address environmental and social challenges. She is currently the acting CEO of Zenzeleni Networks Not for Profit Company, supporting the operation and seeding of community networks in rural communities in South Africa. She also leads various projects that seek to address the digital divide in a human-centred approach, and collaborates on various working groups and forums on community networks in Africa and around the world.

[BigDataSur-COVID] Towards Civic Data Policies: Participatory Safeguards in COVID-19 Times

By Arne Hintz

The pervasive tracing, tracking, and analysing of citizens and populations has emerged as the trade-off of an increasingly datafied world. Citizens are becoming more transparent to the major data-collecting institutions of the platform economy and the state, while they have limited possibilities to intervene in processes of data governance, control the data that is collected about them, and affect how they are profiled and assessed through data assemblages. The COVID-19 pandemic has highlighted the centrality of these dynamics. Contact tracing and the detailed identification of outbreak clusters have been essential responses to COVID-19. Yet detailed data about our movements, interactions and pastimes is now tracked, stored, and analysed, both “online” through the use of contact-tracing apps and “offline” (e.g., when we fill in a form at a bar or restaurant). The rise of tracking raises the question of how exactly data is collected and analysed, by whom, for what purposes, and with what limitations. Essentially, it signals the necessity of legal safeguards to ensure that data analytics fulfil their purpose while preventing privacy infringements, discrimination, and the misuse of data. The COVID-19 pandemic thus alerts us to the importance of effective regulatory frameworks that protect the rights and freedoms of digital citizens. It also demands public involvement in a debate that affects our lives during the pandemic and beyond.

The wider context of data policy in the wake of major data controversies involving both public and commercial institutions – from the Snowden revelations to Cambridge Analytica – is currently ambiguous. On the one hand, it reflects a deeply entrenched commitment to expansive data collection. On the other hand, it increasingly recognises the need for enhanced data protection and citizens’ data rights. In many countries, the possibilities for monitoring people’s data traces (particularly by state agencies) have significantly expanded. The UK Investigatory Powers Act of 2016 serves as a stark example, because it legalised a broad range of measures, including the “bulk collection” of people’s data and communication; the retention of “internet connection records” (i.e., people’s web browsing habits); and “computer network exploitation” (i.e., state-sponsored hacking into the networks of companies and other governments, as well as the devices of individual citizens).

At the same time as these encroachments, we have also seen the strengthening of data protection rules, most prominently by the European Union General Data Protection Regulation (GDPR) in 2018. The GDPR enhances citizen control over data by providing rights to access and withdraw personal data, request an explanation for data use, and deny consent to data tracking by platforms. It requires that data be collected only for specific purposes to reduce indiscriminate data sharing and trading. The GDPR also limits the processing of sensitive personal data. While some elements of the GDPR have been controversial and the regulation overall is often described as insufficient, it has been recognised as an important building block towards a citizen-oriented data policy framework. The emerging policy environment of data collection and data use has been significant in societies that are increasingly governed through data analysis and processes of automated decision-making. Profiling citizens and segmenting populations through detailed analysis of personal and behavioural data are now at the core of governance processes and shape state-citizen relations.

What does the shifting data environment mean in COVID-19 times? How should regulatory frameworks enable and constrain the tracking and tracing of virus outbreaks, and what boundaries should exist? If we accept that some data collection and analysis is useful to address the pandemic and its serious health implications, the purpose limitation of this data (as highlighted by the GDPR) becomes crucial. In some countries, contact-tracing apps were designed to track a much wider range of data than necessary for tracing infection chains, enabling government agencies to use that data for non-medical tracking purposes. To avoid contact tracing becoming a Trojan horse for widespread citizen surveillance, strict purpose limitation would be an essential cornerstone of a robust regulatory framework. Similarly, limitations on the collection of sensitive data, and the deletion of all data at fixed times during or after the pandemic, would be core components of such a framework. While it may be debatable whether wider data collection and sharing would be acceptable as long as the affected individuals give their consent, a consent model often creates pressures and incentives for citizens to hand over data against their will and interest, which makes strict prohibitions seem a more appropriate mechanism. The COVID-19 contact-tracing case thus points to some of the elements that are increasingly discussed and regulated as part of policy reforms such as the GDPR, and it highlights the challenges of indiscriminate data collection.

Indiscriminate data collection also poses questions about who should develop such policy, and whether broader public involvement would be desirable or even necessary. The COVID-19 pandemic helps us explore the role of citizens as policy actors. Contributions to the regulatory and legislative environment by civic actors outside the realm of traditional “policymakers” have received increased attention in recent years. These range from the role of civil society in multi-stakeholder policy processes, to policy influences by social movements, to the development of specific legislation by citizens in the form of what has been called “crowd law” and “policy hacking”. The COVID-19 case demonstrates multiple dimensions of these kinds of public engagement. It shows the strong normative role of technical developers arguing for decentralised data storage in contact-tracing apps (e.g., the Decentralised Privacy-Preserving Proximity Tracing project), who in many cases have prevailed over initial government intentions to centralise data handling. Further, we have seen legal scholars taking the lead in proposing relevant legislative frameworks, for example by developing a dedicated Coronavirus Safeguards Bill for the UK (which has not, so far, been adopted by the UK government but has still influenced the debate on contact tracing). The public discourse on COVID-19 responses in many countries has also considered the problem of data collection and possible privacy infringements, thus placing data analytics firmly on the public agenda.

The current pandemic has shown that emergency situations require the rapid adoption of legal safeguards, as well as a wider public debate on which data analyses are acceptable and where the boundaries lie. Policy components from recent regulatory frameworks such as the GDPR can be an important part of this endeavour, as should critical reflection on data extraction laws such as the Investigatory Powers Act. Expert proposals from civil society have promoted rules that address problems raised by the pandemic while protecting civic rights. At the “margins” of established policy processes, these interventions by civil society and the public play a significant role in exerting normative pressure towards civic data policies.

 

About the author

Arne Hintz is Reader at Cardiff University’s School of Journalism, Media and Culture and Co-Director of its Data Justice Lab. His research focuses on digital citizenship and the future of democracy and participation in the age of datafication. He is Co-Chair of the Global Media Policy Working Group of the International Association for Media and Communication Research and co-author of Digital Citizenship in a Datafied Society (Polity, 2019).

[BigDataSur-COVID] Africa’s Responses to COVID-19: An Early Data Science View

By Vukosi Marivate, Elaine Nsoesie & Herkulaas MVE Combrink

 

COVID-19 is a unique event that has shaken the world. It has disrupted the way we live, how we work, and what we think. Across Africa, the arrival of COVID-19 also drew attention to the continent. We have had to live through grim forecasts of how “badly” the continent was going to respond to the virus, or claims that the continent was different and would not feel the impact. Given that we are still in the midst of the pandemic, we face the hard task of sifting through opinions and reports to reach a better understanding of what has happened. We have had to deal both with trying to measure impact better and with claims that natural remedies would prevent spread. As data scientists, we believe that what is, and is not, measured can obscure shortcomings that might otherwise enlighten us on how to deal better with such situations in the future.

Africa has significant experience dealing with infectious disease epidemics. For example, countries in West and Central Africa have responded over the decades to Ebola outbreaks, and Southern Africa has had HIV/AIDS to deal with. The experiences gained from these epidemics have prepared African health systems to respond to the pandemic. We are likely to see many research papers in the coming years dissecting what impact this preparedness may have had. In this article, we focus on how Africa worked to track COVID-19 and what that might mean for data scientists in the future. What should we learn? Where did things go well? Where did things fail? How do we improve?

When we Measure the Spread

As the pandemic spread across the Northern Hemisphere, questions formed throughout the African continent about the potential impact of COVID-19 on different African countries. In many countries, COVID-19 working groups were set up. These working groups were typically made up of government and external experts who planned to look at different factors in the responses to COVID-19. In many instances, these groups used data to track COVID-19 and to assist in modelling and data-driven decision making. One would have noticed the proliferation of country-led dashboards and infographics on the spread of COVID-19. In some countries, numbers were difficult to track and understand because of low numbers of tests. Tracking the spread of COVID-19 required a pipeline that could test, report, and aggregate information in a meaningful way for epidemiological and clinical surveillance.
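The test–report–aggregate pipeline described above can be sketched in miniature. The snippet below is purely illustrative: the provinces, field names, and figures are made up for the example, not real surveillance data, and a production pipeline would of course read from laboratory reporting systems rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical daily laboratory records: (region, tests_done, positives).
# All names and numbers are illustrative, not real surveillance data.
records = [
    ("Gauteng", 1200, 150),
    ("Free State", 300, 20),
    ("Gauteng", 800, 90),
    ("Western Cape", 950, 110),
]

def aggregate(records):
    """Aggregate raw test records into per-region totals and positivity rates."""
    totals = defaultdict(lambda: {"tests": 0, "positives": 0})
    for region, tests, positives in records:
        totals[region]["tests"] += tests
        totals[region]["positives"] += positives
    # Derive a positivity rate for dashboard-style reporting.
    return {
        region: {**t, "positivity": round(t["positives"] / t["tests"], 3)}
        for region, t in totals.items()
    }

summary = aggregate(records)
print(summary["Gauteng"])  # tests: 2000, positives: 240, positivity: 0.12
```

Even this toy example shows why low testing volumes make numbers hard to interpret: a region reporting only a few hundred tests yields a positivity rate that swings wildly with each new batch of results.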

Challenges in Reporting

We have seen international challenges to the free, transparent, and open reporting on the severity of COVID-19. Some African countries had these challenges as well, from denying that the pandemic existed to refusing to release information on testing and confirmed cases. These challenges cannot be explained by simplistic reasons such as political pandering; more likely they indicate shortfalls in the resources available to respond to the pandemic. Countries have been stretched thin in a short period of time, and systems may not have the capacity to change direction so quickly. In this environment, how do you compile statistics and share meaningful information with both the public and policy stakeholders?

COVID-19 Will Still be With us

No one should underplay how COVID-19 will ultimately impact African countries. Its impact will not only be on healthcare; many sectors of society will likely be reeling from the sustained effects of the pandemic. There is already mounting evidence of adverse secondary effects in areas such as education, crime, healthcare, and the economy. Decisions on border and business closures made during the early stages of the outbreak may also have lasting effects on countries in Africa.

Tracking More than Health

COVID-19 has affected more than just health, and the effects will be with us for some time. As some countries move into second waves, we are now deciding how to rehabilitate economies, education systems, and tourism. All of these decisions require data that crosses between national statistics offices and stakeholders. To better plan recoveries and interventions, organisations and states are working to use data to make choices about which interventions might be best. This process extends the need for data beyond the healthcare system toward a coordinated response driven by public, private, and non-governmental institutions. Data and data-related issues are ultimately a reflection of the people and capacity issues present within a system. If we are to combat negative outcomes, we should all work toward capacitating our nations to prepare for the future.

Lessons we Must Learn

Counting is hard. It requires will, cooperation, and resources that together improve policy. We need to learn how to set up data infrastructure so that counting can catalyze better data practices in the future. Yet setting up a data infrastructure requires money and human capacity. Across the global population, we will have more emergencies to deal with. As such, governments must prepare adequately during the “peace times.” If we do not prepare, we will not get ahead and manage future crises better. Investing in capacity and building the skills required to disseminate information more reliably helps prepare us for the future. We should never shy away from training, innovation, and incentivising education for the purpose of growth and improvement. Technical skills across all sectors, especially within healthcare, have served vital roles during the pandemic and will continue to do so. Capacitating the healthcare system with the technical skills to manage information, actively strive for excellence, and innovate remains the foundation of preparedness, and drives the proactive strategies we need to be successful as a society.

Vukosi Marivate (https://dsfsi.github.io/) is the ABSA UP Chair of Data Science at the University of Pretoria. A large part of his work over the last few years has been in the intersection of Machine Learning and Natural Language Processing. Vukosi is interested in Data Science for Social Impact, and uses local challenges as a springboard for research. Vukosi is a co-founder of the Deep Learning Indaba, the largest Artificial Intelligence grassroots organisation on the African continent, aiming to strengthen African Machine Learning. He tweets at @vukosi.

Elaine Nsoesie is an Assistant Professor at the Boston University School of Public Health. She has a PhD in Computational Epidemiology, an MS in Statistics, and a BS in Mathematics. Her research is focused on the use of digital data and technology to improve health in global communities. Her work has also addressed bias in digital data. She is on the advisory boards of Data Science Africa and Data Science Nigeria. She is also the founder of Rethé (rethe.org), an initiative that provides scientific writing tools and resources to student communities in Africa to increase representation in scientific publications.

Herkulaas Michael Combrink is a medical biological scientist with more than six years of data science experience with “Big Data” from institutional databases. Over the past seven years, he has been active in both healthcare and education. Herkulaas has won several awards for his work in data science, data management, and healthcare. During the COVID-19 outbreak in the Free State, he was seconded to assist the Free State Department of Health with data science and surveillance support. Additionally, Herkulaas is a PhD candidate in computer science at the University of Pretoria, South Africa.