
[BigDataSur] The Challenge of Decolonizing Big Data through Citizen Data Audits [1/3]

Author: Katherine Reilly, Simon Fraser University, School of Communication

A curious thing happened in Europe after the creation of the GDPR. A whole new wave of data audit companies came into existence to service companies that use personal data. This is because, under the GDPR, private companies must audit their personal data management practices. An entire industry emerged around this requirement. If you enter “GDPR data audit” into Google, you’ll discover article after article covering topics like “the 7 habits of highly effective data managers” and “a checklist for personal data audits.”

Corporate data audits are central to the personal data protection frameworks that have emerged in the past few years. But among citizen groups and in the community, data audits are rarely discussed. The word “audit” is just not very sexy. It brings to mind green eyeshades, piles of ledgers, and a judge-y disposition. Also, audits seem like they might be a tool of datafication and domination. If data colonization “encloses the very substance of life” (Halkort), then wouldn’t data auditing play into these processes?

In these three blog posts, I suggest that this is not necessarily the case. In fact, we need to develop precisely the field of citizen data audits, because they offer us an indispensable tool for the decolonization of big data. The posts look at how audits contribute to upholding our current data regimes, an early attempt to realize a citizen data audit in Peru, and emerging alternative approaches. The posts in this series will be published over the coming weeks:

  1. The Current Reality of Personal Data Audits [find below]

  2. A First Attempt at Citizen Data Audits [link]

  3. Data Stewardship through Citizen Centered Data Audits [link]

 

The Current Reality of Personal Data Audits

Before we can talk about citizen data audits, it is helpful to first introduce the idea of auditing in general, and then unpack the current reality of personal data audits. In this post, I’ll explain what audits are, the dominant approach to data audits in the world right now, and finally, the role that audits play in normalizing the current corporate-focused data regime.

The aim of any audit is to check whether people are carrying out practices according to established standards or criteria that ensure proper, efficient and effective management of resources.

By their nature, audits are twice removed from reality. In one sense, this is because auditors look for evidence of tasks rather than engaging directly in them. An auditor shows up after data has been collected, processed, stored or applied, and they study the processes used, as well as their impacts. They ask questions like “How were these tasks completed, and were they done properly?”

Auditors are removed from reality in a second sense, because they use standards established by other people. An auditor might ask “Were these tasks done according to corporate policy, professional standards, or the law?” Auditors might gain insights into how policies, standards or laws might be changed, but their main job is to report on compliance with standards set by others.

Because auditors are removed from the reality of data work, and because they focus on compliance, their work can come across as distant, prescribed – and therefore somewhat boring. But when you step back and look at the bigger picture, audits raise many important questions. Who do auditors report to and why? Who sets the standards by which personal data audits are carried out? What processes does a personal data audit enforce? How might audits normalize corporate use of personal data?

We can start to answer these questions by digging into the criteria that currently drive corporate audits of personal data. These can be divided into two main aspects: corporate policy and government regulation.

On the corporate side, audits are driven by two main criteria: risk management and profitability. Personal data audits are no exception. Companies want to make sure that personal data doesn’t expose them to liabilities, and that use of this resource is contributing effectively and efficiently to the corporate bottom line.

That means that when they audit their use of personal data, they will check to see whether the costs of warehousing and managing data are worth the reward in terms of efficiencies or returns. They will also check to see whether the use of personal data exposes them to risk, given existing legal requirements, social norms or professional practices. For example, poor data management may expose a company to the risk of being sued, or the risk of alienating their clientele. Companies want to ensure that their internal practices limit exposure to risks that may damage their brand, harm their reputation, incur costs, or undermine productivity.

In sum, corporate data audits are driven by, and respond to, corporate policies, and those policies are organized around ensuring the viability and success of the corporation.

Of course, the success of a corporation does not always align with the well-being of the community. We see this clearly in the world of personal data. Corporate hunger for personal data resources has often come at the expense of personal or community rights.

Because of this, governments insist that companies enforce three additional regulatory data audit criteria: informed consent, personal data security, and personal data privacy.

We can see these criteria reflected clearly in the EU’s General Data Protection Regulation. Under the GDPR, companies must ask customers for permission to access their data, and when they do so, they must provide clear information about how they intend to use that data.

They must also account for the personal data they hold, how it was gathered, from whom, to what end, where it is held, and who accesses it for what business processes. The purpose of these rules is to ensure companies develop clear internal data management policies and practices, and this, in turn, is meant to ensure companies are thinking carefully about how to protect personal privacy and data security. The GDPR requires companies to audit their data management practices on the basis of these criteria.
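To make this kind of accounting more concrete, here is a minimal sketch, in Python, of the sort of record a company might keep for each category of personal data it holds and review during an audit. The GDPR does not prescribe any particular format, and all field names below are illustrative assumptions rather than regulatory terminology.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class PersonalDataRecord:
    """One entry in a hypothetical personal-data inventory reviewed during an audit."""
    category: str            # what the data is, e.g. "email address"
    data_subject: str        # whose data it is, e.g. "newsletter subscriber"
    source: str              # how it was gathered, e.g. "sign-up form"
    purpose: str             # the end it serves, e.g. "sending the monthly newsletter"
    legal_basis: str         # e.g. "consent", "contract", "legitimate interest"
    storage_location: str    # where it is held, e.g. "EU-hosted CRM database"
    accessed_by: List[str]   # which business processes or teams use it
    collected_on: date
    retention_until: date    # when it should be reviewed or deleted

# An auditor could then check each record for completeness and expired retention periods.
inventory = [
    PersonalDataRecord(
        category="email address",
        data_subject="newsletter subscriber",
        source="sign-up form",
        purpose="sending the monthly newsletter",
        legal_basis="consent",
        storage_location="EU-hosted CRM database",
        accessed_by=["marketing"],
        collected_on=date(2019, 5, 1),
        retention_until=date(2021, 5, 1),
    ),
]
overdue = [r for r in inventory if r.retention_until < date.today()]
print(f"{len(overdue)} record(s) past their retention date")
```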

Taking corporate policy and government regulation together, personal data audits are currently informed by 5 criteria – profitability, risk, consent, security and privacy. What does this tell us about the management of data resources in our current data regime?

In a recent Guardian piece Stephanie Hare pointed out that “the GDPR could have … [made] privacy the default and requir[ed] us to opt in if we want to have our data collected. But this would hurt the ability of governments and companies to know about us and predict and manipulate our behaviour.” Instead, in the current regime, governments accept the central audit criteria of businesses, and on top of this, they establish the minimal protections necessary to ensure a steady flow of personal data to those same corporate actors. This means that the current data regime (at least in the West) privileges the idea that data resides with the individual, and also the idea that corporate success requires access to personal data.

Audits reinforce the collection of personal data by private companies by ensuring that companies are efficient, effective and risk-averse in that collection. They also normalize corporate collection of personal data by providing a built-in response to security threats and privacy concerns. When the model fails, when there is a security breach or privacy is disrespected, audits can be used to identify the glitch so that the system can continue its forward march.

And this means that audits can, indeed, serve as tools of datafication and domination. But I don’t think this necessarily needs to be the case. In the next post, I’ll explore what we’ve learned from experimenting with citizen data audits, before turning to the question of how they can contribute to the decolonization of big data in the final post.

 

About the author: Dr. Katherine Reilly is Associate Professor in the School of Communication at Simon Fraser University in Vancouver, Canada. She is the recipient of a SSHRC Partnership Grant and an International Development Research Centre grant to explore citizen data audit methodologies alongside Derechos Digitales in Chile, Fundación Karisma in Colombia, Sulá Batsú in Costa Rica, TEDIC in Paraguay, Hiperderecho in Peru, and ObservaTIC in Uruguay.

[BigDataSur] Data journalism without data: challenges from a Brazilian perspective

Author: Peter Füssy

For the last decade, data journalism has attracted attention from scholars, some of whom have provided distinct definitions in order to understand the changes in journalistic practices. Each of them emphasizes a particular aspect of data journalism, from new forms of collaboration to open-source culture (Coddington, 2014). Yet, even among clashing definitions, it is possible to say they all agree that there is no data journalism without data. But which data? Relevant data does not generate itself, and it is usually bound up with power, economic, and/or political struggles (De Maeyer et al., 2014). While journalists in the Global North mostly benefit from open government mechanisms for public scrutiny, journalists working in countries with less transparency and a weaker democratic tradition still face infrastructural issues when putting together data and journalism (Borges-Rey, 2019; Wright, Zamith & Bebawi, 2019).

In the paragraphs that follow, I draw on academic research, reports, projects, and my own experience to briefly problematize one of the most recurring challenges for data journalism in Brazil: access to information. Since relevant data is rarely immediately available, a considerable part of data-driven investigative projects in Brazil relies on the Freedom of Information (FOI) law that forces governments to provide data of public interest. Also known as Access to Information or Right to Information, these acts are an essential tool to increase transparency, accountability, citizens’ agency, and trust. Yet, implementation of and compliance with the regulation in Brazil are inefficient at all levels of government (Michener, 2018; Abraji, 2019; Fonseca, 2020; Venturini, 2017).

More than just a bureaucratic issue inherited from years of dictatorship and lack of capacity, this inefficiency is also a political act. As Torres argued, taking Mexico as an example, institutional resistance to transparency is carried out through subtle and non-political actions that diminish data activists’ agency and have the effect of producing or reinforcing inequalities (Torres, 2020). In the case of Brazil, however, recent reports imply that institutional resistance to transparency is not necessarily subtle. It may also be a political flag.

Opacity and Freedom of Information

According to Berliner, the first FOI act was passed in Sweden in 1766, but the recent wave follows the example of the United States’ act from 1966. After the US, there is no clear pattern for adoption; for example, Colombia passed a law in 1985, while the United Kingdom did so only in 2000. FOI acts are more likely to pass when there is a highly competitive domestic political environment, rather than pressure from civil society or international institutions (Berliner, 2014).

Sanctioned in 2011, the Brazilian FOI act came into effect only in 2012. In its first six years, 611.3 thousand requests were filed with the federal government alone (excluding state and municipal bodies). The average of 279 requests per day, or 11 per hour, suggests how eager the population was to decentralise information. Although public authorities often give insufficient responses while claiming that the request was granted, it is possible to say the law was beginning to “stick”: of the total requests, 458.4 thousand (75%) resulted in partial or full access to the requested information (Valente, 2018).
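As a back-of-the-envelope check (mine, not part of the original reporting), the per-day and per-hour averages and the 75% share follow directly from the totals cited above:

```python
# Quick arithmetic check of the FOI figures cited above (illustrative only).
total_requests = 611_300   # requests filed with the federal government over six years
granted = 458_400          # requests resulting in partial or full access
years = 6

per_day = total_requests / (years * 365)
per_hour = per_day / 24
share_granted = granted / total_requests

print(f"{per_day:.0f} requests per day")    # ~279
print(f"{per_hour:.1f} requests per hour")  # ~11.6
print(f"{share_granted:.0%} granted")       # ~75%
```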

At the beginning of 2019, while president Jair Bolsonaro was making his first international appearance as the Brazilian head of state in Davos, vice president general Hamilton Mourão signed a decree limiting access to information by allowing government employees to declare public data confidential up to the top-secret level, which makes documents unavailable for 25 years (Folha de S.Paulo, 2019). Until then, this could be done only by the president and vice president, ministers of state, commanders of the armed forces and heads of diplomatic missions abroad. Facing a backlash from civil society and lacking the support in Congress needed to uphold the measure, Bolsonaro withdrew it a few weeks later. Nonetheless, reports show that problems with FOI requests are growing under his presidency.

Data collected from the Brazilian FOI electronic system by Agência Pública revealed that the Federal Government’s denials of requests with the justification of “fishing expedition” increased from 8 in 2018 to 45 in the first year of Bolsonaro’s presidency (Fonseca, 2020). The term “fishing expedition” is pejorative and usually relates to secret or unstated purposes, like using an unrelated investigation or questioning to find evidence to be used against an adversary in a different context. However, according to the Brazilian FOI law, the reason behind a request must not be taken into account when deciding whether or not to provide the information.

At the same time, journalists’ perception of the difficulty of retrieving information via FOI reached its highest level in 2019, when 89% of the journalists interviewed reported issues such as answers arriving after the legal deadline, missing information, data in closed formats, and denial of information (Abraji, 2019). In 2013, 60% reported difficulties, and the number dropped to 57% in 2015.

For example, after more than one year in office, Bolsonaro’s presidency still refuses to make public the guest list of his inauguration reception. In addition to the guest list, the government keeps secret more than R$ 15 million in expenses made with corporate cards of the Presidency and the Vice President’s Office. The information remains confidential even after the Supreme Court overturned the secrecy in November last year.

More from less

Despite the challenges, Brazilian journalists are following the quantitative turn in the field and creating innovative data-driven projects. As reported by the Brazilian Association of Investigative Journalism (Abraji), at least 1,289 news stories built on data from FOI requests were published from 2012 to 2019. In 2017, the “Ctrl+X” project, which scraped thousands of lawsuits to expose politicians trying to silence journalists in the courts, won a prize at the Data Journalism Awards organized by the Global Editors Network.

The following year, G1 won the public choice award with a project that tracked every single murder in the country for a week. The results from the “Violence Monitor” showed a total of 1,195 deaths, one every eight minutes. This project did not rely on FOI requests, however, but on an unprecedented collaboration of 230 journalists employed by Globo, the biggest media group in Brazil. They gathered the data from scratch at police stations all over the country in order to tell the stories of the victims. In addition, G1 partnered with the Universidade de São Paulo for the analysis and launched a campaign on TV and social media so that people could help identify some of the victims.

Despite the lack of resources, freedom, and safety, these projects show that data journalism can be a tool for rebuilding trust with audiences. However, activism to break down the resistance to transparency is an even more prominent challenge when opacity seems to be encouraged by institutional actors.

 

About the author

Peter is a journalist trying to explore new media in depth, from everyday digital practices to the undesired consequences of a highly connected environment. After more than 10 years of writing and multimedia reporting for some of the most relevant news outlets in Brazil, he is now a second-year Research Master’s student in Media Studies at the University of Amsterdam.

 

References

Berliner, Daniel. “The political origins of transparency.” The Journal of Politics 76.2 (2014): 479-491.

Borges-Rey, Eddy. “Data Journalism in Latin America: Community, Development and Contestation.” Data Journalism in the Global South. Palgrave Macmillan, Cham, 2019. 257-283.

Coddington, Mark. “Clarifying journalism’s quantitative turn: A typology for evaluating data journalism, computational journalism, and computer-assisted reporting.” Digital Journalism 3.3 (2015): 331-348.

De Maeyer, Juliette, et al. “Waiting for data journalism: A qualitative assessment of the anecdotal take-up of data journalism in French-speaking Belgium.” Digital Journalism 3.3 (2015): 432-446.

Fonseca, Bruno. Governo Bolsonaro acusa cidadãos de “pescarem” dados ao negar pedidos de informação pública. Agência Pública. 6 Feb, 2020. 

Michener, Gregory, Evelyn Contreras, and Irene Niskier. “From opacity to transparency? Evaluating access to information in Brazil five years later.” Revista de Administração Pública 52.4 (2018): 610-629.

Michener, Gregory, et al. “Googling the requester: Identity‐questing and discrimination in public service provision.” Governance (2019).

Valente, Jonas. “LAI: governo federal recebeu mais de 600 mil pedidos de informação”. Agência Brasil. May 16, 2018. 

Venturini, Lilian. “Se transparência é regra, por que é preciso mandar divulgar salários de juízes?”. Nexo Jornal. São Paulo, 3 Sept. 2017.

Wright, Kate, Rodrigo Zamith, and Saba Bebawi. “Data Journalism beyond Majority World Countries: Challenges and Opportunities.” Digital Journalism 7.9 (2019): 1295-1302.

[blog] The four (almost) invisible enemies in the first pandemic of the data society era

by Philip Di Salvo and Stefania Milan

originally published on Il Manifesto, 24 April 2020

Big Data and Covid. The pandemic is bringing to the surface phenomena and features of the data society which, in emergency circumstances like those of these weeks, risk turning into reality what until recently could be considered only extreme scenarios, unexpected consequences, or side effects

The global COVID-19 pandemic is the first to unfold on such a vast scale, and in such severe forms, at an advanced stage of the so-called “data society”. We find ourselves, in fact, at a watershed moment for our very understanding of what it means to live in an era in which human activity in virtually every domain is almost entirely transformed into data.

An extreme situation such as the one we are living through in these weeks of near-total lockdown inevitably reveals every shade of this phenomenon, from the most virtuous to the most potentially disquieting.

No event of comparable scale in recent history can compete with the current global pandemic in terms of defining the contemporary moment. We have to go back two decades, to September 11, 2001, to find another comparable moment of all-encompassing stress testing of the cultural assumptions and foundations of our society as a whole. 2001 and 2020, however, have few points of contact when it comes to technological ecosystems, digital infrastructures and, consequently, the social and political impacts of these technological arrangements.

The data society puts at its center the production of data and their use to create added value, from traffic management to the improvement of public services, from personalized digital advertising to contact tracing apps against COVID-19.

The paradox is that, even in normal circumstances, we ourselves generate most of these data, through our smartphones, credit cards, online shopping and social media, for example. The very large-scale monetization of data about our preferences and behaviors has generated the value on which companies such as Google and Amazon were built, companies whose greatest strengths, or monopolies, lie in analysis and prediction.

We citizens, however, also produce data when we turn to the public health system or simply walk through our cities, by now populated by a myriad of surveillance cameras and “intelligent” facial recognition systems. Many of these data then end up in private hands, even when they appear to be under the control of state entities: the servers are often managed by companies such as Accenture, IBM or Microsoft.

This variable geography of data, infrastructures, and public and private entities is a potentially explosive cocktail, above all because of its lack of transparency toward users and the risks it poses to individual and collective privacy. The data society is in fact also the cradle of what the American economist Shoshana Zuboff has called “surveillance capitalism”, whose engine is the commodification of personal information, even at the cost of reducing our capacity to act independently and make free choices. In other words, it is our very being citizens that changes, and not necessarily for the better.

The drama unfolding on multiple levels, from the human to the economic to the social, unleashed by the COVID-19 pandemic helps expose the darkest and most controversial sides of this system of data commodification.

The pandemic is indeed bringing to the surface phenomena and features of the data society which, in emergency circumstances like those of these weeks, risk turning into reality what, until recently, could be considered only extreme scenarios, unexpected consequences, or side effects.

Looking at the areas of interest to the social sciences, there are at least four different areas in which the pandemic is acting as an accelerator of potentially dangerous dynamics that had so far remained largely latent.

Setting aside any determinism, whether technological or epidemiological, at least four distinct tendencies have emerged so far: uncritical positivism, information disorder, vigilantism, and the normalization of surveillance. These are four enemies made almost invisible by the human drama of the pandemic, but which wound the collectivity almost as much as the virus does, and they are bound to have long-term consequences that are dangerous to say the least. Let us look at them one by one.

Uncritical positivism

The first invisible enemy is associated with a verb in common use, “to count”, an action that these days is rightly presented to us as an ally. “Let the numbers speak”, we often hear. Who does not hold their breath waiting for the Civil Protection tables announcing the number of dead, recovered and hospitalized? Counting, and even more counting ourselves, is for every society an important moment of self-awareness: just think of censuses, which play a crucial role in the definition of the nation state. What is more, counting has to do with the very essence of pandemics: large numbers.

As a rule, we tend to believe statistical data more than words, because we associate them with a sort of higher-order truth. This is a phenomenon also known as “dataism”, an ideology that places excessive faith in the solutionist and predictive power of data.

Faith in numbers has distant roots, which can be traced back to the days of nineteenth-century positivism, which postulated trust in science and in scientific-technological progress. In his “A Discourse on the Positive Spirit” (1844), the philosopher Auguste Comte explains how positivism puts “the real, as opposed to the chimerical” back at the center, and how it sets out to “oppose the precise to the vague”, presenting itself as “the contrary of the negative”, that is, identifying a proactive attitude of confidence in the future.

To be sure, putting concrete facts back at the center of the narrative of the virus and of the search for solutions can only be a good and right thing after a dark season for science, in which even vaccines were called into question. Unfortunately, however, faith in numbers is often misplaced because, as has often been said in these weeks, official data tend to tell a limited and often misleading portion of the pandemic reality.

Nonetheless, numbers and data are at the heart of the narrative of the virus. It is, however, a narrative that is not very accurate, often decontextualized, and no less anxiety-inducing for that. The result is an uncritical positivism that tends to ignore context and does not explain how the counting is done and why. Decisions involving entire nations are taken and justified on the basis of numbers that do not necessarily rest on reliable data.

Walmart’s Intelligent Retail Lab in the US – AP photo

Information disorder in the pandemic

The information context of a pandemic has been likened to an “infodemic”, an expression used first and foremost by the World Health Organization itself to define circumstances in which there is an overabundance of information, accurate or not, that makes it very hard to navigate the news or even just to tell reliable sources from unreliable ones. The pandemic is consequently also a particularly risky situation when it comes to the spread of various types of “information disorder”, such as the various forms of disinformation and misinformation.

In the COVID-19 infodemic, bad information has manifested itself in various ways. The Reuters Institute for the Study of Journalism (RISJ) at the University of Oxford published one of the first studies on the characteristics of the phenomenon in this pandemic, focusing on a sample of English-language news items vetted by fact-checking initiatives such as the non-profit network First Draft. The study, a first exploratory attempt to analyze the problem, shows that the sources of disinformation about the pandemic can be either “top-down” (when it is promoted by politicians or other public figures) or “bottom-up”, that is, when it originates with ordinary users.

While the first type accounts for 20% of the total sample analyzed by the RISJ, it is also true that top-down disinformation tends to generate far more buzz on social media than content produced from below. The RISJ also writes that the largest share of the misinformation that has emerged in these weeks consists of “reconfigured” content, that is, content modified in some of its parts. Only a minority (around 38%) consists of content invented entirely from scratch.

The scholar Thomas Rid, one of the world’s leading experts on disinformation campaigns in the field of national security (to whose history he has devoted a much-awaited, forthcoming book, “Active Measures”), has also noted in the New York Times that the pandemic may provide particularly fertile ground for potential “information warfare” operations aimed at creating confusion and tension in the public opinion of the affected countries, in the wake of what was seen in the US during the 2016 presidential election. Nor should we forget the misinformation that spills over into racism and feeds xenophobic impulses, such as the false news, circulated in various circles, that Africans are supposedly immune to the virus.

A Bitcoin data center in Virginia – AP photo

Vigilantism (digital and otherwise)

In these times, many runners have been harshly berated, and in some cases even physically assaulted, by fellow citizens irritated by the potential danger to public health that an individual out and about may represent.

People on their way to work have reported being subjected to insults of various kinds for not having “stayed home”. Countless videos have been uploaded to social media with the aim of denouncing those who had allegedly gone out for a stroll in disregard of the lockdown. This phenomenon is known in criminology and sociology as “vigilantism”, as the criminologist Les Johnston explained as early as 1996.

Vigilantism involves private citizens who voluntarily take on roles that are not theirs to take, such as policing the behavior of others and publicly denouncing their misdeeds, real or presumed. Through these actions in defense of social norms, the vigilante seeks to offer guarantees of safety to themselves and to others.

The advent of social media and mobile devices has fostered the large-scale spread of a “digital vigilantism” which, as the Erasmus University Rotterdam researcher Daniel Trottier explains, aims to attack and shame the rule-breaker through exposure to public ridicule, an exposure that is often long-lasting and disrespectful of other people’s privacy, and that fuels aggression and feelings of retaliation.

While the phenomenon is typical of historical moments in which the established order is at risk, or is perceived to be, its appearance and spread in the days of the Coronavirus emergency seems almost inevitable. COVID-19 digital vigilantism is, however, particularly risky, for at least two reasons. First of all, this need to hate “those who leave their homes” creates exclusion and social stigma, pointing the finger at and exposing individuals on the basis of purely visual clues that cannot discriminate between those who are actually breaking the rules and those who have a good reason to be out (for example, because they are going to work).

This extremely dangerous creation of “enemies of the people” results in serious psychological damage, from feelings of loneliness to incomprehension to the desire for retaliation, damage that will most likely outlive the Coronavirus emergency.

This phenomenon also ends up justifying similar transgressive behavior, on the basis of the flawed reasoning that “if others do it, I can do it too”. Secondly, vigilantism, digital or otherwise, divides the collectivity, with serious and lasting effects in terms of social divisions between the supposedly good and the bad, the deserving and the undeserving. It ends up undermining the much-needed narrative of a community united, strong precisely because of its unity, and capable of facing the emergency rationally, at the very moment when there is an extreme need to know that individual sacrifice feeds the collective effort.

Facial recognition systems in Germany – AP photo

Privacy and the normalization of surveillance

The pandemic has also rekindled the debate on the role of privacy in the data society and, in particular, in a health emergency context like that of these weeks. From many quarters, following the example, or the supposed “models”, offered by some variably democratic or undemocratic Asian countries such as China, Singapore and South Korea, there have been calls to adopt technological solutions of surveillance and digital monitoring of citizens, in various forms, in an attempt to slow down the spread of the virus.

In Europe too, several governments have started working on possible technical solutions and, overall, the debate has shifted toward the development of “deconfinement” applications that could exploit various smartphone features to perform “contact tracing”, that is, to monitor the social contacts of people who are infected or potentially exposed to outbreaks of contagion.

What these solutions have in common, in any case, are complex and dangerous repercussions in terms of rights, privacy and security, issues that are all too easy to lose sight of if one looks at technology through excessively determinist or solutionist lenses, or from the standpoint of the “uncritical positivism” described above.

It is difficult to summarize the polyphonic Italian debate over the “Immuni” app chosen by the government for this purpose, but numerous elements indicate that from the very beginning there was an attempt to cast privacy as an obstacle to the implementation of essential measures.

This was seen at its most extreme in France, the first country to officially ask Google and Apple to loosen their privacy protection measures in order to facilitate the adoption of contact tracing apps. In Italy, “Immuni” will move in the direction of a Bluetooth-based, decentralized approach, certainly less invasive than other options that have been on the table of the various government task forces, but some interesting indications emerge from the debate that has accompanied these decisions.

Although on paper this solution would appear less invasive than others, several questions of appropriateness remain open in this case as well. Claudio “Nex” Guarnieri, one of the world’s leading computer security experts, has commented on the various technical solutions put forward, reminding us that even Bluetooth offers no guarantees in terms of effectiveness.
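To give a sense of what a decentralized, Bluetooth-based approach means in practice, here is a deliberately simplified Python sketch of the general logic popularized by DP-3T-style protocols, broadly the family of designs Immuni moved toward. It is an illustration of the principle, not the actual Immuni implementation; all names and parameters are invented for the example.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class Phone:
    """Toy model of a handset in a decentralized contact-tracing scheme."""
    my_ids: list = field(default_factory=list)   # ephemeral IDs this phone has broadcast
    heard_ids: set = field(default_factory=set)  # ephemeral IDs observed from nearby phones

    def rotate_id(self) -> bytes:
        """Generate, remember and broadcast a new short-lived random identifier."""
        eid = secrets.token_bytes(16)
        self.my_ids.append(eid)
        return eid

    def observe(self, eid: bytes) -> None:
        """Record an identifier received from a nearby device (over Bluetooth in reality)."""
        self.heard_ids.add(eid)

    def check_exposure(self, published_positive_ids: set) -> bool:
        """Matching happens on the device: no central server learns who met whom."""
        return bool(self.heard_ids & published_positive_ids)

# Alice and Bob cross paths; Bob later tests positive and uploads only his own identifiers.
alice, bob = Phone(), Phone()
alice.observe(bob.rotate_id())           # their phones exchange ephemeral IDs
published = set(bob.my_ids)              # the health authority publishes Bob's IDs
print(alice.check_exposure(published))   # True: Alice learns of the exposure locally
```

The privacy argument rests on the published identifiers being random and frequently rotated, so they reveal encounters only to the phones that actually recorded them.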

The social sciences, along with various studies of journalism and surveillance, show that the “normalization” of surveillance is a frequent phenomenon in public debates on the subject. Something similar has been felt in the Italian and European debate in the midst of the pandemic: the concerns of experts (both technical and legal) have often been hastily dismissed as secondary issues, while a false dichotomy between privacy and the defense of public health has taken hold, as if the former invariably hindered the latter. In reality, as the writer Yuval Noah Harari, author of the acclaimed Homo Deus (2017), has also written in the Financial Times, framing the two as opposites is wrong, since citizens should not be asked to choose between two fundamental rights that by no means exclude each other.

The question to ask is: how many and which rights are we, and will we be, willing to give up, even only in part, and for what goals? An overly deterministic view of the potential of these technical solutions could also lead us to overestimate their actual capacity to help in this scenario.

Too often, moreover, the debate around privacy has been trivialized by dishonestly putting users’ online habits, often frivolous ones, on the same level as a state program for monitoring public health. Privacy is not dead, as has instead been claimed in many quarters, and however partially eroded it may be by the highly problematic commercial exploitation underway on the web, this debate cannot be reduced to a matter of individual choices, to be wiped out with a click.

The other question that remains open, finally, is that of the return to normality: once the emergency is over, how do we make sure that the tracing technologies and control infrastructures designed for times of crisis are actually switched off (and their data deleted)? The European Commission has also spoken clearly on this point, issuing a set of recommendations and a toolbox and calling on member states to take a pan-European approach to the defense of privacy and data protection, along with shared and as decentralized as possible technical standards.

The great absentee in the Italian scenario, however, remains a parliamentary debate on the issue (one is taking place in the Netherlands, for example, as we write), which would nonetheless be necessary to ensure democratic oversight, accountability, and respect for basic democratic norms and values in such delicate choices.

The antibodies

But how do we fight these four insidious enemies? Unfortunately, the solution is neither simple nor immediate. And there is no vaccine (nor will there ever be one) capable of magically immunizing the collectivity against uncritical positivism, information disorder, digital vigilantism and the normalization of surveillance. We can, however, work on the antibodies and make sure they spread as widely as possible in our communities. The data society needs critical and aware users, who know how to use and contextualize both digital and statistical tools, who understand the risks invariably associated with them but can also ride their potential benefits, and who can help the less digitized segments of the population navigate their digital presence.

A central role in this process is played by so-called “data literacy”, that is, digital literacy extended to the data society. Such literacy must take into consideration the question of citizenship in the era of big data and artificial intelligence, and must enable us to make informed choices about the contours of our action on the web, including the complex considerations involved in personal data protection.

It must help us distinguish between sources of information and find our way through the content personalization algorithms that undermine our free action on the web. The challenge is open but also particularly urgent, given that Italy ranks last among the 34 OECD (Organisation for Economic Co-operation and Development) countries when it comes to digital literacy. A recent (2019) OECD study revealed that only 36% of Italians are capable of making “complex and diversified use of the internet”, which creates fertile ground for the four enemies we have identified.

The world of education certainly has a key role to play, pairing a renewed education in online citizenship with the much-neglected civic education. This requires serious training of the teaching staff, but also dedicated funds for tools, infrastructure and preparation. It is, however, a medium- to long-term project, one that can hardly be carried out during the pandemic. The point not to lose sight of is that the “post-Coronavirus” world is being built right now, in the vortex of the pandemic.

The choices made today will inevitably shape the future scenarios of the data society. More than ever, these choices must be guided by an inclusive, transparent and honest approach, so that we do not find ourselves in a future dominated by technological “black boxes” that are opaque, discriminatory and potentially anti-democratic.

About the authors

Philip Di Salvo is a postdoctoral researcher and lecturer at the Institute of Media and Journalism of the Università della Svizzera italiana (USI) in Lugano. His work focuses on leaks, investigative journalism and Internet surveillance. “Leaks. Whistleblowing e hacking nell’età senza segreti” (LUISS University Press) is his latest book.

Stefania Milan is Associate Professor of New Media and Digital Culture at the University of Amsterdam, where she teaches courses on data journalism and digital activism and leads the DATACTIVE research project, funded by the European Research Council (Horizon 2020, Grant Agreement no. 639379).

[blog] The true cost of human rights witnessing

Author: Alexandra Elliott – Header image: Troll Patrol India, Amnesty Decoders

Witnessing is widely accepted as an established element of enforcing justice, and the recent increase in the accessibility of big data is revolutionizing this process. Data witnessing can now be conducted by remote actors using digital tools to code large amounts of information, a process exemplified by Amnesty International’s Amnesty Decoders. Gray presents an account of the Amnesty Decoders initiative and provides examples of their cases, such as “Decode Darfur” (977), in which volunteers successfully identified the destruction of villages during war by comparing before-and-after satellite imagery. A critical yet under-discussed consequence of this type of work is the significant mental toll of engaging with such an amount of confronting material. The nature of human rights exposés means witnesses are working with disturbing imagery, often depicting violence and devastation, which can lead to secondary trauma and must be managed accordingly.

This blog post should be read as an overview of completed research into the mental health effects of data witnessing and the initiatives that should be put in place to mitigate them. It concludes by highlighting Berkeley’s Investigations Lab as an example of the effective implementation of protective measures in human rights research. The text below presents, however, only the tip of the iceberg of detailed scholarship, and I recommend turning to the Human Rights Resilience Project for a more thorough inventory.

The Human Rights Resilience Project is an “interdisciplinary research initiative […] working to document, awareness-raising, and the development of culturally-sensitive training programs to promote well-being and resilience among human rights workers” (“Human Rights Resilience Project – NYU School Of Law – CHRGJ”). Whilst not undertaking any human rights witnessing itself, it functions as a toolbox for those who do. It provides an excellent example of bringing the issue to the forefront of discourse, advocating for the psychological risks of engaging in human rights witnessing to receive the attention their severity demands, so that both workers and institutions can prepare and manage accordingly.

Data Witnessing and Mental Health

We have reached a point in research at which the correlation between declining mental health and exposure to confronting material in data witnessing work is undeniable. There is a large body of papers available which evidence the harmful impact of this work on mental wellbeing within the human rights industry.

Dubberley, Griffin and Mert Bal’s research provides a clear overview of “the impact that viewing traumatic eyewitness media has upon the mental health of staff working for news, human rights and humanitarian organisations” (4). They introduce the notion of a “digital frontline” (5), as online data witnessing relocates the confrontation with graphic, disturbing material, previously encountered exclusively in the physical field, to an office desk far removed from the scene of the crime. 55% of the humanitarian workers and data witnesses observed in the research viewed distressing material at least weekly. Carried along with this shift is the psychological impact associated with engaging with disturbing content. The effects detected included that workers “developed a negative view of the world, feel isolated, experience flashbacks, nightmares and stress-related medical conditions” (5).

Over the past few years, a range of similar research has been undertaken, of which I present merely a selection, all confirming a correlation between human rights witnessing and poor mental health. Knuckey, Satterthwaite, and Brown list human rights work practices that contribute to fluctuating mental states: trauma exposure, a sensation of hopelessness, high standards and self-criticism, and inflexibility towards coping mechanisms. Similarly, Reiter and Koenig discuss the impacts of humanitarian research on workers’ mental health. Flores Morales et al. conducted a study of human rights defenders and journalists in Mexico, who are consistently exposed to traumatic content in their work; they detect strong levels of secondary traumatic stress symptoms amongst 36.4% of participants. Finally, in one of the earlier investigations into the concern, Joscelyne et al. surveyed international human rights workers to determine the consequences their work had on their psychological wellbeing. The results indicated levels of 19.4% for PTSD and 18.8% for subthreshold PTSD, while depression was present amongst 14.7% of workers surveyed. Shockingly, these proportions are very similar to those observed amongst combat veterans, reiterating the severity of the matter and emphasising the requirement for action.

A Call to Action

Several strands of the literature on the relationship between data witnessing and mental health focus on the initiatives currently adopted by organisations to identify, prevent and counteract trauma and depression amongst researchers, or propose new, potentially effective strategies.

Satterthwaite et al. is an example of a study that aims to map established techniques for recognizing and reacting to mental health concerns within human rights work. Ultimately it concludes that organisations’ current action is weak, and it suggests targeted training programmes and further academic discourse. Observations of negligence seem to form a trend, with Dubberley et al. also reporting a lack of protective processes in place amongst the majority of organisations studied. In what is dubbed a “tough up or get out” culture (7), humanitarian organisations deny proper recognition of the effects of trauma upon their researchers and thus offer no support or compensation. Additionally, new employees are not notified of how graphic their daily work material will be and are consequently inadequately prepared.

Acknowledging this gap in current support structures, academics have sought to develop strategies for detecting, preventing and reducing declining mental health amongst data witnesses. For instance, Reiter and Koenig’s “Challenges and Strategies for Researching Trauma” describes protective techniques that aim to strengthen resilience, e.g. explicitly acknowledging the psychological consequences and subsequently fostering a supportive workplace community.

Academics also urge the need for tools for self-care. Distinct from the pampering sessions and beauty treatments commonly affiliated with the term, here self-care practices are put to use to strengthen mental health. Pyles (2018) promotes self-care within the work of data witnessing for its ability to “cultivate the conditions that might allow them to feel more connected to themselves, their clients, colleagues and communities” (xix). This sense of community and grounding within a greater environment is important to counteract any feelings of isolation. Kanter and Sherman likewise encourage human rights organisations to adopt a “culture of self-care” to mitigate the risk of mental burnout, and Pigni’s book “The Idealist’s Survival Kit” was written to provide human rights researchers and witnesses with an arsenal of 75 self-care techniques.

As mentioned by Satterthwaite et al., it is important to acknowledge that the lack of mitigating practices in place may well be due to a lack of funding rather than an act of negligence. Dependency on external fundraisers introduces a complex network in which responsibility is distributed amongst a range of actors with varying motivations.

Berkeley: Leading by Example

The tendency for human rights organisations to neglect their workers’ mental wellbeing is fortunately not universal. There are instances of hiring counselors and enforcing regular breaks and rotations (Dubberley et al.), and one standout initiative is that of the Human Rights Center Investigations Lab at the University of California, Berkeley.

Following a similar format to the Amnesty Decoders, workers at the Investigations Lab “use social media and other publicly available, internet-based sources to develop evidence for advocacy and legal accountability” (“HRC Investigations Lab | Human Rights Center”). What sets the Lab apart is its dedication to “resiliency resources”, a programme of training and tools aiming to support the witnesses’ wellbeing. Upon orientation to the lab, workers receive resiliency training in which they are given small practical tips to avoid secondary trauma; “use post-its to block out graphic material when viewing a video repeatedly” (“Resiliency Resources | Human Rights Center”), for example. Additionally, they are encouraged to check in regularly with an allocated resiliency manager.

Concluding Thoughts

The material human rights witnesses engage with is horrific, and the protection of their mental health must be prioritized by the institutions for which they work. However, it is also important to remember the necessity of their work in detecting human rights violations and war crimes. The role of data witnessing is admirable and cannot simply be abandoned. Therefore the way forward is for human rights institutions to guarantee a support network of education, tools and community so that witnesses can continue to strengthen humanitarian action without detrimental personal consequences.

About the author

Alexandra grew up in Sydney, Australia, before moving to England to complete her Bachelor’s degree at Warwick University. She is currently undertaking a Research Master’s in Media Studies at the University of Amsterdam. It is through this course that she became involved with the Good Data tutorial and the DATACTIVE project.

References

Dubberley, Sam, Elizabeth Griffin, and Haluk Mert Bal. “Making secondary trauma a primary issue: A study of eyewitness media and vicarious trauma on the digital frontline.” Eyewitness Media Hub (2015).

Flores Morales, Rogelio et al. “Estrés Traumático Secundario (ETS) En Periodistas Mexicanos Y Defensores De Derechos Humanos”. Summa Psicológica, vol 13, no. 1, 2016, pp. 101-111. Summa Psicologica UST, doi:10.18774/448x.2016.13.290.

Gray, Jonathan. “Data Witnessing: Attending To Injustice With Data In Amnesty International’S Decoders Project”. Information, Communication & Society, vol 22, no. 7, 2019, pp. 971-991. Informa UK Limited, doi:10.1080/1369118x.2019.1573915.

“HRC Investigations Lab | Human Rights Center”. Humanrights.Berkeley.Edu, https://humanrights.berkeley.edu/students/hrc-investigations-lab.

“Human Rights Resilience Project – NYU School Of Law – CHRGJ”. Chrgj.Org, https://chrgj.org/focus-areas/human-rights-resilience-project/.

Joscelyne, Amy et al. “Mental Health Functioning In The Human Rights Field: Findings From An International Internet-Based Survey”. PLOS ONE, vol 10, no. 12, 2015, p. e0145188. Public Library Of Science (Plos), doi:10.1371/journal.pone.0145188.

Kanter, Beth, and Aliza Sherman. “Updating The Nonprofit Work Ethic”. Stanford Social Innovation Review, 2016, https://ssir.org/articles/entry/updating_the_nonprofit_work_ethic?utm_source=Enews&utm_medium=Email&utm_campaign=SSIR_Now&utm_content=Title

Knuckey, Sarah, Margaret Satterthwaite, and Adam Brown. “Trauma, depression, and burnout in the human rights field: Identifying barriers and pathways to resilient advocacy.” HRLR Online 2 (2018): 267.

Pigni, Alessandra. The Idealist’s Survival Kit: 75 Simple Ways to Avoid Burnout. Parallax Press, 2016.

Pyles, Loretta. Healing justice: Holistic self-care for change makers. Oxford University Press, 2018.

Reiter, Keramet, and Alexa Koenig. “Reiter And Koenig On Researching Trauma”. Www.Palgrave.Com, 2017, https://www.palgrave.com/gp/blogs/social-sciences/reiter-and-koenig-on-researching-trauma.

“Resiliency Resources | Human Rights Center”. Humanrights.Berkeley.Edu, https://humanrights.berkeley.edu/programs-projects/tech-human-rights-program/investigations-lab/resiliency-resources.

Satterthwaite, Margaret, et al. “From a Culture of Unwellness to Sustainable Advocacy: Organizational Responses to Mental Health Risks in the Human Rights Field.” S. Cal. Rev. L. & Soc. Just. 28 (2019): 443.

Image References

Berkeley. “Human Rights Investigations Lab: Where Facts Matter”. Human Rights Centre, https://humanrights.berkeley.edu/programs-projects/tech/investigations-lab.

Perpetual Media Group. “14 Things Marketers Should Never Do On Twitter”. Perpetual Media Group, https://www.perpetualmediagroup.ca/14-things-marketers-should-never-do-on-twitter/.

[BigDataSur] A widening data divide: COVID-19 and the Global South

COVID-19 shows the need for a global alliance of experts who can fast-track the capacity building of developing countries in the business of counting.

Stefania Milan & Emiliano Treré

The COVID-19 pandemic is sweeping the world. First identified in mainland China in December 2019, it has rapidly reached the four corners of the globe, to the point that the only “corona-free” land is reportedly Antarctica. News reports globally are filled with numbers and figures of various kinds. We count the number of tests, we follow the rising total of individuals who have tested positive for the virus, we mourn the dead looking at the daily death toll. These numbers are deeply embedded in their socio-economic and political geography, both because the virus follows distinct diffusion curves and because distinct countries and institutions count differently (and often these distinct ways of counting are not even made apparent). What is clear is that what gets counted exists, in both state policies and people’s imaginaries. Numbers affect our ability to care, share empathy, and donate to relief efforts and emergency services. Numbers are the condition of existence of the problem, and of a country or a given social reality, on the global map of concerns. Yet most countries of the so-called Global South are virtually absent from this number-based narration of the pandemic. Why, and with what consequences?

Data availability and statistical capacity in developing countries

If numbers are the condition of existence of the COVID-19 problem, we ought to pay attention to the actual (in)ability of many countries in the South to test their population for the virus, and to produce reliable population statistics more generally, let alone to adequately care for them. It is a matter of a “data gap” as well as of data quality, which even in “normal” times hampers “evidence-based policy making, tracking progress and development, and increasing government accountability” (Chen et al., 2013). And while the World Health Organization issues warnings about the “dramatic situation” concerning the spread of COVID-19 in the African continent, to name just one of the blind spots in our datasets of the global pandemic, the World Economic Forum calls for “flattening the curve” in developing countries. Progress has been made following the revision of the United Nations’ Millennium Development Goals in 2005, with countries in the Global South being invited (and supported) to devise National Strategies for the Development of Statistics. Yet a cursory look at the NYU GovLab’s valuable repository of “data collaboratives” addressing the COVID-19 pandemic reveals the virtual absence of data collection and monitoring projects in the Southern hemisphere. The next obvious step is the dangerous equation “no data = no problem”.

Disease and “whiteness”

Epidemiology and pharmacogenetics (i.e. the study of the genetic basis of how people respond to pharmaceuticals), to name but two of the life sciences concerned, are largely based on the “inclusion of white/Caucasians in studies and the exclusion of other ethnic groups” (Tutton, 2007). In other words, the modeling of disease evolution and the related solutions are based on datasets that take into account primarily, and in fact almost exclusively, the Caucasian population. This is a known problem in the field, which derives from the “assumption that a Black person could be thought of as being White”, dismissing specificities and differences. The problem has been linked to the “lack of social theory development, due mainly to the reluctance of epidemiologists to think about social mechanisms (e.g., racial exploitation)” (Muntaner, 1999, p. 121). While COVID-19 represents a slight variation on this trend, having been first identified in China, the problem at large remains, and in times of a health emergency as global as this one it risks being reinforced and perpetuated.

A lucrative market for the industry

In the absence of national testing capacity, the developing world might fall prey to the booming industry of genetic and disease testing, on the one hand, and of telecom-enabled population monitoring on the other. Private companies might be able to fill the gap left by the state, mapping populations at risk, while however monetizing their data. The case of 23andMe is symptomatic of this rise of industry-led testing, which constitutes a double-edged sword. On the one hand, private actors might supply key services that resource-poor or failing states are unable to provide. On the other hand, the distorted and often hidden agendas of profit-driven players reveal the shortcomings and dangers of this model. If we look at the telecom industry, we note how it has contributed to tracking disease propagation in a number of health emergencies such as Ebola. And while the global open data community has called for smoother data exchange between the private and the public sector to collectively address the spread of the virus, in the absence of adequate regulatory frameworks in the Global South, for example in the field of privacy and data retention, local authorities might fall prey to outside interventions of a dubious nature.

The populism and racism factors

The lack of reliable numbers to accurately portray the COVID-19 pandemic as it spreads to the Southern hemisphere also offers fertile ground for distorted and malicious narratives mobilized for political reasons. To name just one, it allows populist leaders like Brazil’s Jair Bolsonaro to announce the “return to normality” in the country, dismissing the harsh reality as collective “hysteria”. In Italy, the ‘fake news’ that migrant populations of African origin would be “immune” to the disease swept social media, unleashing racist comments and anti-migrant calls for action. While the same rumor has reportedly been circulating on the African continent, and populism has hit hard in Western democracies too, it might have more dramatic consequences in the more populous countries of the South. In Mexico, left-wing populist president Andrés Manuel López Obrador responded to the coronavirus emergency by insisting that Mexicans should “keep living life as usual”. He did not stop his tour in the south of the country and frequently contradicted the advice of public health officials, systematically ignoring social distancing by touching, hugging and kissing his supporters, and going as far as calling the pandemic a plot to derail his presidency. These dangerous comments, assumptions and attitudes are a byproduct of the lack of reliable data and testing that we highlight in this article.

The risk of universalising the problem

Luckily, long experience and hard-won familiarity with coping with disasters, catastrophes and emergencies have also prompted various countries of the Global South to deploy effective containment measures more quickly than many countries in the Global North.

In the absence of reliable data from the South, however, modeling the diffusion of the disease might be difficult. The temptation will likely be to “import” models and “appropriate” predictions from other countries and socio-economic realities, and then base domestic measures and policies on them. “Universalizing” the problem as well as the solutions, as we warned in a 2019 article, is tempting, especially in these times of global uncertainty. Universalizing entails erroneously thinking that the problem manifests itself in exactly the same manner everywhere, disregarding local features and “othering” local approaches. Coupled with the “whiteness” observed earlier, this gives rise to an explosive cocktail that is likely to create more problems than it solves.

Beyond the blind spot? 

While many have enough to worry about “at home”, the largest portion of the world’s population today resides in the so-called Global South, with all the very concrete challenges that this entails. For instance, for a good portion of the 1.3 billion Indian citizens now on lockdown, staying at home might mean starving. How can the global community–open data experts, researchers, life science scholars, digital rights activists, to name but a few–contribute to “fixing” the widening data divide that risks severely weakening any local effort to curb the expansion of COVID-19 among populations that are often already at the margins? We argue that the issue at stake here is not simply whether we pump in the much-needed resources or how we collaborate, but also where we turn our gaze–in other words, where we decide to look. COVID-19 will likely make apparent the need for a global alliance of experts of various kinds who, jointly with civil society organizations, can fast-track the capacity building of developing countries in the business of counting.

This article has been published simultaneously on the Big Data from the South blog and on Open Movements / Open Democracy.

Cover image credits: Martin Sanchez on Unsplash

Acknowledgements. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 639379-DATACTIVE; https://data-activism.net).

[BigDataSur] Cuba and its network ecosystem after the revolution

Authors: Yery Menéndez García and Jessica Domínguez

In Cuba, information, communication and data are “strategic resources of the state” [1] and a “matter of national security” [2]. In practice, but also in most of the country’s regulatory documents, state ownership of the nation’s symbolic capital is firmly established.

Added to this are levels of access to, and availability of, telematic network platforms considered among the lowest on the planet; significant international restrictions on access to infrastructure, financing, telecommunications circuits and connectivity; and the existence of programs that use ICTs to openly attempt to destabilize the Cuban government.

Against this background, and given the high price of connectivity, citizen groups have developed practices of information circulation adapted to a hybrid (offline-online) context. These initiatives are autonomous, decentralized and self-managed, and they try to meet everyday demands outside the mechanisms of the state. Some of the most relevant of the past ten years are:

  1. New alternative media

A group of young journalists who graduated from Cuban universities, together with other professionals, are using a set of socio-technical resources to generate alternative streams of information.

These new public-interest information platforms fill gaps left by the official media, the only ones allowed to exist. Some act as umbrella projects or repositories, hosting other citizen information initiatives.

For ten years, facing a lack of network access to address infrastructural issues, capacity building and access to sources, these initiatives have developed creative and innovative forms of management in line with the most recent global trends.

Despite this, the main source of funding for these projects continues to be donations and grants from international organizations. This remains the main line of attack used by government representatives to discredit them.

Among the most relevant and recognized are:

  • On Cuba, a platform in English and Spanish aimed above all at the Cuban emigrant community.
  • El Toque, a general-interest outlet focused mainly on young people and run by young people, which tells stories of citizenship. El Toque belongs to a larger group of “communication ventures” gathered within the Colectivo +Voces, which also includes a digital radio station called “El Enjambre” and a graphic humor supplement, Xel2.
  • Periodismo de Barrio, a magazine dedicated to environmental issues and social vulnerabilities.
  • El Estornudo, an outlet specializing in literary journalism.
  • Joven Cuba and La Tizza, both collaborative blogs that promote political debate.

All of these outlets rely on their online portals as their main form of distribution. But since the distribution of printed formats is prohibited by the Cuban penal code and online access is expensive, these outlets have had to innovate in how they interact with their communities. The main solution they have found is the creation of a database that is downloaded once a week. The downloaded database updates the sites’ mobile app, after which all of the content can be accessed offline.
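The general pattern behind this workflow can be illustrated with a minimal sketch, assuming a hypothetical bundle endpoint, file names and data layout; it is not any outlet’s actual app, only an illustration of downloading a weekly content database once and then serving articles locally without further connectivity.

  # Minimal sketch (hypothetical names): fetch this week's content bundle once,
  # then read every article from the local copy so no further connectivity is needed.
  import json, sqlite3, urllib.request

  BUNDLE_URL = "https://example.org/bundle/latest.json"   # hypothetical weekly endpoint
  DB_PATH = "offline_content.db"

  def sync_weekly_bundle():
      """Download the weekly bundle and refresh the local database."""
      with urllib.request.urlopen(BUNDLE_URL) as response:
          articles = json.load(response)                   # assumed list of {"id", "title", "body"}
      con = sqlite3.connect(DB_PATH)
      con.execute("CREATE TABLE IF NOT EXISTS articles (id TEXT PRIMARY KEY, title TEXT, body TEXT)")
      con.executemany("INSERT OR REPLACE INTO articles VALUES (:id, :title, :body)", articles)
      con.commit()
      con.close()

  def read_offline(article_id):
      """Read an article from the local copy, with no network access."""
      con = sqlite3.connect(DB_PATH)
      row = con.execute("SELECT title, body FROM articles WHERE id = ?", (article_id,)).fetchone()
      con.close()
      return row

The design choice worth noting is that connectivity is needed only once per week, at sync time; everything else happens against the local database.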

There is a clear difference between these outlets and the media openly opposed to the island’s government. The former focus on producing information outside the aegis of the ideological department of the Communist Party of Cuba, the structure in charge of regulating all symbolic production in the country, while the latter subordinate the information they produce to their political activism.

  2. El paquete semanal (the weekly package)

El paquete is a product-service that capitalizes on already developed social networks and extends them. Although the ultimate goal of this socio-technical expression is profit rather than the practice of citizenship, it is still worth understanding how these data networks interact with social networks and how they are socially produced.

Each week, around 1 terabyte of pirated content is compiled into the paquete. This content is downloaded from the internet at different nodes or hubs that everyone knows about but that remain hidden, like open secrets. Once downloaded, the content is handed to a group of people who in turn distribute it on removable drives to other citizens, and so on, for modest prices.

In this way, in a kind of snowball, Cubans gain access to an offline internet and keep up to date with everything happening in the world of information. The paquete’s contents range from films to advertising not allowed on official Cuban channels, and from music to databases from all kinds of other platforms. The paquete semanal is the main distribution channel for the outlets and magazines mentioned above, and for many others–religious, humorous and political–that have no other space in which to position themselves.

The best description of the paquete is that of a hybrid phenomenon of data socialization mediating social interactions that do not depend on algorithms. For Cuba’s semi-connected reality, the paquete semanal is today the most popular and affordable distribution resource. And although it is not legal, its networked character, its node-by-node and hand-to-hand distribution, and the quality of the curation and ranking of its contents make it impossible for the authorities to stop it completely.

  3. The Street Network (SNET)

SNET (the Street Network) was another popular experience of content distribution and community building which, unlike the paquete, was not for profit. In this network, connected by cables and Wi-Fi, its “members” began to group themselves into nodes across Havana in order to play online games. Over time, SNET grew and refined its structure and organization, reaching other provinces of the country. Its primary purpose shifted from being the space of the Cuban gamer community to becoming a scheme for generating connected, software-mediated practices of citizenship.

Despite being an illegal fabric, SNET developed a complex hierarchical system and well-established operating principles and ethics, deploying a level of network infrastructure never seen before outside the margins of the state.

Having become a genuine data activism movement, in 2019 the government tried to institutionalize it within the Jóvenes Clubs de Computación y Electrónica. This attempt to co-opt the initiative generated protests and public demonstrations that led the government, for the first time, to hold talks and reach a consensus with the representatives of the SNET nodes. Despite the agreements between the two parties, the network is today almost extinct.

  4. Citizen organizing on social media

In January 2019 a tornado struck Havana, devastating the Cuban capital’s already decrepit housing stock. After this natural disaster, a wave of organized citizens brought together Cubans on the island and abroad to provide help to those in need. Mobilizing mainly through Facebook, they created collaborative directories with the contacts of those willing to help, open databases with the names and demographic data of those most in need, and mapping initiatives to locate the places where the damage was greatest.

This initiative was driven mostly by young professionals and artists. The level of mobilization on display exceeded the capacities of the state, which once again tried to institutionalize the aid. In this case, the movement kept operating in parallel to state efforts and only wound down once most of those affected had received basic support kits.

  5. Commercial platforms

There is also an extensive network of collaborative commercial repositories, such as Revolico.com, which try to generate a dynamic alternative to the under-supplied official market. In these repositories, information about goods and services–which are exchanged for other goods and services–is created, managed, ranked, retrieved and shared, moderated by rules that the entire community using the platform must follow.

These communities of interpretation, creation and resistance to state-controlled information coexist in a situation of a-legality. In the face of a centralizing state, these new social relations of production, aimed at filling gaps of meaning that cannot be filled in any other way, whether mediated by algorithms or not, today represent increasingly articulated, popular and home-grown alternatives–and their survival depends entirely on that.

[1] Lineamientos de la política social del Estado (PCC, 2011, updated in 2016)

[2] Decreto Ley 370 del Ministerio de Información y Comunicaciones

Biography

Yery Menéndez García is a journalist and professor at the Faculty of Communication of the Universidad de La Habana. She holds an MA in Media Practice for Development and Social Change from the University of Sussex in the United Kingdom and is Audience Manager at the independent Cuban outlet El Toque.

[blog] Catching a Glimpse of the Elusive “Feminist Drone”

Author: Erinne Paisley

Introduction

Unmanned Aerial Vehicles (UAVs, or “drones”) are increasingly being used for military, governmental, commercial and personal purposes (Feigenbaum 267; Estrada 100). This rapid increase in drone use raises new questions about how the technology reinforces certain social and political inequalities within its own structure, function, and use. Scholars in the growing field of feminist internet studies are dedicated to understanding both which of society’s inequities are present in new technologies and which can be decreased through these same media. However, a clear picture of what a “feminist drone” can look like is still relatively elusive.

To paint a picture of how this new media form can be used to decrease gendered inequalities, we can look to two previous feminist drone projects: Droncita (Dronette) in Mexico and the “Abortion Drone” in Poland. Each of these UAV projects worked in its own way to expose the existing inequalities that are strengthened through typical drone use, and to counteract these forces by using the technology to fulfill feminist agendas. Droncita worked to address spatial inequalities, while the “Abortion Drone” aimed to expose and counteract legal inequalities. These cases offer a glimpse of the future of feminist drones and of the expanding field of feminist internet studies that supports them.

Mexico’s Droncita (Dronette)

Discrimination against women includes their exclusion from physical spaces, an exclusion that intersects with other forms of discrimination, including racial and economic ones. It ranges from workplaces to specific areas of cities with high risks of sexual assault and other forms of violence (Spain 137). Operating from the skies, drones are able to use their small aerial cameras to literally offer new opportunities for viewing and recording our political and social world. In this way, they can reimagine some of these spatially exclusionary forms of discrimination – as we can see with Droncita.

Droncita made her debut in Ecatepec, 20 km from Mexico City and the municipality of the metropolitan area with the highest rate of deaths presumed to be murder. In 2016, feminist protestors filled the main square in an attempt to draw attention to the state’s inadequate reaction to the increasing number of female deaths in the country. The activists worked together, using white paint, to cover the square’s ground. The message they were creating was only viewable by one activist in particular: Droncita.

The drone was created by the Rexiste collective, a project that began in opposition to the presidential election of Peña Nieto. Above the feminist activists, the drone now whirred, recording the emerging message. From Droncita’s point of view, the white paint clearly reads: “Femicide State”. By recording this message from the sky’s unclaimed public space, Droncita first draws attention, by contrast, to the gendered space of Ecatepec below. The drone’s recording highlights that the feminist protestors are still not fully free to create their message safely in this space. At the same time, Droncita reclaims the space, alongside the activists below, by completing their message and illustrating its take-over of the square.

Femicide is: “The killing of a woman or girl, in particular by a man and on account of her gender.”

Through its actions, Droncita uses “digital ethnography”, the linking of digital space with actual space, to intervene (Estrada 104). Droncita turns aerial space into public space, making violence against women, and its physical reality, more visible – ultimately holding the Mexican government accountable for its role in creating a space where women feel unsafe and face omissions of justice.

Poland’s “Abortion Drone”

Gendered and intersectional discrimination is also upheld globally through law. One of the most significant and ongoing examples is the legal restriction of women’s access to safe and affordable abortions. Women’s right to make decisions about their own bodies includes decisions regarding abortion, and yet this form of healthcare is still illegal in many countries. As of 2020, abortion is fully illegal in 27 countries (even if the pregnancy is due to rape or incest). This legal barrier does not mean that women stop getting abortions, but rather that they are forced to seek expensive and unsafe medical attention. According to the World Health Organization, approximately 25 million unsafe abortions occur annually worldwide, and over 7 million women in developing countries are admitted to hospital due to this lack of safe access.

This is where the “Abortion Drone” comes in. In 2015, on the German side of the border river facing Słubice, Poland, this drone prepared to make its first trip. On one side of the river, a group of women’s rights organizations and doctors prepared to fly the “quadcopter” across; on the other, a collection of pro-life protestors, journalists, and two women waited to swallow the abortion-inducing pills attached to the drone.

Although the journey lasted only about 60 seconds, the goal of the “Abortion Drone” was far-reaching. Within Poland, abortion is still illegal unless a woman’s life is categorized as being “in danger” or there is “evidence” of rape, incest or severe fetal abnormalities (O’Neil 2015). Because of these barriers, over 50,000 “underground abortions” are conducted each year – often using outdated and dangerous methods, and costing thousands of dollars (limiting the resource to those who can afford it). Not only are Poland’s legal barriers to women’s access to healthcare a threat to the safety of those within the country, they also represent the wider legal struggles of millions of women globally.

The collection of activists and doctors called Women on Waves explains: “The medicines used for a medical abortion, mifepristone and misoprostol, have been on the list of essential medicines of the World Health Organization since 2005 and are available in Germany and almost all other European countries.”

As the “Abortion Drone” takes off on its inaugural flight, there is nothing that those on the Polish side can do to legally stop the drone’s journey. The UAV weighs under 5kg and is not used for commercial purposes. Because of these features, the new technology is able to both make visible the legal barriers for women in Poland and counteract them.

The drone lands safely on the Polish side and the women ceremoniously swallow the pills. Soon after, the activists operating the drone on the German side have their equipment confiscated, but the drone’s work has already been successful. The “Abortion Drone” has illuminated the legal and sexist inequalities that exist with regard to women’s access to healthcare – and temporarily counteracted them.

Feminist Drones in the Future

Droncita and the “Abortion Drone” illustrate the potential of feminist drones to illuminate and counteract the spatial and legal inequalities that still exist for women and minorities today. The potential for feminist drones goes far beyond these two cases. As this article is published, feminist internet scholars are working to imagine other creative ways this new medium can join the global fight for equality. It is fair to say that this new member of the 21st-century feminist movement is becoming less elusive; in fact, if you look up, you might just catch a glimpse of it.

About the author

Erinne Paisley is a Research Master’s student in Media Studies at the University of Amsterdam and completed her BA at the University of Toronto in Peace, Conflict and Justice & Book and Media Studies. She is the author of three books on social media activism for youth with Orca Book Publishers.

Works Cited

Estrada, Marcela Suarez. “Feminist Politics, Drones and the Fight against the ‘Femicide State’ in Mexico.” International Journal of Gender, Science and Technology, vol. 9, no. 2, pp. 99–117.

Feigenbaum, Anna. “From Cyborg Feminism to Drone Feminism: Remembering Women’s Anti-Nuclear Activisms.” Feminist Theory, vol. 16, no. 3, Dec. 2015, pp. 265–88. DOI.org (Crossref), doi:10.1177/1464700115604132.

Feminist Internet. Feminist Internet: About. https://feministinternet.com/about/. Accessed 26 Feb. 2020.

Jones, Sam. “Paint Remover: Mexico Activists Attempt to Drone out Beleaguered President.” The Guardian, 15 Oct. 2015, https://www.theguardian.com/global-development/2015/oct/15/mexico-droncita-rexiste-collective-president-enrique-pena-nieto.

O’Neil, Lauren. “‘Abortion Drone’ Delivers Pregnancy-Terminating Pills to Women in Poland.” CBC News, 29 June 2015, https://www.cbc.ca/news/trending/abortion-drone-delivers-medication-to-women-in-poland-1.3132284.

Oxford University Dictionary. “Femicide.” Lexico, https://www.lexico.com/en/definition/femicide. Accessed 26 Feb. 2020.

Spain, Daphne. “Gendered Spaces and Women’s Status.” Sociological Theory, vol. 11, no. 2, July 1993, pp. 137–51.

Women on Waves. Abortion Drone; First Flight to Poland. https://www.womenonwaves.org/en/page/5636/abortion-drone–first-flight-to-poland. Accessed 26 Feb. 2020.

World Health Organization. Preventing Unsafe Abortion. 26 June 2019, https://www.who.int/news-room/fact-sheets/detail/preventing-unsafe-abortion.

World Population Review. Countries Where Abortion Is Illegal 2020. http://worldpopulationreview.com/countries/countries-where-abortion-is-illegal/. Accessed 26 Feb. 2020.

[blog] Show me the numbers: a case of impact communication in FLOSS

Author: Jeroen de Vos, header image by Ford Foundation

This blog post explores the potential of repurposing impact assessment tools to address funding problems in Free and Libre Open Source Software by making explicit the role such software plays in crucial public digital infrastructure. Two key concepts are relevant to this exploration. The first is Free and Libre Open Source Software (FLOSS) and the central role it plays in providing a common software infrastructure used by public and private organisations as well as civil society at large. The second is the notion of impact assessment as a strategy to understand, account for and communicate the results of your efforts beyond merely financial numbers.

‘Money talk is kind of a taboo in the F[L]OSS community’, one respondent told me in an interview I recently conducted at CCC’s 36C3. The talk he had just given outlined some tentative revenue models for making software development activities more sustainable; it attracted a larger-than-expected audience and interesting follow-up questions. FLOSS development draws heavily on the internal motivation of developers or a developer community, with recurring questions of sustainability when it relies on volunteered time that could be spent differently. The complexity of this situation cannot be overstated. The 2016 Ford Foundation report Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure (Eghbal) contextualizes some of the common problems in open-source software development – think, for instance, of the lack of appreciation for invisible labour, the emotional burden of keeping up a popular project once started, or the constant struggle to stay motivated while being structurally un- or underfunded.

The report draws on the metaphor of FLOSS as infrastructure: readily available to everyone, but also in need of maintenance. The metaphor has its limitations, but it works well to illustrate the point. Just as physical infrastructure supports the flow of ideas, goods and people, FLOSS operates at every level of digital infrastructure, whether we are talking about the NTP protocol synchronizing clocks across the internet, GnuPG (an encryption suite enabling secure communication and data sharing) or MySQL (a database system that quickly became a go-to standard for information storage and retrieval). Another commonality: as long as the infrastructure functions, its underlying support systems remain largely invisible. Up until the point of failure, it goes unseen to what extent private and public goods, services and communication rely on these software packages. Only at failure does it become painfully explicit.

The best-known recent example of such an escalation is the so-called Heartbleed bug. The FLOSS OpenSSL package implements the most widely used protocol for encrypting web traffic. Due to a bug that crept into the code sometime in 2011, attackers could intercept information from connections that should have been encrypted – which left large parts of online infrastructure insecure, including services like Google, Amazon and many others. The issue drew attention to the OpenSSL developers’ under-capacity: only one person worked on the project full time, for a salary around a third of that of colleagues at commercial counterparts. This is where impact assessment tools might come into play – rather than relying on controversies to make visible how widely embedded particular pieces of software are, and how much we depend on them, why not use impact assessment as a way to understand public relevance?

Conducting impact assessments can help communicate the necessity of maintenance by making visible the embeddedness of FLOSS packages – whether at the level of a language, an operating system or a protocol. To briefly contextualize: impact assessment grew out of changing management needs and has been adopted for organising ‘soft output’, whether in policymaking or social entrepreneurship. It is an interventionist tool that allows one to define qualitative outputs with subsequent quantitative proxies, in order to understand implementation results in relation to the desired outcomes described in a theory of change. It helps both to evaluate the social, technological, economic, environmental and political value created, and subsequently to make visible the extent to which obsolescence would disrupt existing public digital infrastructure.
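To make this concrete, here is a minimal sketch of what such quantitative proxies might look like for a single FLOSS package. The indicator names, weights and example figures are invented for illustration and do not correspond to any established assessment methodology; the point is only to show how a theory of change (“this package is widely embedded but fragile”) can be expressed in a few measurable terms.

  # Illustrative sketch only: turning a theory of change into quantitative proxies
  # for one FLOSS package. All numbers and weights are invented placeholders.
  from dataclasses import dataclass

  @dataclass
  class PackageStats:
      dependent_projects: int      # how many other projects declare this package as a dependency
      monthly_downloads: int       # rough adoption proxy
      active_maintainers: int      # people with commit access who were active this year
      top_maintainer_share: float  # fraction of commits by the single busiest maintainer

  def impact_proxies(stats: PackageStats) -> dict:
      """Derive two headline indicators: reach (embeddedness) and fragility (bus factor)."""
      reach = stats.dependent_projects * 0.7 + (stats.monthly_downloads / 10_000) * 0.3
      fragility = stats.top_maintainer_share / max(stats.active_maintainers, 1)
      return {"reach": round(reach, 1), "fragility": round(fragility, 2)}

  # Example: a widely embedded package maintained almost single-handedly
  print(impact_proxies(PackageStats(4_200, 900_000, 2, 0.85)))

Even a toy exercise like this makes the OpenSSL story legible at a glance: very high reach combined with very high fragility is exactly the profile that public communication of FLOSS impact should surface before a failure does.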

Without going into too much detail, it is worth mentioning that impact assessment has already made its way into reporting deliverables to funders where relevant. Part of this exercise, however, is to instrumentalize impact assessment not only for the (private) reporting of projects already funded, but for (publicly) communicating FLOSS impact, especially for projects without the necessary revenue streams in place. Needless to say, this output is only one step in the process of making crucial FLOSS more sustainable, but it is an important one: assessment output might help tap into public or private sponsorship, establish new collaborations with governments, educators and businesses alike, and venture into other new and exciting funding models.

This piece is meant as a conversation starter. Do you already know of existing strategies to help communicate FLOSS impact? Are you involved in creating alternative business models for for-good public digital infrastructure? Ideas and comments are welcome. Email: jeroen@data-activism.net

As a short disclaimer: I have been working with social enterprises on market research and impact-first business models, and I have been mulling over the crossover between social entrepreneurship and (FLOSS) activism – their common struggle for sustainability, their reliance on informal networks or communities of action, and their attempts to make social change either from within or from the outside. This blog post is an attempt to think social entrepreneurship and data activism together through a use case: impact assessment for FLOSS.

References:

Eghbal, N. (2016). Roads and bridges: The unseen labor behind our digital infrastructure. Ford Foundation.

[BigDataSur] The Global South could nationalize its data

Author: Ulises Alí Mejías

(An English version of this article appeared in Al Jazeera in December 2019. The original post was published in Spanish; it is translated here.)

Introduction

Big tech companies are extracting data from their users all over the world without paying them for it. It is time to change this situation.

Abstract

Big tech corporations are extracting data from users across the world without paying for it. This process can be called “data colonialism”: a new resource-grab whereby human life itself has become a direct input into economic production. Instead of solutions that seek to solve the problem by paying individuals for their data, it makes much more sense for countries to take advantage of their scale and take the bold step to declare data a national resource, nationalise it, and demand that companies like Facebook and Google pay for using this resource so its exploitation primarily benefits the citizens of that country.

Data nationalization

The recent coup in Bolivia reminds us that countries that are poor, yet rich in natural resources, continue to be plagued by the legacy of colonialism. Any initiative that threatens to obstruct foreign companies’ ability to extract resources cheaply risks being swiftly eliminated.

Today, apart from the minerals and oil that abound in some corners of the continent, companies are pursuing another kind of resource, one that is perhaps more valuable: personal data. Like natural resources, personal data have become the target of extractive operations carried out by the technology sector.

As the sociologist Nick Couldry and I have argued in our book, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism (Stanford University Press), a new kind of colonialism is emerging in today’s world: data colonialism. With this term we want to suggest that we are witnessing a new wave of resource appropriation in which human life itself, expressed in the data extracted from users, becomes a direct input into economic production.

We recognize that this concept may prove controversial, given the extreme physical violence and the still-present structures of historical colonial racism. But we do not mean that data colonialism is the same as historical colonialism. Rather, that the essential function of colonialism is exactly the same. That function was, and remains, the extraction, exploitation and appropriation of our resources.

Like classic colonialism, data colonialism violently transforms social relations into elements of economic production. Elements such as land, water and other natural resources were valued by indigenous peoples in the precolonial era, but not in the way colonizers, and later capitalists, came to value them, namely as private property. In the same way, we are living in a situation in which things that used to lie outside the economic sphere, such as private interactions with our friends and family, or our medical records, have now been privatized and turned into part of the economic cycle of data extraction. A cycle that clearly benefits, above all, a handful of large companies.

But what can the countries of this “Global South” do to avoid the exploitation of data colonialism?

Solutions for the Global South

One obvious option for these countries would be to embrace proposals like those of the writer Jaron Lanier and US presidential candidate Andrew Yang, who have suggested that each of us should be remunerated for the data we produce through some compensation mechanism. But these neoliberal proposals, which seek to solve the problem at the individual level, can at the same time dilute the value of the aggregated resource. If we approach the problem this way, payments to users will be hard to calculate, and probably very small.

Instead, it makes much more sense for the countries of the Global South to take advantage of their size and position on the international stage and take the bold step of declaring the data generated by their citizens a national resource, demanding that companies like Facebook or Google pay to use this resource. That way, the main beneficiaries of the use of personal data would be precisely the citizens who produce them.

Let’s do some back-of-the-envelope math using Mexico as an example: Facebook has 54.6 million users in the country. On average, each user worldwide generates about $25 a year in revenue for Facebook, which means the company pockets roughly $1.4 billion thanks to Mexicans. Now suppose Mexico nationalized its data and therefore demanded to keep a substantial share of that sum. And suppose, since we are doing this exercise, that similar arrangements were applied at the same time to companies like Google, Amazon, TikTok, and so on.
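For transparency, the arithmetic behind that figure can be spelled out in a few lines; the user count and the $25 average revenue per user come from the paragraph above, while the royalty shares in the sketch are purely hypothetical.

  # Back-of-the-envelope arithmetic from the paragraph above (royalty shares are hypothetical)
  users_mx = 54.6e6          # Facebook users in Mexico, as cited in the text
  revenue_per_user = 25.0    # average annual revenue per user in USD, as cited in the text
  total = users_mx * revenue_per_user
  print(f"Estimated annual revenue from Mexican users: ${total / 1e9:.2f} billion")  # ~ $1.37 billion

  for share in (0.25, 0.50, 0.75):   # hypothetical shares a data-royalty scheme might claim
      print(f"  at a {share:.0%} royalty: ${total * share / 1e9:.2f} billion per year")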

With billions of dollars recovered through data nationalization, the Mexican government could invest in areas such as health, education, or the migration crisis the country is currently going through.

One thing is certain, however: any attempt by the countries of the Global South to nationalize data would face intense opposition. Mexico nationalized its oil in 1938, through an action by President Lázaro Cárdenas, today considered a national hero, which infuriated foreign companies. This resulted in an immediate boycott by the United States, the United Kingdom, the Netherlands and other countries. Mexico only escaped that situation thanks to the eventual outbreak of the Second World War.

There is also the example of Chile. In the 1970s, Salvador Allende threatened to nationalize the telephone sector (then controlled by the US company International Telephone & Telegraph), as well as other industries. Before this could be carried out, the CIA organized a coup d’état in 1973 that ended with Allende’s death and a dictatorship that would last until 1990.

And Evo Morales, who experimented with soft forms of nationalization that benefited Bolivia’s poorest sectors while keeping foreign investors moderately satisfied, has now been forced out of his country. It did not help his cause that Morales, in a controversial move, amended the constitution so that he could run for the presidency again after serving the two terms already allowed by Bolivian law.

Whatever the case, the right wing in Bolivia and in the United States is today celebrating what some see as an interesting development in the struggle for control of minerals such as lithium and indium, which are essential for the production of electronic devices.

Even if the countries that decided to nationalize their data survived the expected retaliation, data nationalization would not put an end to the root of the problem: the normalization and legitimization of data extraction that is already underway.

The future of data nationalization

Data nationalization will not necessarily stop the colonization the region is experiencing. For that reason, it is a measure that must be thought of and understood as a limited response to a larger problem. This is why data nationalization must have as its ultimate goal the decoupling of the Global South’s economy from this new kind of colonialism.

The recovered wealth could also be used to develop public infrastructures offering less invasive or exploitative versions of the services provided by the big technology companies of China and the United States. Some of these alternatives may seem hard to imagine today, but models already exist that the Global South could adopt to develop services that respect individual privacy and do not abuse the human desire to socialize.

To avoid corruption and mismanagement, civil society will have to be directly involved in decisions about the future of this wealth, including the ability to block abusive applications and uses of citizen-generated data by foreign companies. It is, after all, their data, and it is the public that must have a seat at the table when it is decided how these resources can be used.

The proposal to nationalize data, however unattainable and impractical it may seem, at least forces us to question the data extraction that continues unchallenged, sometimes under the pretext that it is a kind of progress that benefits us all.