Author: Jeroen

[BigDataSur-COVID] Living with Instalive in Iran: Social Media Use in Authoritarian Countries during the Pandemic

Header photo references: Vahid Salemi – Associated Press. LA Times.

Authors: Hossein Kermani and Maria Faust

This article argues that Instalive is not only a channel for personal communication; it also functions as an alternative TV in restrictive contexts with limited access to free and independent media. Therefore, in this article, we briefly discuss how and to what extent Iranians use Instalive to produce, share, and access content that is prohibited in Iranian official media, thus resisting the political and social structures.

COVID-19 has become the new ‘normal’: it has substantially altered the lives of people across the world and led to new forms of (online) communication. One of the most drastic changes to everyday life has been the increase in digital and especially social media use, which brought both opportunities and challenges. Of an estimated 4.5 billion internet users worldwide, social media users make up 3.8 billion – almost 85% of all internet users – and since the surge of the pandemic, daily social media use has risen from 75 to 82 minutes. Social distancing and lockdowns during spring 2020 led people to search for new forms of connection through video chat. For example, as of 2020 TikTok has 1.5 billion users worldwide; its quarterly downloads increased by more than 100 million, from 199.4 million in Q4 2019 to 315 million in Q1 2020. At the same time – and this certainly challenges democratic ideals – both Douyin, the authoritarian version for mainland China, and TikTok share the censorship practices of their parent company ByteDance, even more so in times of COVID-19. Instalive, Instagram’s feature for live video, has meanwhile surged as an alternative digital video format across the world. While it is contributing to changes in people’s communicative practices in general, it is of particular importance in authoritarian contexts such as Iran.

Social and digital context in Iran

In Iran, one of the Middle Eastern countries most severely affected by COVID-19, social media use has also significantly increased. Iranian users have been keen to adopt social media into their everyday life, from Yahoo Messenger to Twitter and Instagram. To a great extent, social media has played a crucial role in contemporary political protests in Iran (e.g., the 2009 Green Movement). The popularity of social media in Iran can be attributed to the fact that these platforms provide dissident Iranians, traditionally deprived of access to mainstream media, with a space to share their own content with networks of others. That is probably why Iranians have continued to use social media through circumvention tools, despite the state’s disincentives, such as filtering and hardliners’ denunciations and denial.

In 2020, following Telegram and WhatsApp, Instagram is the third most used social media platform in Iran, with roughly 24 million users out of 84 million inhabitants. Instalive has become more and more popular, especially amongst Persian women. During lockdown, Instagram use arguably increased by about 31%, with some users even using it four times as much as before. Thus, Instalive has affected people’s communicative practices. For instance, many Iranians who had never experienced online video chat before started using Instalive to chat with their families and friends. Besides such personal impacts, Instalive works as an alternative TV for Iranians, providing an opportunity for both media practitioners and ordinary citizens to broadcast and consume their favored content, which they likely cannot access through national and official media.

Islamic Republic of Iran Broadcasting (IRIB) vs. Instalive as an Alternative TV in Iran

Not only are all official media severely controlled, but establishing and operating private television channels and radio stations is prohibited in Iran. The IRIB, Iran’s national radio and television organization, which has the exclusive right to broadcast audio-visual content on a national scale, is managed directly by state authorities. It mainly broadcasts state-aligned content in order to underpin the state’s hegemonic discourse. As a result, many dissident Iranians, including political activists and public figures such as singers and cinema stars (i.e., media practitioners), have no chance of appearing in its programs. Moreover, other Iranian audiences have been deprived of watching their favorite programs and figures on air. For instance, the IRIB, as well as other official media, is banned from mentioning Mohammad Khatami, Iran’s former reformist president. It is argued that there is some form of blacklist of individuals who must never be shown on Iran’s national TV, such as Mohammad Reza Shajarian, the most famous Iranian singer.

In this restrictive context, both camps, i.e., media practitioners and ordinary citizens, perceive Instalive as an alternative platform that can serve their need for a form of free audio-visual media. They use this feature to broadcast and consume forbidden content. Instalive was employed by some of the presidential campaigns to promote their candidates during the 2017 election, but its use intensified and diversified during the lockdown. For example, Hassan Rouhani, the president of Iran, used Instalive to broadcast his presidential speeches, arguing that the IRIB had refused to do so publicly. Now, many Iranian celebrities and microcelebrities use this platform to broadcast sexual and socio-cultural content that largely runs against Islamic rules.

While no program about sex is allowed on the IRIB (not even educational programs or erotic scenes), the Tataloo–Neda Yassi Instalive broke the world record for Instalive viewers with more than 630k viewers. Tataloo is an Iranian rapper who had more than 4 million followers at that time and is well known for his bizarre and unconventional behavior. However, his page was closed by Instagram over accusations of child grooming. Yassi is an Instagram microcelebrity famous for her sexual posts and stories. Their Instalive was the perfect representation of everything forbidden in Iran regarding sex: Tataloo and Yassi talked about having (group) sex together, explained sexual behavior, and promoted having free relations with many sexual partners. Given that marriage is the only acceptable form of sexual relationship in Iran, and all other forms of sexual behavior are forbidden and punishable, this Instalive crossed all of the state’s so-called ‘red lines’ as well as the IRIB’s policies.
Nevertheless, not all popular Instalives during the COVID-19 period were as offensive, in the sense of the state’s discourse, as the former example. Consider, for instance, the case of the Aghamiri–Ahlam Instalive. Hassan Aghamiri is a cleric, and Ahlam is an Iranian female singer; they effectively represent two hostile camps in Iran. Women’s singing in public is forbidden in Iran by all means, and it is categorically not allowed on the IRIB. This restriction is allegedly grounded in Islamic rules, and clerics are considered the representatives of Islam. Clerics are expected to condemn female singing and not to be seen with female singers anywhere, as has been the case since the Islamic revolution. In this respect, the Instalive disobeyed the state’s Islamic rules, playing with socio-cultural structures in Iran. A cleric co-singing with a female singer, as happened in that live show, is something that would never be seen on the IRIB.

These two examples clearly show how Iranians use Instalive to overcome the state’s and the IRIB’s restrictions and limitations. Employing Instalive, media practitioners produce content based on what is forbidden on the IRIB. Such content has gained much attention from Iranians who have been deprived of watching such programs on national TV.

What’s next? Instalive after COVID-19

Instalive has introduced new opportunities for citizens in closed societies to share and discuss sensitive content. While Iranians use Twitter to discuss politics, they use Instagram to play with the state’s harsh cultural and social restrictions. In this way, Iranians undermine the power of the state’s exclusive TV as they perform or watch Instalives. On the other hand, the state has clearly shown its intention to censor and control Instalives. Ali Zolghadri, the deputy chief of Tehran’s security police, stated that the police would crack down on those who display obscene behavior or utter norm-breaking sentences on Instalives. Despite such harsh statements, Iranians keep finding alternative spaces to circumvent the state’s hard limitations. This time, they have transformed Instalive into a kind of national TV. Whether its use increases or decreases in the days after the COVID-19 crisis, it can be assumed that Instalive will remain a serious competitor to the IRIB, challenging its exclusivity and hegemony in more ways than we have seen so far.


About the authors:

1st author: Hossein Kermani, PhD in Social Communication Science (University of Tehran), is studying social media, digital repression, computational propaganda, and political activism in Iran. His research mainly revolves around the discursive power of social media in making meaning, shaping practices, changing the microphysics of power and playing with the political, cultural and social structures in Iran.

2nd author: Maria Faust, M.A. in Communication/Media Studies and Culture Studies, is a Doctoral Candidate at Leipzig University, Germany. She has published articles, a special section and two edited books on digital media, authoritarian contexts, temporal change, de-Westernization, the Global South(s), theory-building and visual culture(s) and continues her work in these fields.

[BigDataSur-COVID] COVID-19 in the UK: The Exacerbation of Inequality and a Digitally-Based Response

Authors: Massimo Ragnedda and Maria Laura Ruiu

The COVID-19 crisis has highlighted existing forms of socio-economic inequality across the world’s Souths. This article illustrates the reinforcement of such inequalities in the United Kingdom, showing the heightened vulnerability of minorities and marginalised citizens and proposing a response based on tackling digital inequalities.

The consequences of the COVID-19 outbreak in social, economic, psychological and health terms are still under evaluation, as the effects of the containment measures could last for years. However, one thing seems quite clear: vulnerable people and vulnerable communities are those who suffer the most from this outbreak. This is not surprising, since both social and medical studies have repeatedly shown an interaction between social environment and health status.

In this article, we focus specifically on the UK (even though similar arguments could be applied to other countries in the Global North), where some social groups are suffering more than others from the outbreak. People from Black, Asian and minority ethnic (BAME) backgrounds, as well as elderly and marginalised citizens, are affected the most by the pandemic. The COVID-19 crisis has, indeed, deepened inequality by exposing more vulnerable groups to higher risks of experiencing the most severe symptoms of the disease.

BAME communities are the most affected in the UK

Although there is no link between genetic predisposition to the virus and racial groups, BAME communities are the most affected in the UK. There are, indeed, several social factors behind the higher frequency of death from coronavirus among vulnerable communities in the UK. First, BAME communities make up a large share of the professions considered indispensable in tackling the virus. Many of these jobs are public-facing, so these workers are potentially more exposed to the virus. Second, on average, BAME groups in Western societies suffer from generally worse health conditions due to longstanding social inequalities. The link between socioeconomic factors and health status is well known. An emphasis on social aspects, rather than on a biological or genetic predisposition, underlines that the ways in which societies are organised tend to penalise already disadvantaged communities and citizens, thereby further reinforcing social inequalities.

Digital inequalities exacerbate social and health inequalities

There is also a third, often under-appreciated element that helps explain why socially discriminated people suffer the most from the COVID-19 pandemic: digital inequalities. The COVID-19 outbreak has shown, among many other things, how digital skills, high-speed internet, and reliable hardware and software are essential conditions not only for social wellbeing but also for everyday life. The digital divide, in fact, involves not only uneven access to resources and knowledge, but also limited human connections and unequal access to opportunities and health services. Being digitally excluded also affects the ability to manage coronavirus-related drawbacks. At a time when one-third of the world population is locked down, exclusion from the digital arena also means potential exclusion from essential online services, such as health services, e-learning, accurate and trustworthy information about COVID-19, and the purchase of essentials online. In a collective academic effort, we developed COVID-19 exposure risk profiles (CERPs), which show that “All else equal, individuals who can more effectively digitize key parts of their lives enjoy better CERPs than individuals who cannot digitize these life realms”.

The digital divide is a real and serious threat that requires tangible and future-proof solutions. Tackling inequalities therefore means providing not only access to ICTs, but also the skills and literacy needed to ensure an adequate digital experience. In the UK, for instance, 11.7 million people lack essential digital skills. This figure suggests that 22% of the UK population have difficulty accessing online information and updates about COVID-19. Both elderly people (especially from ethnic minorities) and people with disabilities tend to have limited capacity to access ICTs and only elementary digital competence. This suggests that self-isolation might be particularly challenging for them, because the lack of access and skills contributes to increasing loneliness by limiting people’s contact with relatives and friends.

Tackling digital inequalities to reduce COVID-19’s effects

Access to the Internet is a new civil right and a public utility. In this sense, bridging the digital divide means treating Internet access as an essential service. For this reason, during the COVID-19 pandemic several public and private initiatives around the world were promoted to tackle the digital divide and support digitally excluded people. In the UK, for instance, organisations such as FutureDotNow launched an initiative named #DevicesDotNow that provides digital devices to help the most vulnerable access the digital realm. The purpose of this initiative is to enable the digitally excluded to access the online services, support and information needed during the pandemic. However, access alone is not enough to ensure a successful digital experience. At a time when more than 11 million people in the UK lack digital skills, the possession of devices alone does not guarantee digital inclusion. For this reason, the Good Things Foundation created a suite of resources to support the most vulnerable in using the Internet during the COVID-19 pandemic, including help in finding trustworthy health advice from reliable sources and more practical guidance, such as how to use apps to call their doctors. Furthermore, the literature shows that giving the most vulnerable people the possibility to keep in touch with family and friends (especially when they are required to stay at home) helps reduce the negative effects of social isolation and loneliness (NHS 2019).

Evidently, digital inequalities cannot be bridged overnight, but these initiatives are particularly useful in times of crisis. More specifically, they are useful in tackling digital inequalities because they look at the digital divide not only in terms of inequalities in access (the first level of the digital divide) but also in terms of uneven digital skills (the second level) and the uneven tangible outcomes people get from accessing and using the Internet (the third level). By providing devices to access ICTs, these initiatives address the first level, while providing basic digital skills tackles the second level. Overall, these initiatives also reduce the third level of the digital divide by enabling tangible and externally measurable outcomes (calling doctors, shopping and banking online) that improve people’s life chances. The COVID-19 crisis thus shows the importance of bridging digital inequalities to facilitate social relationships, global functions and interconnections, and ordinary activities.

Lessons learned

Social and digital inequalities have shown that specific subgroups are significantly more vulnerable to exposure to COVID-19 than their privileged counterparts. The crisis shows that social and digital solutions can be quickly implemented when necessary, but they need continuity to be effective. The lesson learned concerns the ability of policymakers to provide long-term strategies (as well as emergency plans) to tackle social and digital challenges. During a moment of crisis, the effects produced by social inclusion and digital-enhancing initiatives can only be limited, because it is impossible to tackle all the different levels of social and digital inequalities at the same time. In times of crisis, only a select group of people will be able to access the benefits provided by the emergency solutions in place. It is therefore necessary to consider social and digital inequalities as part of the same problem and to promote initiatives that foster social and digital equity.

In conclusion, we reiterate what we have argued throughout this article: the COVID-19 virus does not discriminate, but it exacerbates existing social discrimination. It cannot be identified as the sole cause of inequalities, but it brings to light the vulnerability of our unequal societies. COVID-19 might teach us that, despite our capacity for social adaptation, being socially and digitally equipped can help mitigate the effects of global crises.

About the authors

Massimo Ragnedda, PhD, is a Senior Lecturer in Mass Communication at Northumbria University, UK, where he conducts research on the digital divide and digital media.

Maria Laura Ruiu obtained her second PhD from Northumbria University, UK where she is Lecturer in Sociology. Her research interests fall into environmental and media sociology with specific focus on climate change communication, social capital and digital media.

[BigDataSur] “WhatsApper-ing” by itself will not fix Brazil’s political disorder

This article reflects on the role of “WhatsAppers”, defined as social activists who appropriate WhatsApp as their main platform for organizing and communicating, in relation to the rise of Bolsonarismo in Brazil. The affordances of social actors’ use of WhatsApp are explored in light of responses to Bolsonarismo, together with their implications for the current moment of crisis.

By Sérgio Barbosa


The research presented here explores the affordances of WhatsApp and its appropriation by WhatsAppers in Brazil, defined here as social activists who adopt WhatsApp as their main platform to organize and communicate. I explore the importance of the Global South context in shaping these affordances, focusing on local epistemologies that have outgrown the structure of Brazil’s traditional media. As mentioned elsewhere, the empirical analysis combined different qualitative methods, providing insights into the communication and action repertoire of the group under study, not to mention reflections on research ethics and its implications in the studied context.

WhatsAppers: Towards a new research agenda

This research stems from an analysis of the social interactions of UnidosContraOGolpe (UCG), a group formed by leftist activists in Brazil and organized in a WhatsApp “private group”. The UCG emerged in 2016 to oppose the “coup” that removed president Dilma Rousseff from power. The case study resulted in the first empirical dissertation in Latin America to investigate digital activism in private WhatsApp groups as an emerging field of political action. To this end, a ‘meso-micro’ analysis was used: at the meso level, to identify the modus operandi of interactions in the group, and at the micro level, to capture individual motivations, tensions and expectations. At the heart of the investigation, the researcher’s identity was disclosed while following the social actors through their intimate chat environment. The research adopted an “engaged” approach, whose aim is to give voice to social actors. In practical terms, a triangulation of qualitative methods was applied, including digital ethnography (to identify and analyse the practices of social actors within the chat domain, through a long “zoom” perspective on the group’s social interactions), content analysis of selected messages (to understand how the group emerged organically and self-organized in a contingent manner), and fifteen in-depth semi-structured interviews (to draw out participants’ values and motivations).

This dissertation argues that WhatsAppers are characterized by their ability to appropriate the private chat group as a means of also participating in political life. Engagement with political activism becomes an intimate and familiar affair, mediated by a personal and ubiquitous device, which allows for a unique approach to mobilization. Broadly speaking, anyone can be a WhatsApper, including those who were not previously politically active. A WhatsApper may or may not be someone already embedded in other social networks of politics and mobilization; they may come from the poor, middle or upper classes. In other words, WhatsAppers interact digitally with others, combining online and offline political action. In light of digital sociology, the case study reveals that WhatsApp stands out as a platform for civic engagement, fostering new spaces for digital activism for three main reasons: the chat app (1) offers structurally new forms of political participation and collective engagement, (2) creates communities of mutual interest, and (3) fosters collective decision-making and small-scale individual autonomous action. There are, however, drawbacks and limits: bots can influence WhatsApp conversations, fake users can invade public and private groups, and group members can be threatened by surveillance attacks.

Bolsonarismo: at the heart of Brazil’s political crisis

In 2019, the first year of Jair Bolsonaro’s government, Brazil registered record deforestation and a drop to zero in the enforcement of environmental fines. Bolsonaro appointed a minister of human rights who became known for preaching sexual abstinence as state policy. The president’s sons are under investigation for crimes and corruption. Bolsonaro also appointed a secretary of culture who echoed Nazi propaganda. In addition, every week the Brazilian “anti-president” openly attacks the press, and he was recently ranked the worst political leader in fighting the coronavirus pandemic.

The political scenario in which Bolsonarismo emerges is widely recognized as a reflection of a crisis of political representation and participation, and of widespread disbelief in traditional parties. Bolsonarismo can be understood as “a political phenomenon that transcends the figure of Bolsonaro and is characterized by an ultraconservative worldview, a return to traditional values, and nationalist and patriotic rhetoric”. Against this backdrop, an urgent question must be asked: what is really happening to Brazilian democracy?

Looking back, looking forward

Brazil is an extremely unequal country along multiple dimensions, including Internet access. Part of the semi-literate population gathers its information almost exclusively through visual messages, audio clips and videos from thousands of WhatsApp groups, thanks to the “zero-rated mobile data” offered by telecom companies, which replaced SMS text messaging. The Latin American context is an excellent testing ground for the study of social interactions on WhatsApp, since “96% of Brazilians with access to a smartphone use WhatsApp as their main vehicle of interpersonal communication”. According to the Reuters Institute, 53% of Brazilians use “ZapZap” (as the app is commonly known in the country) to find and consume news. Ordinary citizens also use “ZapZap” to order pizza, keep in touch with family, transfer money, book medical appointments, learn, gossip, share porn videos and date.

While the UCG’s WhatsAppers were issuing calls for political action, far-right activists were organizing themselves in public and private WhatsApp groups and beyond, also combining online and offline activities. Progressive sectors were unable to build a national digital campaign, with rare exceptions such as small local initiatives like the UCG. Consequently, the potential of digital activism on instant messaging apps was later “weaponized” by far-right groups that not only appropriated public and private WhatsApp groups but also used “zap” as a “bridge” to other social media. Digital information became a “weapon” that is still used uncontrollably today by Bolsonaro’s supporters, exploiting WhatsApp’s high penetration in Brazil and facilitated by the population’s low digital literacy. Indeed, Bolsonaro ran a successful electoral campaign in 2018 based on a combination of bottom-up authoritarianism and digital populism. His supporters were helped by bots to disseminate misleading content and thus “weaponized” numerous WhatsApp groups.

COVID-19: creative WhatsAppers at the margins

This case has important implications for the ongoing political crisis. Brazilian citizens are currently bombarded with COVID-19-related disinformation and face a chaotic picture, while far-right activists have occupied large spaces in digital networks since the 2018 elections. Moreover, there are lessons to be learned from the failure to stop Bolsonarismo’s digital army, namely: sending messages that ordinary citizens can trust. In the current crisis, Brazilians increasingly behave as consumers rather than citizens, preferring the market to science – perhaps this is exactly the gap that is steering our country towards thousands of deaths during the coronavirus pandemic.

Brazil’s traditional media keeps debating who might be the ideal presidential candidate for the 2022 elections. A deeper question, however, is whether democratic values will still be upheld by then. The composition of Bolsonaro’s government reminds us that Brazil’s young democracy is now more capitalist, colonialist and patriarchal, and is heading towards a dangerous and irresponsible political adventure whose outcomes are unpredictable. During the pandemic, social distancing, hand-washing, sanitizers, masks, ventilators and city lockdowns are privileges of the Global North, while in the South many do not even have access to these minimal services.

As the title suggests, using WhatsApp merely to chat and communicate will not fix Brazil’s political disorder, but perhaps creative WhatsAppers can provide a spark for creating national-transnational solidarity networks. In other words: high-speed decision-making and participatory practices to deliver groceries, collect money, produce masks, share scientific information, mobilize against COVID-19-related disinformation, reach poor families and fight for emerging democratic scenarios. The UCG reveals a highly articulated internal communication strategy for connecting and activating networks of social solidarity that foster hope, especially because it exposes the battlefield of political struggle by enabling shared scientific information, civic engagement, collective mobilization and empathy. Finally, the coordination of online activities combined with street action by WhatsAppers reinvents digital activism in pandemic times.

About the author

Sérgio Barbosa is a doctoral candidate in the programme “Democracy in the Twenty-First Century” at the Centre for Social Studies (CES), University of Coimbra, and a Sylff fellow funded by the Tokyo Foundation for Policy Research. He is a member of Technopolitics, a network of researchers linking Brazil and Ecuador with Spain, Portugal and Italy. His research explores emerging forms of political participation vis-à-vis the possibilities offered by instant messaging apps, with an emphasis on WhatsApp for digital activism and social mobilization.


The author thanks Silvia Masiero for her careful review (and beyond) and wishes to thank Charlotth Back and Jeroen de Vos for their comments and suggestions. He also thanks Stefania Milan and Emiliano Treré for launching the BigDataSur initiative. This article received funding from the Japanese Sylff Fund (Ryoichi Sasakawa Young Leaders Fellowship Fund).


Stefania on first pandemic of the datafied society @ZeMKI & @Milano Digital Week

On May 27th 2020, as part of the Milano Digital Week, Philip Di Salvo, Daniele Gambetta and Stefania Milan discussed the complex triadic relationship between health, new technologies and privacy: between big data, contact tracing and the pandemic.

Watch the entire conversation here (in Italian)


On June 3rd 2020, Stefania presented “from the first pandemic of the datafied society” at the Online Research Seminar series of ZeMKI, University of Bremen.


[BigDataSur] The Challenge of Decolonizing Big Data through Citizen Data Audits [3/3]


Author: Katherine Reilly, Simon Fraser University, School of Communication

Data Stewardship through Citizen Centered Data Audits

In my previous two posts (the first & the second), I talked about the nature of data audits and how they might be applied by citizens. Audits, I explained, check whether people are carrying out practices according to established standards or criteria, with the goal of ensuring the effective use of resources. As citizens we have many tools at our disposal to audit companies, but when we audit companies according to their own criteria, we risk losing sight of our own needs as communities. The question addressed by this post is how to do data audits from a citizen point of view.

Thinking about data as a resource is a first step in changing our perspective on data audits. Our current data regime is an extractive data regime. As I explained in my first post, in the current regime, governments accept the central audit criteria of businesses, and on top of this, they establish the minimal protections necessary to ensure a steady flow of personal data to those same corporate actors.

I would like to suggest that we rethink our data regime in terms of data stewardship. The term ‘stewardship’ is usually applied to the natural environment. A forest might be governed by a stewardship plan which lays out the rights and responsibilities of resource use. Stewardship implies a plan for the management of those resources, both so that they can be sustained, and also so that everyone can enjoy them.

If the raw material produced by the data forest is our personal information, then we are the trees, and we are being harvested. Our data stewardship regime is organized to support that process, and audits are the means to enforce it. The main beneficiaries of the current data stewardship regime are companies who harvest and process our data. Our own benefits – our right to walk through the forest and enjoy the birds, or our right to profit from the forest materially – are not contemplated in the current stewardship regime.

It is tempting to conclude that audits are to blame, but really, evaluation is an agnostic concept. What matters are the criteria – the standards to which we hold corporate actors. If we change the standards of the data regime, then we change the system. We can introduce principles of stewardship that reflect the needs of community members. To do this, we need to start from the audit criteria that represent the localized concerns of situated peoples.

To this end, I have started a new project in collaboration with 5 fellow data justice organizations in 5 countries in Latin America: Derechos Digitales in Chile, Karisma in Colombia, TEDIC in Paraguay, HiperDerecho in Peru and ObservaTIC in Uruguay. We will also enjoy the technical support of Sula Batsu in Costa Rica.

Our focus will be on identifying alternative starting points for data audits. We won’t start from the law, or the technology, or corporate policy. Instead, we will start from people’s lived experiences, and use these as a basis to establish criteria for auditing corporate use of personal data.

We will work with small groups who share a common identity and/or experience, and who are directly affected by corporate use of their personal data. For example, people with chronic health issues have a stake in how personal data, loyalty programs and platform delivery services mediate their relationship with pharmacies and pharmaceutical companies. The project will identify community collaborators who are interested in working with us to establish alternative criteria for evaluating those companies.

Our emerging methodology will use a funnel-like approach, starting from broad discussions about the nature of data, passing through explorations of personal practices and the role of data in them, and then landing on more specific and detailed explorations of specific moments or processes in which people share their personal data.

Once the group has learned something about the reality of data in their daily lives – and in particular the instances where data is of particular concern to them – we will facilitate group activities that help them identify their data needs, as well as the behaviors that would satisfy those needs. An example of a data need might be “I need to feel valued as a person and as a woman when I interact with the pharmacy.” A statement of how that need might be satisfied could be, for example, “I would feel more valued as a person and as a woman if the company changed its data collection categories.”

We are particularly interested to think through the application of community criteria to companies who have grown in power and influence during the Covid-19 pandemic. Companies like Instacart, SkipTheDishes, Rappi, Zoom, and Amazon are uniquely empowered to control urban distribution chains that affect the welfare of millions. What do community members require from these companies in terms of their data practices, and how would they fare against an audit based on those criteria?

We find inspiration for alternative audit criteria in data advocacy projects that have been covered by DATACTIVE’s Big Data from the South Blog. For example, the First Nations Information Governance Centre (FNIGC) of Canada has established the principles of ownership, control, access and possession for the management of First Nations data, and New Zealand has adopted Maori knowledge protocols for information systems used in primary health care provision (as reported by Anna Carlson). Meanwhile, the Mexican organization Controla tu Gobierno argues that we need to view data “less as a commodity – which is the narrative that constantly tries to make us understand data as the new oil – and more as a source of meaning” (Guillen Torres and Mayli Sepulveda, 2017).

From examples like these, and given the concept of data stewardship, we can begin to see that data is only as valuable as the criteria used to assess it, and so we urgently need alternative criteria that reflect the desires, needs and rights of communities.

How would corporate actors fare in an audit based on these alternative criteria? How would such a process reposition the value of data within the community? Who should carry out these evaluative processes, and how can they work together to create a more equitable data stewardship regime that better serves the needs of communities?

By answering these questions, we can move past creating data literate subjects for the existing data stewardship regime. Instead, we can open space for discussion about how we actually want our data resources to be used. In a recent Guardian piece, Stephanie Hare argued that “The GDPR protects data. To protect people, we need a bill of rights, one that protects our civil liberties in the age of AI.” The content of that bill of rights requires careful contemplation. Citizen data audits allow us to think creatively about how data stewardship regimes can serve the needs of communities, and from there we can build out the legal frameworks to protect those rights.


About the author: Dr. Katherine Reilly is Associate Professor in the School of Communication at Simon Fraser University in Vancouver, Canada. She is the recipient of a SSHRC Partnership Grant and an International Development Research Centre grant to explore citizen data audit methodologies alongside Derechos Digitales in Chile, Fundacion Karisma in Colombia, Sula Batsu in Costa Rica, TEDIC in Paraguay, HiperDerecho in Peru, and ObservaTIC in Uruguay.

WomenonWeb censored in Spain as reported by Magma

Author: Vasilis Ververis

The Magma project just published new research on censorship concerning Women on Web, a non-profit organization providing support to women and pregnant people. The article describes how the major ISPs in Spain are blocking Women on Web’s website. Spanish ISPs have been blocking this website by means of DNS manipulation, TCP resets, and HTTP blocking with the use of Deep Packet Inspection (DPI) infrastructure. Our data analysis is based on network measurements from OONI data. This is the first time that we observe Women on Web being blocked in Spain.
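The DNS-manipulation finding rests on a simple idea used in censorship measurement: compare the addresses an ISP's resolver returns for a domain against the answers from a trusted control resolver. The function below is a minimal illustrative sketch of that comparison logic, not OONI's or Magma's actual implementation; the IP addresses in the example are invented.

```python
def dns_consistency(measured_ips, control_ips):
    """Classify a DNS measurement as consistent or anomalous.

    measured_ips: addresses returned by the ISP's resolver under test
    control_ips:  addresses returned by a trusted control resolver
    A block page served from an ISP-controlled address typically shares
    no addresses with the control answer.
    """
    measured, control = set(measured_ips), set(control_ips)
    if not measured:
        return "anomalous: empty answer (possible NXDOMAIN injection)"
    if measured & control:
        return "consistent"
    return "anomalous: answers disjoint from control (possible DNS manipulation)"

# Hypothetical case: the ISP resolver points the domain at a private
# block-page address instead of the site's real hosts.
print(dns_consistency(["10.137.4.1"], ["104.26.2.79", "104.26.3.79"]))
```

Real-world tests (including OONI's web connectivity test) add further signals, such as matching autonomous system numbers and comparing fetched page bodies, since CDNs can legitimately return different addresses to different networks.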

About Magma: Magma aims to build a scalable, reproducible, standard methodology on measuring, documenting and circumventing internet censorship, information controls, internet blackouts and surveillance in a way that will be streamlined and used in practice by researchers, front-line activists, field-workers, human rights defenders, organizations and journalists.

About the author: Vasilis Ververis is a research associate with DATACTIVE and a practitioner of the principles ~ undo / rebuild ~ the current centralization model of the internet. Their research deals with internet censorship and investigation of collateral damage via information controls and surveillance. Some recent affiliations: Humboldt-Universität zu Berlin, Germany; Universidade Estadual do Piaui, Brazil; University Institute of Lisbon, Portugal.

[BigDataSur] The Challenge of Decolonizing Big Data through Citizen Data Audits [2/3]


A First Attempt at Citizen Data Audits

Author: Katherine Reilly, Simon Fraser University, School of Communication

In the first post in this series, I explained that audits are used to check whether people are carrying out practices according to established standards or criteria. They are meant to ensure effective use of resources. Corporations audit their internal processes to make sure that they comply with corporate policy, while governments audit corporations to make sure that they comply with the law.

There is no reason why citizens or watchdogs can’t carry out audits as well. In fact, data privacy laws include some interesting frameworks that can facilitate this type of work. In particular, the EU’s General Data Protection Regulation (GDPR) gives you the right to know how corporations are using your personal data, and also the ability to access the personal data that companies hold about you. This right is reproduced in the privacy legislation of many countries around the world, from Canada and Chile to Costa Rica and Peru, to name just a few.

With this in mind, several years ago the Citizen Lab at the University of Toronto set up a website called Access My Info which helps people access the personal data that companies hold about them. Access My Info was set up as an experiment, so the site only includes a fixed roster of Canadian telecommunications companies, fitness trackers, and dating apps. It walks users through the process of submitting a personal data request to one of these companies, and then tracks whether the companies respond. The goal of this project was to crowdsource insights from citizens that would help researchers learn what companies know about their clients, how companies manage personal data, and who companies share data with. The results of this work have been used to advocate for changes to digital privacy laws.
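The request-generation step that Access My Info automates amounts to filling a letter template with the legal basis and the requester's details. The sketch below is a hypothetical illustration of that idea; the wording, the company name, and the function are invented here and are not the site's actual template, though GDPR Article 15 is the EU's real right-of-access provision (other jurisdictions would cite their own laws).

```python
def access_request(company, requester, law="GDPR Article 15"):
    """Compose a minimal personal-data access request letter."""
    return (
        f"To the Data Protection Officer of {company}:\n"
        f"Under {law}, I request a copy of all personal data you hold about me, "
        f"the purposes for which it is processed, and the third parties "
        f"with whom it has been shared.\n"
        f"Signed, {requester}"
    )

# Hypothetical company and requester names, for illustration only.
print(access_request("ExampleTelecom", "A. Citizen"))
```

A tool like Access My Info then tracks whether and how each company responds, which is where the crowdsourced research value comes from.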

Using this model as a starting point, in 2019, my team at SFU, and a team from the Peruvian digital rights advocate HiperDerecho, set up a website called SonMisDatos (Son Mis Datos translates as “It’s My Data”.) Son Mis Datos riffed on the open source platform developed by Access My Info, but made several important modifications. In particular, HiperDerecho’s Director, Miguel Morachimo, made the site database-driven so that it was easier to update the roster of corporate actors or their contact details. Miguel also decided to focus on companies that have a more direct material impact on the daily lives of Peruvians – such as gas stations, grocery stores and pharmacies. These companies have loyalty programs that are involved in collecting personal data about users.

Then we took things one step further. We used SonMisDatos to organize citizen data audits of Peruvian companies. HiperDerecho mobilized a team of people who work on digital rights in Peru, and we brought them together at two workshops. At the first workshop, we taught participants about their rights under Peru’s personal data protection laws, introduced SonMisDatos, and asked everyone to use the site to ask companies for access to their personal data. Companies need time to fulfill those requests, so then we waited for two months. At our second workshop, participants reported back on the results of their data requests, and then I shared a series of techniques for auditing companies on the basis of the personal data people had been able to access.

Our audit techniques explored the quality of the data provided, corporate compliance with data laws, how responsive companies were to data requests, the quality of their informed consent process, and several other factors. My favorite audit technique reflected a special feature of the data protection laws of Peru. In that country, companies are required to register databases of personal information with a state entity. The registry, which is published online, includes lists of companies, the titles of their databases, as well as the categories of data collected by each database. (The government does not collect the contents of the databases, it only registers their existence.)

With this information, our auditors were able to verify whether the data they got back from corporate actors was complete and accurate. In one case, the registry told us that a pharmaceutical company was collecting data about whether clients had children. However, in response to an access request, the company only provided lists of purchases organized by date, SKU number, quantity and price. Our auditors were really bothered by this discovery, because it suggested that the company was making inferences about clients without telling them. Participants wondered how the company was using these inferences, and whether it might affect pricing, customer experience, access to coupons, or the like.
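The registry-based check reduces to a set comparison: the categories a company registered with the state, minus the categories present in its response to an access request, are the undisclosed ones. A minimal sketch of that comparison (the category names are invented, modeled loosely on the pharmacy case above):

```python
def undisclosed_categories(registered, disclosed):
    """Return registered data categories missing from an access-request response."""
    return sorted(set(registered) - set(disclosed))

# Hypothetical audit: the public registry lists a "has children" category
# that the company's response to the access request omits.
registered = ["name", "national ID", "purchases", "has children"]
disclosed = ["name", "national ID", "purchases"]
print(undisclosed_categories(registered, disclosed))  # ['has children']
```

The gap itself does not prove wrongdoing, but it gives auditors a concrete, documentable question to put to the company.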

In another case, one of our auditors subscribed to DirecTV. To complete this process, he needed to provide his cell phone number plus his national ID number. He later realized that he had accidentally typed in the wrong ID number, because he began receiving cell phone spam addressed to another person. This was exciting, because it allowed us to learn which companies were buying personal data from DirecTV. It also demonstrated that DirecTV was doing a poor job of managing their customers’ privacy and security! However, during the audit we also looked back at DirecTV’s terms of service. We discovered that they were completely up front about their intention to sell personal information to advertisers. Our auditors were sheepish about not reading the terms of the deal, but they also felt it was wrong that they had no option but to accept these terms if they wanted to access the service.

On the basis of this experience, we wrote a guidebook that explains how to use Son Mis Datos, and how to carry out an audit on the basis of the ‘access’ provisions in personal data laws. The guide helps users think through questions like: Is the data complete, precise, unmodified, timely, accessible, machine-readable, non-discriminatory, and free? Has this company respected your data rights? What does the company’s response to your data request suggest about its data use and data management practices?

We learned a great deal from conducting these audits! We know, for instance, that the more specific the request, the more data a company provides. If you ask a company for “all of the personal data you hold about me” you will get less data than if you ask for “all of my personal information, all of my IP data, all of my mousing behaviour data, all of my transaction data, etc.”

Our experiments with citizen data audits also allow us to make claims about how companies define the term “personal data.” Often companies define personal data very narrowly to mean registration information (name, address, phone number, identification number, etc.). This stands in stark contrast to the academic definition of personal data, which is any information that can lead to the identification of an individual person. In the age of big data, that means pretty much any digital traces you produce while logged in. Observations like these allow us to open up larger discussions about corporate data use practices, which helps to build citizen data literacy.

However, we were disappointed to discover that our citizen data audits worked to validate a data regime that is organized around the expropriation of resources from our communities. In my first blog post I explained that the 5 criteria driving data audits are profitability, risk, consent, security and privacy.

Since our audit originated with the law, with technology, and with corporate practices, we ended up using the audit criteria established by businesses and governments to assess corporate data practices. And this meant that we were checking to see if they were using our personal and community resources according to policies and laws that drive an efficient expropriation of those very same resources!

The concept of privacy was particularly difficult to escape. The idea that personal data must be private has been ingrained into all of us, so much so that the notion of pooled data or community data falls outside the popular imagination.

As a result, we felt that our citizen data audits did other people’s data audit work for them. We became watchdogs in the service of government oversight offices. We became the backers of corporate efficiencies. I’ve got nothing personal against watchdogs — they do important work — but what if the laws and policies aren’t worth protecting?

We have struggled greatly with the question of how to generate a conversation that moves beyond established parameters, and that situates our work in the community. With this in mind, we’ve begun to explore alternative approaches to thinking about and carrying out citizen data audits. That’s the subject of the final post in this series.


About the author: Dr. Katherine Reilly is Associate Professor in the School of Communication at Simon Fraser University in Vancouver, Canada. She is the recipient of a SSHRC Partnership Grant and an International Development Research Centre grant to explore citizen data audit methodologies alongside Derechos Digitales in Chile, Fundacion Karisma in Colombia, Sula Batsu in Costa Rica, TEDIC in Paraguay, HiperDerecho in Peru, and ObservaTIC in Uruguay.

[blogpost] Thinking Outside the Black-Box: The Case for ‘Algorithmic Sovereignty’ in Social Media

Urbano Reviglio, Ph.D. candidate at the University of Bologna, in collaboration with Claudio Agosti, just published a new academic article on algorithmic sovereignty in Social Media + Society (SAGE). Find an extended abstract below, and the full paper here.

Every day, algorithms update a profile of “who you are” based on your past preferences, activities, networks and behaviours in order to make future-oriented predictions and suggest news (e.g. Facebook and Twitter), videos (e.g. YouTube), movies (e.g. Netflix), songs (e.g. Spotify), products (e.g. Amazon) and, of course, ads. These algorithms define the boundaries of your Internet experience, affecting, steering and nudging your information consumption, your preferences, and even your personal relations.

Two paradigmatic (and likely most influential) examples clarify the importance of this process. On Facebook, you encounter 350 posts on average, prioritized out of about 1,500. As such, you are exposed to only about 25% of the available information, while roughly 75% remains hidden. It is Facebook’s newsfeed algorithm that is choosing for you. And it is rather good at that. Think also of YouTube: its recommendations already drive more than 70% of the time you spend on the platform, meaning you are mostly “choosing” within a pre-determined set of possibilities. In fact, 90% of the ‘related content’ on the right side of the website is already personalized for you. Yet this process occurs largely beyond your control, and it is mostly based on implicit personalization — behavioural data collected from subconscious activity (i.e. clicks, time spent, etc.) — rather than on deliberate and expressed preferences. Worryingly, this might become the default in future personalization, essentially because you may be well satisfied without further questioning the process. Do you really think the personalization that recommends what you read and watch is indeed the best you could experience?

Personalization is not what mainstream social media platforms narrate it to be. There are a number of fundamental assumptions that are nowadays shared by most researchers, and these need clarification. Profiling technologies that enable personalization create a kind of knowledge about you that is inherently probabilistic. Personalization, however, is not exactly ‘personal’. Profiling is indeed a matter of pattern recognition, which is comparable to categorization, generalization and stereotyping. Algorithms cannot capture the complexity of who you are. They can, however, influence your sense of self. As such, profiling algorithms can trivialize your preferences and, at the same time, steer you to conform to the status quo of actions chosen by ‘past selves’, narrowing your “aspirational self.” They can limit the diversity of information you are exposed to, and they can ultimately perpetuate existing inequalities. In other words, they can limit your informational self-determination. So, how can you fully trust proprietary algorithms that are designed for ‘engagement optimization’ — to keep you glued to the screen as much as possible — and not explicitly designed for your personal growth and society’s cohesion?

One of the most concerning problems is that personalization algorithms are increasingly ‘addictive by design’. Human behavior can indeed be easily manipulated by priming and conditioning, using rewards and punishments. Algorithms can autonomously explore manipulative strategies that are detrimental to you. For example, they can use techniques (e.g. A/B testing) to experiment with various messages until they find the versions that best exploit your vulnerabilities. Compulsion loops are already found in a wide range of social media. Research suggests that such loops work via variable-rate reinforcement, in which rewards are delivered unpredictably — after n actions, a certain reward is given, as in slot machines. This unpredictability affects the brain’s dopamine pathways in ways that magnify rewards. You think you liked that post… but you may have been manipulated into liking it after several boring posts, delivered with perfect timing. Consider that just dozens of Facebook Likes can reveal useful and highly accurate correlations; hundreds of Likes can predict your personality better than your mother can, research suggests. This can easily be exploited — for example, if you are vulnerable to moral outrage. Researchers have found that each word of moral outrage added to a tweet raises the retweet rate by 17%. Algorithms know this, and could feed you the “right” content at the right time.
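The variable-rate (variable-ratio) schedule described above can be simulated in a few lines: a reward arrives after an unpredictable number of actions, so no individual action can be ruled out as the one that pays off. This is an illustrative behavioral model only, not any platform's actual code; the parameters are invented.

```python
import random

def variable_ratio_schedule(n_actions, mean_ratio=5, seed=42):
    """Simulate a variable-ratio reinforcement schedule.

    Each action is rewarded with probability 1/mean_ratio, so rewards
    arrive on average once every `mean_ratio` actions but at
    unpredictable moments -- the pattern used by slot machines and,
    arguably, by engagement-optimized feeds.
    """
    rng = random.Random(seed)
    return [rng.random() < 1 / mean_ratio for _ in range(n_actions)]

rewards = variable_ratio_schedule(100)
print(sum(rewards), "rewards in 100 actions")  # roughly 100/5 = 20, timed unpredictably
```

The contrast with a fixed-ratio schedule (a reward exactly every fifth action) is the point: under the variable schedule the user cannot learn when to stop, which is what makes the loop compulsive.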

As a matter of fact, personalization systems deeply affect public opinion, and more often than not negatively. For a growing number of academics, activists, policy-makers and citizens, the concern is that social media are degrading our attention spans, a common base of facts, and the capacity for complexity and nuanced critical thinking, hindering our ability to construct the shared agendas needed to solve the epochal challenges we all face. This supposedly degraded and degrading capacity for collective action arguably represents “the climate change of culture.” Yet research on the risks posed by social media – and more specifically by their personalization systems – is still very contradictory; these risks are very hard to prove and, eventually, to mitigate. In light of the fast-changing media landscape, many studies become rapidly outdated, and this contributes to the broader crisis concerning the study of algorithms; these are indeed “black-boxed”, meaning their functioning is opaque and their interpretability may not even be clear to engineers. Moreover, there are no easy social media alternatives one can join to meet friends and share information. Such alternatives may one day spread, but until that day billions of people worldwide have to rely on opaque personalization systems that may ultimately impoverish them. These systems are an essential and increasingly valuable public instrument for mediating information and relations. And considering that they introduce a new form of power – mass behavioral prediction and modification – that is nowadays concentrated in very few tech companies, there is a clear need to radically tackle these risks and concerns now. But how?

By analyzing the challenges, governance and regulation of personalization, we argue in this paper that we as a society need to frame, discuss and ultimately grant all users sovereignty over personalization algorithms. More generally, by ‘algorithmic sovereignty’ in social media we mean the regulation of information filtering and personalization design choices according to democratic principles, to set their scope for private purposes, and to harness their power for the public good. In other words, to open the black-boxed personalization algorithms of (mainstream) social media to citizens and to independent and public institutions. In doing so, we also explore specific experiences, projects and policies that aim to increase users’ agency. Ultimately, we preliminarily highlight the basic legal, theoretical, technical and social preconditions needed to attain what we define as algorithmic sovereignty. To regain trust between users and platforms, personalization algorithms need to be seen not as a form of legitimate hedonistic subjugation, but as an opportunity for new forms of individual liberation and social awareness. And this can only occur if citizens as well as democratic institutions have the right and capacity to make self-determined choices about these legally private (but essentially public) personalization systems. As we argue throughout the paper, we believe that such an endeavor is within reach and that public institutions and civil society could and should eventually sustain its realization.

Protesting online: Stefania interviewed by the Dutch Tegenlicht

Only a few months ago, we were able to take to the streets for the Women’s March or the climate march. Now the streets are empty and activists, except for a few, stay at home. How to demonstrate in the so-called one-and-a-half-meter society?

Stefania has been interviewed in an article by the Dutch critical public documentary series Tegenlicht / Backlight about protesting online. In light of COVID-19, what it means to protest is changing – read the full article here (in Dutch).

[BigDataSur] The Challenge of Decolonizing Big Data through Citizen Data Audits [1/3]

Author: Katherine Reilly, Simon Fraser University, School of Communication

A curious thing happened in Europe after the creation of the GDPR. A whole new wave of data audit companies came into existence to service companies that use personal data. This is because, under the GDPR, private companies must audit their personal data management practices. An entire industry emerged around this requirement. If you enter “GDPR data audit” into Google, you’ll discover article after article covering topics like “the 7 habits of highly effective data managers” and “a checklist for personal data audits.”

Corporate data audits are central to the personal data protection frameworks that have emerged in the past few years. But among citizen groups, and in the community, data audits are very little discussed. The word “audit” is just not very sexy. It brings to mind green eyeshades, piles of ledgers, and a judge-y disposition. Also, audits seem like they might be a tool of datafication and domination. If data colonization “encloses the very substance of life” (Halkort), then wouldn’t data auditing play into these processes?

In these three blog posts, I suggest that this is not necessarily the case. In fact, we need to develop the field of citizen data audits precisely because they offer us an indispensable tool for the decolonization of big data. The posts look at how audits contribute to upholding our current data regimes, an early attempt to realize a citizen data audit in Peru, and emerging alternative approaches. The posts in this series will be published over the coming weeks:

  1. The Current Reality of Personal Data Audits [find below]

  2. A First Attempt at Citizen Data Audits [link]

  3. Data Stewardship through Citizen Centered Data Audits [link]


The Current Reality of Personal Data Audits

Before we can talk about citizen data audits, it is helpful to first introduce the idea of auditing in general, and then unpack the current reality of personal data audits. In this post, I’ll explain what audits are, the dominant approach to data audits in the world right now, and finally, the role that audits play in normalizing the current corporate-focused data regime.

The aim of any audit is to check whether people are carrying out practices according to established standards or criteria that ensure proper, efficient and effective management of resources.

By their nature, audits are twice removed from reality. In one sense, this is because auditors look for evidence of tasks rather than engaging directly in them. An auditor shows up after data has been collected, processed, stored or applied, and they study the processes used, as well as their impacts. They ask questions like “How were these tasks completed, and were they done properly?”

Auditors are removed from reality in a second sense, because they use standards established by other people. An auditor might ask “Were these tasks done according to corporate policy, professional standards, or the law?” Auditors might gain insights into how policies, standards or laws might be changed, but their main job is to report on compliance with standards set by others.

Because auditors are removed from the reality of data work, and because they focus on compliance, their work can come across as distant, prescribed – and therefore somewhat boring. But when you step back and look at the bigger picture, audits raise many important questions. Who do auditors report to and why? Who sets the standards by which personal data audits are carried out? What processes does a personal data audit enforce? How might audits normalize corporate use of personal data?

We can start to answer these questions by digging into the criteria that currently drive corporate audits of personal data. These come from two main sources: corporate policy and government regulation.

On the corporate side, audits are driven by two main criteria: risk management and profitability. From a corporate point of view, personal data audits are no exception. Companies want to make sure that personal data doesn’t expose them to liabilities, and that use of this resource is contributing effectively and efficiently to the corporate bottom line.

That means that when they audit their use of personal data, they will check to see whether the costs of warehousing and managing data are worth the reward in terms of efficiencies or returns. They will also check to see whether the use of personal data exposes them to risk, given existing legal requirements, social norms or professional practices. For example, poor data management may expose a company to the risk of being sued, or the risk of alienating their clientele. Companies want to ensure that their internal practices limit exposure to risks that may damage their brand, harm their reputation, incur costs, or undermine productivity.

In total, corporate data audits are driven by, and respond to, corporate policies, and those policies are organized around ensuring the viability and success of the corporation.

Of course, the success of a corporation does not always align with the well-being of the community. We see this clearly in the world of personal data. Corporate hunger for personal data resources has often come at the expense of personal or community rights.

Because of this, governments insist that companies enforce three additional regulatory data audit criteria: informed consent, personal data security, and personal data privacy.

We can see these criteria reflected clearly in the EU’s General Data Protection Regulation. Under the GDPR, companies must ask customers for permission to access their data, and when they do so, they must provide clear information about how they intend to use that data.

They must also account for the personal data they hold, how it was gathered, from whom, to what end, where it is held, and who accesses it for what business processes. The purpose of these rules is to ensure companies develop clear internal data management policies and practices, and this, in turn, is meant to ensure companies are thinking carefully about how to protect personal privacy and data security. The GDPR requires companies to audit their data management practices on the basis of these criteria.

Taking corporate policy and government regulation together, personal data audits are currently informed by 5 criteria – profitability, risk, consent, security and privacy. What does this tell us about the management of data resources in our current data regime?

In a recent Guardian piece Stephanie Hare pointed out that “the GDPR could have … [made] privacy the default and requir[ed] us to opt in if we want to have our data collected. But this would hurt the ability of governments and companies to know about us and predict and manipulate our behaviour.” Instead, in the current regime, governments accept the central audit criteria of businesses, and on top of this, they establish the minimal protections necessary to ensure a steady flow of personal data to those same corporate actors. This means that the current data regime (at least in the West) privileges the idea that data resides with the individual, and also the idea that corporate success requires access to personal data.

Audits work to enforce the collection of personal data by private companies, by ensuring that companies are efficient, effective and risk averse in the collection of personal data. They also normalize corporate collection of personal data by providing a built-in response to security threats and privacy concerns. When the model fails – when there is a security breach or privacy is disrespected – audits can be used to identify the glitch so that the system can continue its forward march.

And this means that audits can, indeed, serve as tools of datafication and domination. But I don’t think this necessarily needs to be the case. In the next post, I’ll explore what we’ve learned from experimenting with citizen data audits, before turning to the question of how they can contribute to the decolonization of big data in the final post.


About the author: Dr. Katherine Reilly is Associate Professor in the School of Communication at Simon Fraser University in Vancouver, Canada. She is the recipient of a SSHRC Partnership Grant and an International Development Research Centre grant to explore citizen data audit methodologies alongside Derechos Digitales in Chile, Fundacion Karisma in Colombia, Sula Batsu in Costa Rica, TEDIC in Paraguay, HiperDerecho in Peru, and ObservaTIC in Uruguay.