
[BigDataSur] Artificial intelligence and digital sovereignty

By Lucía Benítez Eyzaguirre

Abstract

The growing autonomy of algorithms, and of artificial intelligence in particular, forces us to rethink the risks posed by the lack of quality in data, by the fact that data are generally not disaggregated, and by the biases and hidden aspects of algorithms. Security and ethical issues are at the center of the decisions to be taken in Europe on these matters. This is quite a challenge, considering that we have not yet achieved even digital sovereignty.

AI and digital sovereignty

Algorithms organize and format our lives. Like a kind of social and cultural software, they adapt to human behavior and advance toward an autonomous existence. Yet we live as if oblivious to their capacity for control over inequality and to the surveillance of our lives, and at the margins of the imminent development of the Internet of Things and of artificial intelligence (AI): as if we could afford to ignore how they are becoming ever more independent of human decisions. Now, for example, the question has been raised for the first time of whether patent criteria will have to be modified, following an attempt to register as intellectual property inventions and designs made by an artificial intelligence. For the moment, neither the European Union (EU) nor the United Kingdom has been willing to accept an initiative of this kind without a debate on the role of AI and on the scenario of uncertainty this situation opens up.

It is in this context that a plurality of voices is beginning to be heard calling for the regulation of technologies associated with AI: a brake on a future of autonomous and insecure development. Some of the GAFAM corporations (the group comprising the five largest technology companies in the world), such as Microsoft and Google, have already called for such regulation. Indeed, these tech giants even appear to be moving toward self-regulation on ethical and social-responsibility questions, in view of the impact that failing to do so could have on their reputation. For the EU, the question involves assessing and acknowledging the risks of AI's uncontrollable evolution, above all in areas such as health and surveillance. Hence it appears that facial recognition in public spaces will be curbed in some Western countries over the coming years, in order to forestall the risks detected in China.

Combating the risks of AI means starting by ensuring the quality of data and algorithms, and by investigating the biases they produce and the responsibility for errors and criteria. AI is in many cases trained on datasets that are not disaggregated and are often already biased, which leads to distorted algorithms that poorly represent the population, and to partial developments of low quality and dubious results. Despite the ever-growing body of work carried out with massive data, there are hardly any technical studies of its human and social impact. For this reason, work such as that of professor Matthew Fuller has become a classic resource for raising awareness of the importance of transparency about how algorithms work. Fuller proposes applying systems that guarantee the truthfulness of results, improving models through a larger number of connections, and modes of operation that reveal social connections or make evident that the capacity of the very systems analyzed with algorithms is often exceeded.
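To make the point about disaggregation concrete, here is a minimal sketch in Python; the population, the groups and the numbers are all invented for illustration. A model that looks accurate in the aggregate can fail completely for a minority group, a failure that only disaggregated evaluation reveals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, simulated population: a 90% majority group and a 10%
# minority group whose relation between feature and label is inverted.
n = 10_000
group = rng.choice(["majority", "minority"], size=n, p=[0.9, 0.1])
x = rng.normal(size=n)
y = np.where(group == "majority", x > 0, x <= 0)

# A naive "model" fit to the aggregate data simply predicts x > 0.
y_pred = x > 0

print(f"aggregate accuracy: {(y_pred == y).mean():.2f}")  # ~0.90, looks fine

# The same metric, disaggregated by group, exposes the failure.
for g in ("majority", "minority"):
    mask = group == g
    print(f"{g} accuracy: {(y_pred[mask] == y[mask]).mean():.2f}")  # ~1.00 vs ~0.00
```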

Addressing the risks of AI means starting with the achievement of “algorithmic governance”. This concept entails preventing the abuse and control with which algorithms regulate our lives, or with which programming rules our activities and routines. Such governance is a guarantee of transparency, with collective oversight of results by users and companies, and accountability for the use of information. Algorithms should guarantee the transparency and quality of data (a concept known as open data), offer their own source code openly so that it can be audited by users, and be able to respond to the complaints arising from citizen oversight. But it is also essential that an algorithm be loyal and fair, that is, that it avoid the discrimination suffered by women, minorities, and any other disadvantaged group. And in the case of an online algorithm, public APIs (Application Programming Interfaces) must also be taken into account, because they condition both how data are collected and how commercial techniques are applied, concealing how information is appropriated.
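As an illustration of what such citizen auditing could look like in practice, here is a hedged Python sketch; the decision log and the group names are hypothetical, and the “four-fifths” threshold is one contestable heuristic among many. It computes each group's rate of favourable decisions and a disparate impact ratio:

```python
from collections import Counter

# Hypothetical decision log: (group, decision) pairs, as an auditor might
# obtain them from an algorithmic system under review (1 = favourable).
decisions = [
    ("women", 1), ("women", 0), ("women", 0), ("women", 0),
    ("men", 1), ("men", 1), ("men", 1), ("men", 0),
]

def positive_rates(records):
    """Share of favourable decisions per group."""
    totals, positives = Counter(), Counter()
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(decisions)
print(rates)  # {'women': 0.25, 'men': 0.75}

# Disparate impact ratio: lowest group rate over highest. A common
# (and contestable) heuristic flags anything below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 -> flag for review
```

An audit of this kind presupposes exactly what the paragraph demands: access to decisions, to source code, or at least to an API through which outcomes can be queried.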

This spirit is also reflected in the 2019 Zaragoza Declaration, which emerged from a debate among professionals and academics on adverse effects and potential risks. The declaration also sets out recommendations for the use of AI and publicizes its impacts and its evolution within society. It does so through five points covering the human and social dimensions, the transdisciplinary approach with which to address AI, responsibility, and respect for rights, on the basis of its own code of ethics.

The Declaration stresses the need for developments that serve public-interest policies and sustainability, but always on the basis of traceable and auditable systems, with a commitment to users to evaluate whether objectives are met and to isolate defects or deviations. On ethical questions, the Declaration proposes that programmers be trained not only technically but also ethically, socially, and in the humanities, since software development must take these dimensions into account as well, along with different sources of knowledge and experience.

The Zaragoza Declaration also includes a “right to explanation” of algorithmic decisions whenever these come into play with people's fundamental rights. Although the European Union's General Data Protection Regulation has advanced digital rights, we are still very far from technological sovereignty in the French style. Since 2016, France has been governed by the “Digital Republic Act”, which promotes auditable algorithms, net neutrality, open data, the protection of privacy, the loyalty of platforms in handling their consumers' information, the right to fiber and to an Internet connection, the right to be forgotten, digital inheritance, the obligation to report detected security breaches, and fines for data-protection violations.

 

Magma guide release announcement

January 29, 2020

By Vasilis Ververis, DATACTIVE

We are very pleased to announce that the magma guide has been released.

What is the magma guide?

An open-licensed, collaborative repository that provides the first publicly available research framework for people working to measure information controls and online censorship activities. In it, users can find the resources they need to perform their research more effectively and efficiently.

It is available at the following website: https://magma.lavafeld.org

The content of the guide represents industry best practices, developed in consultation with networking researchers, activists, and technologists. And it is evergreen, too: constantly updated with new content, resources, and tutorials. The host website is regularly updated and synced to a version control repository (Git) that members of the network measurements community can use to review, translate, and revise the content of the guide.

If you or someone you know is able to provide such information, please get in touch with us or read about how you can contribute directly to the guide.

All content of the magma guide (unless otherwise mentioned) is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).

Many thanks to everyone who helped make the magma guide a reality.

You may use any of the communication channels (listed on the contact page) to get in touch with us.

 

Vasilis Ververis is a research associate with DATACTIVE and a practitioner of the principles ~ undo / rebuild ~ the current centralization model of the internet. Their research deals with internet censorship and investigation of collateral damage via information controls and surveillance. Some recent affiliations: Humboldt-Universität zu Berlin, Germany; Universidade Estadual do Piaui, Brazil; University Institute of Lisbon, Portugal.

[BigDataSur] How Chilean activists used citizen-generated data to fight disinformation

by Tomás Dodds

Introduction
For over 80 days now, and with no end in sight, Chile has been in the grip of waves of social protests and cultural manifestations, with tens of thousands of demonstrators taking to the streets across the country. For many, this social outburst has its roots in a civil society rebelling against an uncaring economic and political elite that has ruled the country since its return to democracy in 1990. Mass protests were soon followed by a muddle of misinformation, both online and in the traditional press. In this blog post, I provide insights into how Chilean activists, including journalists, filmmakers, and demonstrators themselves, have started using citizen-generated data to fight media disinformation and the government’s attempts to conceal cases of human rights violations from the public.

Background
On the evening of October 18th, 2019, Chileans started to demand the end of a neoliberal economic system, perceived among citizens as the main cause of the social inequalities and political injustices that have marked the country over the last decades. However, demonstrations were met with brutal police repression and several corroborated cases of human rights violations, including sexual torture. To this day, information gathered by national and international non-governmental organizations shows that at least 26 people have died and more than 2,200 have been injured during the rallies.

Although I was raised in Chile, I am living in Amsterdam today. I could therefore only follow the news as any other Chilean abroad: online. I placed a screen in my room streaming on a loop the YouTube channels of the prime-time late-night news of major media outlets. During the day, I constantly checked social media platforms like Facebook and Twitter, and from time to time I would get news and tips over WhatsApp or Signal from friends and fellow journalists in the field. Information started flooding every available digital space: a video posted on social media in the morning would have several different interpretations by that evening, and dissimilar explanations would be offered by experts across the entire media spectrum by night.

And this was only the start. Amidst the growing body of online videos and pictures showing evidence of excessive military force against demonstrators, Chilean President Sebastián Piñera sat down for a televised interview on CNN’s Oppenheimer Presenta, where he claimed that many recordings circulating on social platforms like Facebook, Instagram, and Twitter had been either “misrepresenting events, or filmed outside of Chile.” The President effectively argued that many of these videos were clearly “fake news” disseminated by foreign governments seeking to destabilize the country, such as those of Venezuela and Cuba. Although Piñera later backed down from his claims, substantial doubts had already been planted in Chileans’ minds. How could the public be sure that the videos they were watching on their social networks were indeed real, contemporary, and locally filmed? How could someone prove that the images of soldiers shooting rubber bullets at unarmed civilians were not the result of a Castro-Chavista conspiracy orchestrated by Venezuelan President Nicolás Maduro, as some tweets and posts seemed to claim with a bewildering lack of doubt? How could these stories be corroborated when most of them were absent from the traditional media outlets’ agendas?

As a recent study suggests, unlike their parents or grandparents, the generation born in Chile after 1990 is less likely to self-censor their political opinions and shows a higher willingness to participate in public discussion. After all, they were born in democracy and do not carry the grim memories of the dictatorship. This is also the generation of activists who, using digital methods, have taken it upon themselves to mount the digital infrastructure that makes relevant information visible and, at the same time, accessible to an eager audience that cannot find in traditional media the stories of horror that reflect those told by their friends and neighbors. Thus, different digital projects have started to gather and report data collected by a network of independent journalists, non-governmental organizations, and the protestors themselves, in order to engage politically with the reality of the events occurring on the streets. Of these new digital projects, I present here two that stand out in particular and that, I argue, help to alleviate (or at least they did for me) the uncertainty of news consumption in times of social unrest.


(Image courtesy of Osvaldo Pereira) 

From singular stories to collective data
Only four days after the beginning of the protests, journalists Miguel Paz and Nicolás Ríos started ChileRegistra.info (or Chile-Records in English), a repository of audio-visual material and information regarding the ongoing protests. Chile-Registra stores and distributes videos previously shared by volunteers and social network users who have attended the rallies. According to these journalists, traditional media could not show videos of human rights violations shared on social networks because they were unable to verify them, and therefore would only broadcast images of riots and barricades, which in turn produced higher levels of mistrust between the demonstrators and the press.

As a response to this problem, the project has two main purposes. First, to create a “super database” with photos and videos of the protests and of military and police abuses. Second, to identify the creators of videos and photos already posted and shared on social networks, in order to make these users available as news sources or witnesses for both traditional media and prosecutors. The national newspapers La Tercera and Publimetro, among other national and international media outlets, have already used this platform to publish or broadcast data collected in the repository. Using this project, users were able to easily discredit Piñera’s claims that many of these videos had been recorded abroad.

The second project I would like to draw attention to is Proyecto AMA (the Audio-visual Memory Archive Project in English). AMA is a collective of journalists, photographers, and filmmakers who have been interviewing victims of human rights violations during the protests. Using the Knight Lab’s StoryMap tools, AMA’s users can also track where and when these violations have taken place, and read the personal stories behind the videos they most probably already saw online. According to their website, members of this project “feel the urgent need to generate a memory file with the images shared on social networks, and give voice and face to the stories of victims of police, military and civil violence in Chile.”

These two projects take decidedly different approaches to generating content. While ChileRegistra relies on collecting data from social media and from citizen journalists uploading audio-visual material, Proyecto AMA’s members interview and collect testimonies from victims of repression and brutality. Although the physical and technological boundaries of each media platform are still present, these projects complement each other in a cross-media effort that plays precisely to the strengths of each of the platforms used to inform the work activists do.

New sources for informed activism
These projects sit at the intersection of technology and social justice, between the ideation and application of a new digitally oriented, computer-assisted reporting. Moreover, the creation and continuous updating of these “bottom-up” data sets detailing serious human rights violations have not only been used to further the social movements; they also point to the need digital activists have to gather, organize, classify and, perhaps more importantly, corroborate information in times of social unrest.

As long as Chileans keep taking to the streets, this civil revolution presents the opportunity to observe new ways of activism, including the use of independently gathered data by non-traditional media and the collection of evidence and testimonies from victims of police and military brutality in the streets, hospitals, and prisons.

What can we, only relying on our remote gaze, learn from looking at the situation going on today in Chile? This movement has shown us how the public engagement of a fear-free generation and the development of a strong digital infrastructure are helping to shape collaborative data-based projects with deep democratic roots.

Lastly, let’s hope that these projects, among others, also shed some light on how social movements can be empowered and engaged by new forms of activism that actively create their own data infrastructure in order to challenge existing power relations that seem resistant to fading into history.

 

new article out: “Enter the WhatsApper: Reinventing digital activism at the time of chat apps” (First Monday)

Our first article of 2020 is out! Entitled “Enter the WhatsApper: Reinventing digital activism at the time of chat apps”, it reflects on the evolution of political participation and digital activism at the time of chat applications. It is part of a special issue of the open access journal First Monday dedicated to the (first) ten years of WhatsApp. The abstract is below. The article can be read at this link.

This paper investigates how the appropriation of chat apps by social actors is redesigning digital activism and political participation today. To this end, we look at the case of #Unidos Contra o Golpe (United Against the Coup), a WhatsApp “private group” which emerged in 2016 in Florianópolis, Brazil, to oppose the controversial impeachment of the then-president Dilma Rousseff. We argue that a new type of political activist is emerging within and alongside contemporary movements: the WhatsApper, an individual who uses the chat app intensely to serve her political agenda, leveraging its affordances for political participation. We explore WhatsApp as a discursive opportunity structure and investigate the emergence of a repertoire specific to chat apps. We show how recurrent interaction in the app results in an all-purpose, identity-like sense of connectedness binding social actors together. Diffuse leadership and experimental pluralism emerge as the bare organizing principles of these groups. The paper is based on a qualitative analysis of group interactions and conversations, complemented by semi-structured interviews with group members. It shows how WhatsApp is more than a messaging app for “hanging out” with like-minded people and has come to constitute a key platform for digital activism, in particular in the Global South. DOI: https://doi.org/10.5210/fm.v25i12.10414

Cite as 

Milan, S., & Barbosa, S. (2020). Enter the WhatsApper: Reinventing digital activism at the time of chat apps. First Monday, 25(1). https://doi.org/10.5210/fm.v25i12.10414

Call for papers: Palabra Clave special issue

Please note an exciting upcoming special issue of Palabra Clave, titled “Latin American perspectives on datafication and artificial intelligence”, with Stefania Milan and Emiliano Treré as guest editors.
More information on the CfP here:
Call for papers (Español): http://bit.ly/Pacla-CFP-2021-2-ES

Call for papers (English): http://bit.ly/Pacla-CFP-2021-2-EN

Call for papers (Português): http://bit.ly/Pacla-CFP-2021-2-PT

***hot off the press*** Working Paper “Big Data from the South: Towards a Research Agenda”

What would datafication look like seen… ‘upside down’? What questions would we ask? What concepts, theories and methods would we embrace or have to devise? These questions were at the core of the two-day immersive research workshop ‘Big Data from the South: Towards a Research Agenda’ (University of Amsterdam, 4-5 December 2018). The event was the third gathering of the Big Data from the South Initiative (BigDataSur), a research network and program launched in 2017 by Stefania Milan (University of Amsterdam) and Emiliano Treré (Cardiff University).

The workshop report has finally been released and is ready for download! Special thanks go to the workshop participants, to the authors of the thematic areas Anna Berti Suman (Tilburg University), Niels ten Oever, Guillén Torres, Kersti R. Wissenbach and Zhen Ye (University of Amsterdam), and to Tomás Dodds, Jeroen de Vos and Sander van Haperen for the editorial assistance.

We take the opportunity to once again thank the sponsors that made the event possible, namely the European Research Council (grant agreement No 639379-DATACTIVE; https://data-activism.net), the Amsterdam Center for Global Studies, the Amsterdam School of Cultural Analysis and the Amsterdam Center for European Studies. Our gratitude extends also to SPUI25, the University of Amsterdam and Terre Lente for hosting us.

[BigDataSur] Widening the field of Critical Data Studies: reflections on four years of DATA POWER

Guest Author: Güneş Tavmen

In June 2015, on a Sunday afternoon, I was walking around the centre of Sheffield to buy an outfit to wear while presenting at my first major academic conference. Having forgotten at home the dress I had prepared, I was desperately trying to find something that would make me look fairly presentable. The conference was the first-ever ‘Data Power’, one of the first academic conferences to provide a space for critical interventions on ‘data’s ever more ubiquitous power’. While it was initially unclear whether this would be a one-off event, its successful reception has since turned it into a biennial conference. Besides the usual nerves that every PhD researcher experiences at their first international conference, I was also quite intimidated by the idea that my co-panellist was Rob Kitchin, one of the foremost academics in smart city research. Having only recently finished the first year of my PhD, I found it daunting to talk about my work in progress next to such high-profile names. Fast forward to September 2019: this time I was strolling the streets of Bremen, having travelled there to attend the third Data Power conference as a fresh doctor who no longer gets nervous about what to wear while presenting. With these two conferences marking the beginning and the end of my PhD (unfortunately, I could not attend the second Data Power conference in 2017, as I had no travel funding for a trip to Canada), I want to briefly reflect on the shifts I observed between the first conference and the last, since these transformations might also speak to the larger terrain of an emerging field that has become known as “Critical Data Studies”.

The first Data Power conference was an academic celebrity gathering, with an exceptionally large number of established scholars from across the field giving papers. The range of presentations was wide in disciplinary approach but narrow in geographical diversity and representation. Many papers adopted a philosophical point of view, presenting ontological discussions on the datafication of, well, everything. ‘Big Data’ seemed to be the hot topic, addressed by many papers and discussed in relation to a wide range of areas, from art to finance. There was a high level of expectation and competition in the air, as the field was still in the process of establishing itself as a distinct area of enquiry. Probably because of that, I remember being struck by the inflation of neologisms offered in papers across the panels, to the point that it felt like everyone was working hard to mark out their territory in the newly established field.

In contrast to the wide range of topics discussed, there was a significant lack of diversity, with little attendance from the so-called ‘Global South’; in other words, it was a highly ‘white’ conference both in terms of speakers and of subjects discussed. Except for two presenters, all the papers and keynotes came from organisations in Europe, Australia and North America. I too was at the time representing an institution in London (Birkbeck, University of London) and my paper focused on the London case. However, I remember feeling like the odd one out as a participant originally from Turkey. At the end of the conference, I tweeted about this observation and, to my surprise, it was not very well received. Several attendees, all of them white and employed in European institutions, told me that it was not true that the field of critical data studies was not diverse enough. Well, at the very least, the alleged diversity was not observable at this particular conference.

Fast-forwarding to 2019, the third Data Power conference showed a significant acknowledgement of the need to ‘decolonise’ the field. From the selection of keynotes to the range of topics, there was a substantial effort to widen the field in terms of geographical and socio-cultural inclusion. This time, however, the diversity of topics was relatively limited. Activism, algorithmic justice and ethics seemed to be raised most frequently in the panels (together with considerable attention to the algorithmic practices of public bodies), while many other topics seemed to have disappeared, such as the politics of the quantified self, the political economy of data practices, citizen science and data-driven urbanism, to name a few. Besides, the popularity of the label “Big Data” has declined, replaced by abundant attention to artificial intelligence and machine learning.

The field of Critical Data Studies has undoubtedly gained huge traction within the space of four years. While Data Power is, at least to my knowledge, the only comprehensive and periodic conference solely dedicated to Critical Data Studies, there are now many ad hoc specialised events that focus on particular aspects of data studies (e.g. feminist approaches, fake news and disinformation, data visualisation, etc.). One might argue that this explains the narrowing range of topics, but I do not think it is enough to explain the situation at the third Data Power conference. The heavy presence of papers auditing a wealth of public data practices, and the lesser discussion of what makes data practices so prominent in the first place, made me feel like we have given up on asking ontological questions. An overwhelming focus on how to make these systems more ethical and just, without contesting the domination of these systems by raising philosophical questions, may indeed result in auxiliary proposals that help sustain those very systems. To be clear, by no means do I deny the importance of discussions on ethics and justice, but I believe there is also a strong need for more genealogical excavations into how and why these systems are in place, as well as questions regarding ‘at what expense’ they perpetuate themselves (e.g. environmental effects, precarious labour practices, the political economy perspective and so on). Locating data practices within a broader context would thus also inform discussions on ethics and justice.

Let me finish by underlining that these are my humble observations and, of course, they are partial, since I did not have the chance to listen to all the presentations. Whatever awaits us at the next Data Power conference in 2021, I hope that there will be a diverse group of attendees and a solid critical approach, which might help tackle the atrocities our world is facing today. I also hope that it will be held in a country where I will not need to go through the horrific process of applying for a visa, which is another, often overlooked dimension to consider when we discuss ‘data power’.

About Güneş Tavmen

Güneş Tavmen is an ESRC postdoctoral fellow at the Department of Digital Humanities, King’s College London. She earned her PhD from Birkbeck, University of London; her research focuses on (open) data-driven initiatives, practices and discourses in the context of smart city making in London.

[BigDataSur] On the Coloniality of Data Relations: Revisiting Data Colonialism as Research Paradigm (2/2)

Author Monika Halkort

In this twofold blogpost (2/2), guest author Monika Halkort complicates the notion of ‘data colonialism’ as employed in The Costs of Connection by Nick Couldry & Ulises Mejias (2019a), drawing on a case study of early datafication practices in historical Palestine. This blogpost is the second of two: the first contextualizes ‘the colonial’ in data colonialism, while the second draws on the case study to argue for the need to reimagine data agency in times of data colonialism. Read the first post here.

In the previous blogpost I argued for a historically situated study of data colonialism that highlights the intersectionality of its effects. In my work on data relations in the historical experience of Palestinians, I draw on modern property, census practices and map-making rationalities to achieve this. These examples demonstrate the profound ontological violence involved in projecting the bi-polar structure of European Cartesian thinking upon non-European people and places.

The Ottoman government had never conducted a comprehensive land survey before the arrival of colonial explorers in the mid-19th century. The census, the cadaster and maps can in this sense be understood as the first wave of datafication in the history of Palestinians. Taken together, they provided the key political technologies for dispossessing commonly held grazing grounds and agricultural resources, paving the way for the subsequent transfer of land to Zionist settlers long before the foundation of the state of Israel (Halkort, 2016; 2019). The combined impact of calculating, measuring and reclassifying social and spatial identities and relations systematically disaggregated shared ownership and use rights into exclusivist title deeds and data units. This facilitated the radical reterritorialization of spaces and bodies on the basis of abstract universals – race, class, colonial citizenship and religion – and rendered the lived and embodied topology of social contracts and obligations unintelligible and hence obsolete. As McRae (1993, p. 345) writes, the new techniques of surveying land reconstructed rights as something that could be clearly and objectively measured and determined, in a manner which precluded competing, loosely held customary claims.

The double movement of reterritorialization and enclosure conscripted the population into an ongoing process of self-measuring activity, in which the political recognition of aspiring national subjects became ever more dependent on their social separability as property owners, on terms and conditions that were themselves racially marked. It is in this sense, I conclude, that the cumulative impact of property, the census and the map enabled colonial data infrastructures to function as powerful ontological machines that fundamentally transformed the conditions for articulating and affirming the historical existence and claims of the Palestinian people: not through the use of force, but rather through the “self-organizing” principles of free market competition, which brought race, class, religion, property and gender to fold into each other such that they provided self-generating axes along which shared life unfolds.

Against this backdrop, it becomes possible to see that the violence of dispossession of both modern-colonial and contemporary data regimes is not reducible to the unfettered capitalization of life without limit, nor to the totalizing structure of social control and ubiquitous surveillance that data extraction enrolls. It lies, rather, in the attempt to enclose the very ‘substance’ of life as a central object of political strategy and commodification (Foucault), while successfully concealing how this “substance” is configured alongside binary distinctions – space and society, data and subjects, nature and politics – as the central organizing principle of social and political subjectivities in liberal-capitalist democracies. In other words, what is dispossessed in data relations are not pre-existing social entities and relations, but rather the very capacity of enacting and sustaining the world-building relations that constitute collective life. The violence of data extraction, in this sense, never works on self-enclosed, autonomous bodies or acquired resources, but rather through the flexible (re)assemblage of human and non-human entities into transversal arrangements that variously disposition people and things in relation to things of value, helping to stabilize Cartesian dualisms by foreclosing other ways of being and becoming in the world.

This capacity to affect life not only as it is already given, but in its very becoming, calls for an uncompromising revision of the techno-political heuristic that currently defines data policy and practice. Such a revision needs to start with a radical re-conception of data agency as the lived and embodied potentiality of materializing relations that implicate data in dynamics of struggle across platforms, operational divisions and scalar domains. Such an idea of data agency is not necessarily empowering, much less confined to human ambitions and concerns. What is gained by rethinking data agency as such a transversal, multi-species arrangement, however, is that it successfully disrupts the seamless naturalization of data into an ownerless, self-enclosed and ontologically distinct resource or mere by-product of social activity and relations. It makes room for acknowledging data as inextricably bound up with the lived and embodied infrastructure of collective life-making and, hence, as inseparable from the ethico-political substance it configures and performs.

 

About Monika Halkort

Monika Halkort is Assistant Professor of Digital Media and Social Communication at the Lebanese American University in Beirut. Her work traverses the fields of feminist STS, political ecology and post-humanist thinking to unpack the intersectional dynamics of racialization, de-humanisation and enclosure in contemporary data regimes. Her most recent project looks at the new patterns of bio-legitimacy that emerge from the ever denser convergence of social, biological and machine intelligence in environmental sensing and Earth Observation. Taking the Mediterranean Sea as her prime example, she unpacks how conflicting models of risk and premature death in data recalibrate ‘zones of being’ and ‘non-being’ (Fanon), opening up new platforms of oppression, alienation and ontological displacement that have been characteristic of modern coloniality.

References

Braidotti, R. (2016). Posthuman critical theory. In D. Banerji & M. R. Paranjape (Eds.), Critical Posthumanism and Planetary Futures (pp. 13-32). Springer India. doi: 10.1007/978-81-322-3637-5
Couldry, N., & Mejias, U. (2019a). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Palo Alto: Stanford University Press.
Couldry, N., & Mejias, U. (2019b). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television and New Media, 1-14.
Couldry, N., & Mejias, U. (2019c). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Retrieved March 25, 2019, from Colonised by Data: https://colonizedbydata.com/
Halkort, M. (forthcoming). Dying in the technosphere: An intersectional analysis of migration crisis maps. In D. Specht (Ed.), Mapping Crisis. London Consortium for Human Rights, University of London, London, UK.
Halkort, M. (2019). Decolonizing data relations: On the moral economy of data sharing in a Palestinian refugee camp. Canadian Journal of Communication, 317-329.
Halkort, M. (2016). Liquefying social capital: The bio-politics of digital circulation in a Palestinian refugee camp. Tecnoscienza, 7(2).
Maldonado-Torres, N. (2007). On the coloniality of being: Contributions to the development of a concept. Cultural Studies, 21(2-3), 240-270.
Mbembe, A. (2017). Critique of Black Reason. Durham, NC: Duke University Press.
McRae, A. (1993). To know one’s own: Estate surveying and the representation of the land in early modern England. The Huntington Library Quarterly, 56(4), 333-357. doi: 10.2307/3817581
Mignolo, W. (2009). Coloniality: The darker side of modernity. In S. Breitwieser (Ed.), Modernologies: Contemporary Artists Researching Modernity and Modernism (pp. 39-49). Barcelona: MACBA.
Quijano, A. (2007). Coloniality and modernity/rationality. Cultural Studies, 21(2-3), 168-178.

[BigDataSur] On the Coloniality of Data Relations: Revisiting Data Colonialism as Research Paradigm (1/2)

Author Monika Halkort

In this twofold blogpost (1/2), guest author Monika Halkort complicates the notion of ‘data colonialism’ as employed in The Costs of Connection by Nick Couldry & Ulises Mejias (2019a), drawing on a case study of early datafication practices in historical Palestine. This blogpost is the first of two: the first contextualizes ‘the colonial’ in data colonialism, while the second draws on the case study to argue for the need to reimagine data agency in times of data colonialism.

One of the underlying themes running through the workshop Big Data from the South: Towards a Research Agenda was the question of how to characterize our relations with data, focusing specifically on the geo- and bio-political context of the Souths. Our working group took up the critical task of reviewing the concept of ‘data colonialism’ to discuss whether it provides a productive framework for understanding the forms of dispossession, enclosure and violence inherent in contemporary data regimes. Nick Couldry and Ulises Mejias (2019a), both participants in the workshop, make precisely this point in their new book “The Costs of Connection”, where they argue that the unfettered capture of data from social activities and relations confronts us with a new social order – a new universal regime of appropriation – akin to the extractive logic of historical colonization. (Find Ulises Mejias’ recent blogpost on decolonizing data here.)

Data colonialism, in their view, combines the predatory extractive practices of the past with the abstract quantification methods of contemporary computing. This ensures a seemingly natural conversion of daily life into streams of data that can be appropriated for value, based on the premise of generating new insights from data that would otherwise be considered noise. Thus, while data colonialism may not forcefully annex or dispossess land, people or territories in the way historical colonialism did, it nonetheless relies on the same self-legitimizing, utilitarian logic that objectified nature and the environment as raw materials that are ‘just out there’, only waiting to be extracted, monetized or mined (2019b, p. 4). It is this shift from the appropriation of natural to social resources that, for Couldry and Mejias, characterizes the colonial moment of contemporary data capitalism (2019b, p. 10). It ushers in a new regime of dispossession and enclosure that leaves no part of human life, no layer of experience, that is not extractable for economic value (2019b, p. 3; 2019c), producing the social for capital under the pretext of advancing scientific knowledge, rationalizing management or personalizing marketing and services (2019c).

Couldry and Mejias’ emphasis on the expansion of dispossession from natural to social resources begs closer examination, for it implies an inherent split between the social and the natural as ontologically distinct categories of social existence that may end up reifying the very structures of coloniality they seek to confront. To put it differently, there is a need to better situate data within ontologies of the social if we are to fully understand who or what is dispossessed in data, and in the name of whom or what. Such a self-reflexive task only becomes meaningful if conducted in historically and geographically specific contexts, to avoid losing sight of the differential effects that distinguish the beneficiaries of historical forms of colonialism from those who continue to struggle against its impact and consequences. Such a situated analysis also helps to emphasize the intersectionality of the violence of dispossession and displacement in data relations, and to draw a clear distinction between settler colonialism and other modes of colonisation, both of which are the main focus of my own research (forthcoming; 2019; 2016).

Colonialism, after all, is not a fixed, universal structure, much less a coherent vector of power or rule. Colonialism operates through multiple forms of domination – military, economic, religious, cultural and onto-epistemic – each with its own legitimation narratives and tactics, rhetorical maneuvers and trajectories. What unites them into a shared set of characteristics, in my view, are the ways they contributed to the projection of modern, European knowledge onto the rest of the planet, such that other ways of knowing and being in the world were delegitimated and disavowed.

Modern European knowledge, as decolonial theory reminds us, was firmly grounded in Cartesian dualisms that divided the world into two separate, independent realms – body and mind, thinking and non-thinking substance – from which a whole range of other binaries, i.e. nature and society, subjects and objects of knowledge, human and non-human, could be inferred (Braidotti, 2016; Maldonado-Torres, 2007; Mbembe, 2017; Quijano, 2007). Taken together, these dualisms provided the normative horizon for managing social and spatial relations throughout the modern colonial period and laid out the central parameters around which ethico-political subjectivities could be forged.

In part two, early Palestinian datafication practices demonstrate the violence of Cartesian thinking as a case for reimagining data agency. Read it here.

 

About Monika Halkort

Monika Halkort is Assistant Professor of Digital Media and Social Communication at the Lebanese American University in Beirut. Her work traverses the fields of feminist STS, political ecology and post-humanist thinking to unpack the intersectional dynamics of racialization, de-humanisation and enclosure in contemporary data regimes. Her most recent project looks at the new patterns of bio-legitimacy that emerge from the ever denser convergence of social, biological and machine intelligence in environmental sensing and Earth Observation. Taking the Mediterranean Sea as her prime example, she unpacks how conflicting models of risk and premature death in data recalibrate ‘zones of being’ and ‘non-being’ (Fanon), opening up new platforms of oppression, alienation and ontological displacement that have been characteristic of modern coloniality.

 

References

Braidotti, R. (2016). Posthuman critical theory. In D. Banerji & M. R. Paranjape (Eds.), Critical Posthumanism and Planetary Futures (pp. 13-32). Springer India. doi: 10.1007/978-81-322-3637-5
Couldry, N., & Mejias, U. (2019a). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Palo Alto: Stanford University Press.
Couldry, N., & Mejias, U. (2019b). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television and New Media, 1-14.
Couldry, N., & Mejias, U. (2019c). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Retrieved March 25, 2019, from Colonised by Data: https://colonizedbydata.com/
Halkort, M. (forthcoming). Dying in the technosphere: An intersectional analysis of migration crisis maps. In D. Specht (Ed.), Mapping Crisis. London Consortium for Human Rights, University of London, London, UK.
Halkort, M. (2019). Decolonizing data relations: On the moral economy of data sharing in a Palestinian refugee camp. Canadian Journal of Communication, 317-329.
Halkort, M. (2016). Liquefying social capital: The bio-politics of digital circulation in a Palestinian refugee camp. Tecnoscienza, 7(2).
Maldonado-Torres, N. (2007). On the coloniality of being: Contributions to the development of a concept. Cultural Studies, 21(2-3), 240-270.
Mbembe, A. (2017). Critique of Black Reason. Durham, NC: Duke University Press.
McRae, A. (1993). To know one’s own: Estate surveying and the representation of the land in early modern England. The Huntington Library Quarterly, 56(4), 333-357. doi: 10.2307/3817581
Mignolo, W. (2009). Coloniality: The darker side of modernity. In S. Breitwieser (Ed.), Modernologies: Contemporary Artists Researching Modernity and Modernism (pp. 39-49). Barcelona: MACBA.
Quijano, A. (2007). Coloniality and modernity/rationality. Cultural Studies, 21(2-3), 168-178.

[BigDataSur] Some thoughts on decolonizing data

By Ulises Mejias

Would it be too far-fetched to call the variety of today’s data collection practices a new form of colonialism, given the violence and historical specificity of European colonialism? In our work, Nick Couldry and I try to make a careful argument that yes, we should call it colonialism. We focus not so much on the form or content of European colonialism, but on the historical function, which was to dispossess. Instead of natural resources or human labor, what this new form of colonialism expropriates is human life, through the medium of digital data. We therefore define “data colonialism” as an emerging order for the appropriation of human life so that data can be continuously extracted from it for profit. This form of extractivism comes with its own forms of rationalization and violence, although the modes, intensities, and scales are different from those we saw during European colonialism.

It should then be possible to decolonize data in the same way we have decolonized history, knowledge, and culture. I can think of at least three initial approaches.

First, by questioning the universalism behind this new form of appropriation. During European colonialism, the colonized were presented with a justification for dispossession that revolved around grand narratives such as Progress, Development, and the Supremacy of European culture and history—indeed, about the supremacy of the White race. These narratives were universalizing in that they sought to obliterate any challenges to them (European values were the *only* standards to be recognized). Today, the narratives that justify data extraction are equally universalizing and totalizing. We are told the dispossession of human life through data represents progress, that it is done for the benefit of humanity; that it brings human connection, new knowledge, distributed wealth, etc. Furthermore, we are told that even though it is *our* data, we don’t have the knowledge and means to make use of this resource, so we had better get out of the way and let the corporations do it for us, as they did during colonialism. The first step to decolonize data is to realize that this is the same ruse the powerful have played on us for 500 years. There is nothing natural, normal, or universally valid about the way human life is becoming a mere factor in capitalist production, and we must reject the new narratives deployed to justify this form of dispossession.

The second way in which data can be decolonized is by reclaiming the very resources that have been stolen from us. In other words, we need to rescue colonized space and time: the space that has become populated by devices that monitor our every move; the time (usually in front of a screen) that we devote to the production of data that is used to generate profit for corporations. Our spaces and times are not empty, passively available for extraction. We need to re-invest them with value, as a way to protect them from appropriation by corporations. Yes, at a basic level this might mean simply opting out of certain platforms. But I think it goes deeper than that. To decolonize our space and our time means to re-conceptualize our role within capitalism, which extends beyond data relations. It extends to the environment, to the workplace… I am inspired to see that the environmental movement, the labor movement, the social justice movement, the peace movement, and the critical science & technology movement are converging, and are being reconfigured in the process. Yes, huge challenges remain—especially in the face of populist movements like the ones we are seeing around Trump, Bolsonaro and Modi—but at least we are developing the awareness that individual and disjointed action (like, say, quitting Facebook) is meaningless if it doesn’t happen in connection with other struggles.

Speaking of which, we have to remain vigilant and sceptical of “solutions” that legitimize the status quo. Recently, the New York Times published a glossy proposal for “saving” the internet by making sure we get paid for the data we generate. Is this a viable solution? Imagine one day you discover hidden cameras have been installed to track your every move, invading your privacy in order to generate profit for a company. Would you be satisfied if, instead of removing the cameras and addressing the injustice, the company promised to pay you to continue to record your life? If you are facing economic hardship, you might accept, but that still wouldn’t make it right. The only thing that would be accomplished would be the continuation—the normalization, in fact—of a massive system of dispossession. To redirect a small portion of the accumulated wealth generated through data extraction to the people who actually generate it while leaving the rest of the system intact is not a return to dignity but the equivalent of putting a seal of approval on a system that has inequality at its core.

The last suggestion for decolonizing data is to learn from other decolonization struggles of the past and the present. It might seem like capitalism and data colonialism are all-encompassing regimes which we are incapable of resisting. But people have always found ways of resisting—whether through physical action or, when that is not possible, through intellectual work. The colonized employ their culture, their history, and even the technologies and languages of the colonizer to resist, to reject. I’m not saying this is as simple as declaring that we are all now as oppressed as native peoples in this new system. If anything, the legacy of colonial oppression continues to exact a heavier cost on vulnerable populations, which continue to be disproportionately discriminated against and abused under the new data colonialism. But I am saying that even privileged subjects can learn some lessons from people who have been resisting colonialism for centuries. More importantly, we need to develop new forms of solidarity that incorporate the fight against the appropriation of human life through data as part of the struggle for a better world.

 

About Ulises Mejias

Ulises A. Mejias is an associate professor in the Communication Studies department and the director of the Institute for Global Engagement at the State University of New York at Oswego. His research interests include critical internet studies, philosophy and sociology of technology, and political economy of digital media. His most recent book, co-authored with Nick Couldry, is The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism (2019, Stanford University Press). He is also the author of Off the Network: Disrupting the Online World (2013, University of Minnesota Press), as well as various journal articles. For more info, see ulisesmejias.com.