
[BigDataSur] Artificial intelligence and digital sovereignty

By Lucía Benítez Eyzaguirre


Abstract

The growing autonomy of algorithms, and especially of artificial intelligence, forces us to rethink the risks posed by poor data quality, by the fact that data are generally not disaggregated, and by the biases and hidden aspects of algorithms. Security and ethical questions are at the centre of the decisions Europe must take on these issues. It is a considerable challenge, given that we have not yet achieved even digital sovereignty.

AI and digital sovereignty

Algorithms organize and format our lives. Like a kind of social and cultural software, they adapt to human behaviour and advance toward an autonomous existence. Yet we live as if oblivious to their capacity for control over inequality and for surveillance of our lives, and at the margins of the imminent development of the Internet of Things and of artificial intelligence (AI): as if we could afford to ignore how they are becoming ever more independent of human decisions. Now, for example, the question has been raised for the first time of whether patent criteria will have to be modified, following attempts to register inventions and designs produced by an artificial intelligence as intellectual property. For the moment, neither the European Union (EU) nor the United Kingdom has been willing to accept such an initiative without a debate on the role of AI and on the scenario of uncertainty this situation opens up.

It is in this context that a plurality of voices has begun calling for the regulation of technologies associated with AI: a brake on a future of autonomous and insecure development. Some of the GAFAM corporations (the group comprising the five largest technology companies in the world), such as Microsoft and Google, have already called for such regulation. Indeed, these tech giants even appear to be moving toward self-regulation on questions of ethics and social responsibility, given the reputational damage that failing to do so can cause. For the EU, the question involves assessing and recognizing the risks of AI's uncontrollable evolution, above all in areas such as health and surveillance. Hence facial recognition in public places looks set to be curbed in some Western countries in the coming years, to prevent the risks observed in China.

To combat the risks of AI, we must begin by ensuring the quality of data and algorithms, and by investigating the biases they produce and the responsibility for errors and criteria. AI is in many cases trained on datasets that are not disaggregated and are often already biased, which leads to distorted algorithms that poorly represent the population, and to partial, low-quality developments with dubious results. Despite the ever-growing volume of work carried out with big data, there are hardly any technical studies of its human and social impact. For this reason, work such as Professor Matthew Fuller's remains a classic resource for appreciating the importance of transparency about how algorithms work. Fuller proposes systems that guarantee the truthfulness of results, improve the model through a greater number of connections, make social connections visible, and expose how algorithmic analysis often exceeds the capacity of the very systems being analysed.
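The problem of non-disaggregated data can be made concrete with a minimal sketch (in Python, with invented groups, labels, and predictions): an aggregate accuracy figure can look acceptable while a disaggregated, per-group view reveals that the model fails systematically for one group.

```python
# Hypothetical example: aggregate vs. disaggregated evaluation.
# All records are invented for illustration.
from collections import defaultdict

records = [
    # (group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

def accuracy(rows):
    """Fraction of rows where prediction matches the true label."""
    return sum(t == p for _, t, p in rows) / len(rows)

# The aggregate figure hides the disparity.
print(f"aggregate accuracy: {accuracy(records):.2f}")

# Disaggregating by group reveals it.
by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in sorted(by_group.items()):
    print(f"{group} accuracy: {accuracy(rows):.2f}")
```

Here the aggregate accuracy is 0.62, which might pass a naive quality check, while the disaggregated view shows the model is perfect for one group (1.00) and nearly useless for the other (0.25).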

If we want to address the risks of AI, we must begin by achieving "algorithmic governability". This concept entails preventing the abuse and control with which algorithms regulate our lives, and with which programming governs our routines. Such governance guarantees transparency, with users and companies collectively supervising results, and accountability for the use of information. Algorithms should guarantee the transparency and quality of data (a concept known as open data), offer their own source code openly, be auditable by their users, and be able to respond to complaints arising from citizen oversight. It is also essential that an algorithm be loyal and fair, that is, that it avoid the discrimination suffered by women, minorities, and other disadvantaged groups. And in the case of an online algorithm, public APIs (Application Programming Interfaces) must also be taken into account, because they condition both how data are collected and how commercial techniques are applied, concealing how information is appropriated.
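The point about public APIs can be illustrated with a hypothetical sketch (the response fields and audit questions are invented, not taken from any real platform): what an API chooses to expose determines which audit questions an external observer can answer at all.

```python
# Hypothetical API response: field names are invented for illustration.
# A real platform response would differ, but the structural point holds:
# fields absent from the response are opaque to external audit.
api_response = {
    "post_id": "123",
    "like_count": 512,
    "share_count": 48,
    # No fields describing how the post was ranked or targeted,
    # nor which profile data fed the recommendation.
}

# Each audit question maps to the fields needed to answer it.
AUDIT_QUESTIONS = {
    "engagement volume": {"like_count", "share_count"},
    "ranking criteria": {"ranking_signals"},
    "targeting basis": {"targeting_attributes"},
}

for question, required_fields in AUDIT_QUESTIONS.items():
    answerable = required_fields <= api_response.keys()
    status = "answerable" if answerable else "opaque to external audit"
    print(f"{question}: {status}")
```

Only the engagement question is answerable; the questions about ranking and targeting, which are precisely the ones citizen oversight would want to ask, cannot be answered from the API alone.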

This spirit is also reflected in the Zaragoza Declaration of 2019, which emerged from a debate among professionals and academics on the adverse effects and potential risks of AI. The declaration also sets out recommendations for the use of AI and publicizes its impacts and its evolution in society. It does so through five points addressing the human and social dimensions, the transdisciplinary approach needed to tackle AI, responsibility, and respect for rights, on the basis of its own code of ethics.

The Declaration emphasizes the need for developments that serve public-interest policies and sustainability, but always on the basis of traceable and auditable systems, with a commitment to users to evaluate whether objectives are met and to identify defects and deviations. On ethical questions, the Declaration proposes that programmers be trained not only technically but also ethically, socially, and humanistically, since software development must take these dimensions into account as well, along with different sources of knowledge and experience.

The Zaragoza Declaration also includes a "right to explanation" of algorithmic decisions whenever they affect people's fundamental rights. Although the European Union's General Data Protection Regulation has advanced digital rights, we are still far from technological sovereignty in the French style. Since 2016, France has been governed by the "Digital Republic Law", which promotes auditable algorithms, net neutrality, open data, privacy protection and platform loyalty toward consumers' information, the right to fibre and to an Internet connection, the right to be forgotten, digital inheritance, the obligation to report detected security breaches, and fines in matters of data protection.

 

Magma guide release announcement

January 29, 2020

By Vasilis Ververis, DATACTIVE

We are very pleased to announce that the magma guide has been released.

What is the magma guide?

An open-licensed, collaborative repository that provides the first publicly available research framework for people working to measure information controls and online censorship activities. In it, users can find the resources they need to perform their research more effectively and efficiently.

It is available at the following website: https://magma.lavafeld.org

The content of the guide represents industry best practices, developed in consultation with networking researchers, activists, and technologists. And it's evergreen, too: constantly updated with new content, resources, and tutorials. The host website is regularly updated and synced to a version-control repository (Git) that members of the network measurements community can use to review, translate, and revise the guide's content.

If you or someone you know is able to contribute such content, please get in touch with us or read about how you can contribute directly to the guide.

All content of the magma guide (unless otherwise mentioned) is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).

Many thanks to everyone who helped make the magma guide a reality.

You may use any of the communication channels (listed on the contact page) to get in touch with us.

 

Vasilis Ververis is a research associate with DATACTIVE and a practitioner of the principles ~ undo / rebuild ~ the current centralization model of the internet. Their research deals with internet censorship and investigation of collateral damage via information controls and surveillance. Some recent affiliations: Humboldt-Universität zu Berlin, Germany; Universidade Estadual do Piaui, Brazil; University Institute of Lisbon, Portugal.

[BigDataSur] How Chilean activists used citizen-generated data to fight disinformation

by Tomás Dodds

Introduction
For over 80 days now, and with no end in sight, Chile has been in the grip of waves of social protests and cultural manifestations with tens of thousands of demonstrators taking to the streets across the country. For many, the upsurge of this social outburst has its roots in a civil society rebelling against an uncaring economic and political elite that has ruled the country since its return to democracy in 1990. Mass protests were soon followed by a muddle of misinformation, both online and in the traditional press. In this blog post, I provide insights into how Chilean activists, including journalists, filmmakers, and demonstrators themselves, have started using citizen-generated data to fight media disinformation and the government’s attempts to conceal cases of human rights violations from the public.

Background
On the evening of October 18th, 2019, Chileans began to demand an end to a neoliberal economic system perceived among citizens as the main cause of the social inequalities and political injustices of the last decades. However, demonstrations were met with brutal police repression and several corroborated cases of human rights violations, including sexual torture. To this day, information gathered by national and international non-governmental organizations shows that at least 26 people have died and more than 2,200 have been injured during the rallies.

Although I was raised in Chile, today I live in Amsterdam, so I could only follow the news as any other Chilean abroad: online. I placed a screen in my room streaming, on a loop, the YouTube channels of the prime-time late-night news of major media outlets. During the day, I constantly checked social media platforms like Facebook and Twitter, and from time to time I would get news and tips over WhatsApp or Signal from friends and fellow journalists in the field. Information started flooding every available digital space: a video posted on social media in the morning would have acquired several different interpretations by the evening, and dissimilar explanations would be offered by experts across the entire media spectrum by night.

And this was only the start. Amidst the growing body of online videos and pictures showing evidence of excessive military force against demonstrators, Chilean President Sebastián Piñera sat for a televised interview on CNN's Oppenheimer Presenta, where he claimed that many recordings circulating on social platforms like Facebook, Instagram, and Twitter had been either "misrepresenting events, or filmed outside of Chile." The President effectively argued that many of these videos were "fake news" disseminated by foreign governments seeking to destabilize the country, such as those of Venezuela and Cuba. Although Piñera later backed down from his claims, substantial doubts had already been planted in Chileans' minds. How could the public be sure that the videos they were watching on their social networks were indeed real, contemporary, and locally filmed? How could someone prove that the images of soldiers shooting rubber bullets at unarmed civilians were not the result of a Castro-Chavista conspiracy, orchestrated by Venezuelan President Nicolás Maduro, as some tweets and posts claimed with a bewildering lack of doubt? And how could these stories be corroborated when most of them were absent from traditional media outlets' agendas?

As a recent study suggests, unlike their parents or grandparents, the generation born in Chile after 1990 is less likely to self-censor its political opinions and shows a higher willingness to participate in public discussion. After all, they were born in democracy and do not carry the grim memories of the dictatorship. This is also the generation of activists who, using digital methods, have taken it upon themselves to build the digital infrastructure that makes relevant information visible and accessible to an eager audience that cannot find in traditional media the stories of horror that echo those told by their friends and neighbors. Thus, different digital projects have started to gather and report data collected by a network of independent journalists, non-governmental organizations, and the protestors themselves, in order to engage politically with the reality of the events occurring on the streets. Of these new digital projects, I present here two that stand out in particular, and which I argue help to alleviate (or at least they did for me) the uncertainty of news consumption in times of social unrest.


(Image courtesy of Osvaldo Pereira) 

From singular stories to collective data
Only four days after the beginning of the protests, journalists Miguel Paz and Nicolás Ríos started ChileRegistra.info (or Chile-Records in English), a repository of audio-visual material and information about the ongoing protests. ChileRegistra stores and distributes videos previously shared by volunteers and social network users who have attended the rallies. According to these journalists, traditional media could not show videos of human rights violations shared on social networks because they were unable to verify them, and therefore would only broadcast images of riots and barricades, which in turn produced higher levels of mistrust between the demonstrators and the press.

As a response to this problem, the project has two main purposes. First, to create a "super database" of photos and videos of the protests and of military and police abuses. Second, to identify the creators of videos and photos already posted and shared on social networks, in order to make these users available as news sources or witnesses for both traditional media and prosecutors. The national newspapers La Tercera and Publimetro, among other national and international media outlets, have already used this platform to publish or broadcast data collected in the repository. Using this project, users could easily discredit Piñera's claims that many of these videos were being recorded abroad.

The second project I would like to draw attention to is Proyecto AMA (in English, the Audio-visual Memory Archive Project). AMA is a collective of journalists, photographers, and filmmakers who have been interviewing victims of human rights violations during the protests. Using the Knight Lab's StoryMap tool, AMA's users can also track where and when these violations took place, and read the personal stories behind videos they most probably saw online first. According to their website, members of this project "feel the urgent need to generate a memory file with the images shared on social networks, and give voice and face to the stories of victims of police, military and civil violence in Chile."

The two projects take clearly different approaches to generating content. While ChileRegistra relies on collecting data from social media and from citizen journalists uploading audio-visual material, Proyecto AMA's members interview victims of repression and brutality and collect their testimonies. Although the physical and technological boundaries of each media platform are still present, the projects complement each other in a cross-media effort that plays precisely to the strengths of each platform used to inform the activists' work.

New sources for informed-activism
These projects sit at the intersection of technology and social justice, between the ideation and application of a new digitally oriented, computer-assisted reporting. Moreover, the creation and continuous updating of these "bottom-up" datasets detailing serious human rights violations have not only furthered the social movements; they also point to the need digital activists have to gather, organize, classify and, perhaps most importantly, corroborate information in times of social unrest.

As long as Chileans keep taking to the streets, this civil revolution presents the opportunity to observe new ways of activism, including the use of independently-gathered data by non-traditional media and the collection of evidence and testimonies from victims of police and military brutality in the streets, hospitals, and prisons.

What can we, only relying on our remote gaze, learn from looking at the situation going on today in Chile? This movement has shown us how the public engagement of a fear-free generation and the development of a strong digital infrastructure are helping to shape collaborative data-based projects with deep democratic roots.

Lastly, let's hope that these projects, among others, also shed some light on how social movements can be empowered and engaged by new forms of activism that actively create their own data infrastructures in order to challenge existing power relations that seem resistant to fading into history.

 

[blog] Why Psychologists need New Media Theory

by Salvatore Romano

 

I’m a graduate student at the University of Padova, Italy. I’m studying Social Psychology, and I spent four months doing an Erasmus Internship with the DATACTIVE team in Amsterdam.

 

It's not so common to find a psychology student in a Media Studies department; some of my Italian colleagues asked me the reason for my choice. So I would like to give four good reasons for a psychology student to get interested in New Media Theory and the Digital Humanities. In doing so, I will cite some articles as a starting point for colleagues who would like to study similar issues.

I participated in the Digital Methods Summer School, which was an excellent way to get a general overview of the different topics and methodologies in use in the department. In just two weeks, we discussed many things: from a sociological point of view on the Syrian war to an anthropological reading of alt-right memes, by way of semantic analysis and data-scraping tools. In the following months, I had the chance to deepen the critical approach and the activist's point of view by collaborating with the Tracking Exposed project. The main question that drove my engagement for the whole period was: what reflections should we make before using the so-called "big data" made available by digital media?

The first important point to note is that research through media should always also be research about media. It is possible to use these data to investigate the human mind, and not just to make claims about the medium itself; however, doing so still requires specific knowledge about the medium. New Media Theory is interesting not only because it tells you what new media are, but because it is crucial for understanding how to use new media data to answer questions coming from various fields of study. That is why we, as psychologists, can also benefit from the discussion.

The second compelling reason is that you need specific, in-depth knowledge to deal with the technical problems related to digital media and their data. I experienced some of the difficulties you can face while researching social media data: most of the time you need to build your own research tools, because no one has asked your exact question before you, or at least you need to be able to adapt someone else's tool to your needs. And this is just the beginning; to keep your (or others') tools working, you need to update them very often, sometimes while fighting a company that tries to obstruct independent research as much as possible. In general, the world of digital media changes much faster than traditional media; there may be a new trendy platform every year. Staying up to date is a real challenge, and we cannot turn a blind eye to any of this.

Precisely for that reason, my third reflection concerns the reliability of the data we use for psychological research. In social psychology especially, students are used to validating their hypotheses with questionnaires and experiments. With those methodologies, measurement error is largely controlled by the investigator, who creates the sample and ensures that the experimental conditions are respected. Big data, by contrast, offers social science the possibility of tracing significant collective dynamics down to single interactions, as long as you can obtain those data and analyze them properly. To seize this opportunity, we analyze databases that were not recorded by us and that lack an experimental environment (for example, when using the Facebook API). This lack of independence can introduce distortions attributable to the standardization imposed by social media platforms, distortions the researcher cannot monitor. Moreover, using APIs without general knowledge of the medium that recorded those data is genuinely dangerous, as the chances of misunderstanding the authentic meaning of the communication we analyze are high.

Even if we do not administer a test directly to subjects, and do not draw conclusions from an experimental set-up, we still need to reproduce scientific rigour when analyzing big data produced by digital media. It is essential to build our own tools so we can create databases independently; it is necessary to know the medium in order to reduce misunderstandings; and all of this is something that we, as psychologists, can learn from a Media Studies approach.

The fourth point is about how digital media implement psychological theory to optimize their design. These platforms use psychology to increase engagement (and profits), while psychologists only rarely use the data stored by those same platforms to improve psychological knowledge. Most of the time, omnipotent multinational corporations play with targeted advertising, escalating into psychological manipulation, while many psychologists struggle to grasp the real potential of those data.

Concrete examples of what we could do include analyzing the hidden effects of the dark patterns Facebook adopts to glue you to the screen; the "Research Personas" method for uncovering the affective charge created by apps like Tinder; and the graphical representation of the personalization process at work in the YouTube algorithm.
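The last of these analyses can be approximated with a simple, hedged sketch (the video IDs below are invented, not real measurements): compare the recommendation lists served to different profiles and measure their overlap, low overlap being one indicator of strong personalization. This is the kind of comparison projects like Tracking Exposed make at scale.

```python
# Hypothetical recommendation lists served to two user profiles
# for the same seed video; the video IDs are invented for illustration.
profile_a = ["v1", "v2", "v3", "v4", "v5"]
profile_b = ["v1", "v6", "v7", "v8", "v9"]

def jaccard(a, b):
    """Jaccard similarity of two lists:
    1.0 = identical recommendations, 0.0 = fully personalized."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

overlap = jaccard(profile_a, profile_b)
print(f"recommendation overlap: {overlap:.2f}")
```

In this invented case the two profiles share only one of nine distinct videos (overlap ≈ 0.11), which, if observed in real data, would suggest the algorithm is serving each profile a largely distinct reality.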

 

In general, I think it is essential for us, as academic psychologists, to test all the possible effects of these new communication platforms, relying not just on the analyses the companies make of themselves: we need to produce independent, public research instead. The fundamental discussion about how to build our collective communication systems should be driven by these kinds of investigations, and should not simply follow, uncritically, whatever is "good" for the companies themselves.

 

Off the Beaten Path: Human rights advocacy to change the Internet infrastructure

Report on Public Interest Internet Infrastructure workshop held at Harvard University in September 2019

by Corinne Cath-Speth and Niels ten Oever

Introduction

Surveillance-based business model[s] force people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse.

Choice words from the latest report published by Amnesty International, in which it considers the human rights implications of Big Tech's extractive business model. Its conclusions are bleak: the terms of service under which we engage in social media and search are diametrically opposed to human rights. This, however, comes as no surprise to the academics and activists who have been highlighting the Internet's negative ramifications over the past decade. In this blog, we present some thoughts on the promises and perils of human rights advocacy aimed at changing computer, rather than legal, code. It draws on insights shared during a two-day workshop on public interest advocacy and design in Internet governance processes, with a particular focus on Internet standards. The workshop, entitled "Future Paths to a Public Interest Internet Infrastructure", took place in the fall of 2019 at the Harvard Kennedy School in Cambridge, Massachusetts. It brought together 26 academics, activists, technologists, civil servants, and private sector representatives from 12 countries.

Concerns at the intersection of Internet governance and society span way beyond—or rather, below—those touching on social media, search engines, or e-commerce. They also include technologies, like Internet standards and protocols, that most of us have never seen but rely on for our day-to-day use of the Internet. The development and governance of these technologies is increasingly subject to the scrutiny of public interest advocates. This is not that surprising given the history of struggles over power, norms, and values that colour the development of global communications infrastructures, like the phone, the telegraph, and Internet standards.

The advocates currently participating in governance and standards bodies are legion: they range from the American Civil Liberties Union (ACLU) to various Centres for Internet and Society to the C-suites of tech companies. Their theories of change are rooted in the idea that digital technologies shape communication in ways that can impede or enable the exercise of rights. Their tactics focus on direct engagement with companies, often through the technical working groups of the key Internet governance organizations. Little, however, is known about these advocacy efforts. Like the standards they focus on, the efforts are largely invisible. The ferocity of the public debate about the Internet's negative impact on society, as well as growing condemnation of industry-led tech ethics efforts, calls for them to be brought to light.

Documenting Workshop Discussions

The discussion at the workshop took us from the very top of the Internet's stack, where our social media and search applications live, to its depths, where sharks chew on Internet cables. We discussed expanding, collapsing, and horizontally and vertically integrating the Internet's stack, and even doing away with the concept altogether. Likewise, we discussed what it means to do public interest advocacy aimed at changing the Internet's infrastructure, what "public interest" entails as a concept, how different stakeholders can be effective advocates of it, and what it takes to study it. We do not aim to provide definitive answers. Rather, we highlight three discussions that show where participants diverged and converged on their respective path(s) towards including public interest considerations in the Internet's infrastructure.

  • Pragmatism and its politics: How and when public interest advocates should team up with colleagues in the private sector or government was a crucial discussion during the workshop. It revealed that cross-industry cooperation often puts public interest advocates between a rock and a hard place: how do you know when cooperation turns into co-optation? Many took a "pragmatist" position, acknowledging that their concerns about tech development often stemmed from core business decisions, which they considered beyond their influence. This, they argued, was nonetheless insufficient reason to write off strategic cooperation to move the technical needle, even if it meant much of their work focused on treating symptoms rather than causes. The turn to pragmatism highlighted an underlying concern: as with most social values, "public interest" means different things to different people. This in turn implies that public interest representatives contend with difficult choices about strategic collaboration not only across sectors but also within them. This tension is both irresolvable and interesting, for the debate and careful articulation of advocacy positions it requires. As one participant optimistically quipped: that is helpful, because now at least I know where you are going and what it takes for us to get there.
  • Shrinking space for civil society: Civil society organisations trying to raise public interest considerations in Internet governance are fighting on multiple fronts. Within Internet governance organisations, they contend with inherent hurdles: the power differentials between corporate and non-commercial participants; the lack of civil society funding for work seen as technically opaque and difficult to explain to funders; the technical learning curve; the lack of consensus among allied organisations; and the confrontational culture of Internet standardisation bodies. At the same time, they operate in the broader context of a shrinking space for civil society. In many countries, the regulatory environment makes it nearly impossible to be an effective civil society organisation. The question then becomes how to grow and sustain civil society participation in the development of the Internet's infrastructure in the face of the internal and external pressures that limit it.
  • What is the endgame? For some, getting the tech right was the main concern. Others argued that this was too narrow an endgame for public interest representation in Internet governance: focusing on the tech is necessary but insufficient. Code, the participants agreed, is not the pinnacle of societal change. For these interventions to have ramifications beyond their direct context, they need to connect to existing work done outside of a limited number of Internet standardisation bodies. Many of the participants were actively creating these necessary connections to other technical communities, by talking to Internet Service Providers (ISPs) and other Internet governance stakeholders. Yet many agreed that ensuring the Internet's infrastructure reflects particular articulations of "the public interest" requires policy as much as protocol intervention.

These three discussions only scratch the surface of the conversations held during the workshop. If you are interested in learning more, please see the full workshop report. The social movements bringing a range of public interest considerations (from civil liberties, to social justice, to human rights) to the Internet infrastructure and its governance processes will keep evolving, like the Internet's infrastructure itself. This blog should thus, as is good practice in academia, engineering, and activism alike, be seen as documentation of known issues and efforts at the current moment rather than as a singular path forward. It provides a departure point from which to develop this conversation further, to include a broader range of stakeholders, network engaged scholars, and practitioners.

The workshop was organised by:

  • Niels ten Oever, DATACTIVE, University of Amsterdam
  • Corinne Cath-Speth, Oxford Internet Institute, Digital Ethics Lab, University of Oxford
  • Beatrice Martini, Digital HKS, Harvard Kennedy School

We would like to thank the Harvard Kennedy School, ARTICLE19, Ford Foundation, MacArthur Foundation, Open Technology Fund, European Research Council, DATACTIVE, and the Amsterdam School for Globalisation Studies for their generous support that made this workshop possible.

 

Internet governance, standards, and infrastructure

Everyday Data: a Workshop Report

By Becky Kazansky and Guillen Torres

Intro

On September 15th 2019, DATACTIVE held a one-day workshop following on the heels of the Data Power conference in Bremen, Germany. We were kindly hosted by the Centre for Media, Communication and Information Research (ZeMKI) of the University of Bremen. Over the course of the day, we sought to create a space to explore and unpack the concept of the ‘everyday’ as it figures into studies of data practices and resistance to datafication. The workshop brought together a small group of interdisciplinary scholars working on issues related to the making and unmaking of datafication, to paraphrase Neal and Murji (2015). Participants came from sociology, anthropology, computer science, media studies, and informatics. Their topics of research include community activism, platform labor, feminist data practices, and the data-resistant practices of states, studying datafication through the respective participation of citizens, governments, corporations, and academia. In this blog post we explain our inspiration for this workshop, and highlight some of the discussions that resulted. We conclude with an invitation for further ideas and contributions.

 

From data activism to everyday data

Since coming together in 2015, the DATACTIVE research group has been engaged in the empirical study of the ‘politics of data according to civil society’. During the past four years, we have interviewed over 200 civil society actors from all over the world, who ‘reactively’ or ‘proactively’ (see: Milan and Van der Velden, 2016) engage with datafication through a myriad of different projects (check our output and blogs for some examples!). Our approach to these data practices was initially guided by the category of data activism, which helped us foreground new types of political activity made possible by the availability of data. We have since observed that the data activist lens holds the potential to draw sharp boundaries between political and non-political engagements with data. Yet, as datafication has become ever more pervasive, and responses to it (including tactics of resistance from different parts of society) ever more varied, it has become harder to pinpoint which practices qualify as activism per se, and which do not.

 

In our research we have encountered many ‘data practices’ that sit within an interzone that blurs hard distinctions between the ‘activist’ and the ‘everyday’. Furthermore, the big and small data-related controversies of the past years have made evident that what is regarded as ‘normal’ or ‘ordinary’ shifts with the diffusion of new technologies, forms of knowledge production, and sociopolitical instabilities (Amoore, 2013). We have also noted that what is considered ‘everyday’ or ‘extraordinary’ fundamentally pivots on the perspective privileged in making this distinction. We have thus grown interested in exploring how the ordinary and everyday should be accounted for in the study of data practices and in our understanding of resistance to the harms of datafication.

 

Much research on the relation between datafication and people’s agency has focused on highly skilled proactive data activists (Gutiérrez and Milan 2019), or on how human agency is overridden by algorithmic decision-making. Taking a slightly different road, we seek to explore how power asymmetries are constantly reproduced or challenged through people’s engagement with data in everyday life. In our view, investigating how datafication is “made and unmade” in everyday life implies foregrounding practices which may not be immediately recognized as data activism, but which still constitute a response that can be understood as political, even if not necessarily classified as such.

As part of our ongoing interest in locating spaces for human agency within datafication, we DATACTIVE project members have engaged in a number of lively internal discussions about how data activism fits with broader conceptualizations of ‘data practices’ (Fotopoulou, 2019), emerging notions of ‘data politics’ (Ruppert et al., 2017), and the imperative to study the ‘everyday’ of datafication (Kennedy, 2018). With the goal of questioning the notion of the “ordinary” amidst continuous optimization (Gürses et al., 2018), creeping surveillance (Monahan, 2010) and perpetually looming states of exception (McQuillan, 2015), we decided to organize a workshop to explore the role of the everyday as a locus of agency, resistance and political intervention.

 

The workshop

We kept the format of the day a bit experimental: rather than requiring participants to produce an original piece for the workshop, we asked them to take their existing work around datafication and reflect upon it through the lens of several exploratory questions:

  • How do everyday acts come to be understood as spaces of political intervention?
  • What are the everyday and banal aspects of “acting on” and “through” data?
  • How does agency evolve in relation to everyday engagement with data?
  • Who determines what is considered the “everyday”?
  • What perspectives are privileged to build the ordinary/extraordinary distinction?
  • How does the ordinary change with the diffusion of new technologies and politics?
  • What happens between the extraordinary moments of political mobilisation that we hear about in the media?

Probing these unwieldy questions in our small pocket of space-time surfaced a number of shared concerns, which we briefly highlight below. 

 

Big P politics and the everyday

Underlying our interest in the everyday is the distinction between The Political (read in an exaggerated Carl Schmitt voice) and politics. During the workshop, this found expression in a collective concern about what we, as researchers, may leave out of sight if we only focus on what seems overtly political. One of the initial intuitions guiding the theme of the workshop was that the distinction between activist and non-activist engagements with data hides a very Political decision that needs to be questioned, and during the discussion this proved to be a key topic. When focusing on everyday experiences of datafication, we, as researchers, are responsible for locating, highlighting and questioning the political consequences of our making data practices (extra)ordinary. This requires a sensibility towards the context and discourses of the people enacting the practices we study, which means that their status as Political/activist depends more on their own lived experiences and less on our analytical categories. The relevance of people’s everyday lived experiences also means that we need to remain attentive to how race, gender, class and politics influence what practitioners, observers and powerful actors understand as Political or ordinary.

 

Marginalized, minoritized, colonized and exploited, but (re)gaining agency.

Slowly but surely, narratives about datafication in which human agency is missing are being challenged. All workshop presentations reflected on the ever-growing number of ways in which people can and already do gain agency through or in relation to data, overcoming governments or companies who, thanks to their privileged access to technology, have turned datafication into a tool particularly suitable for control, oppression, surveillance and exploitation. The responses to this fatalist narrative of datafication are as diverse as the communities who put them forward. Inspired by Dr Seeta Peña Gangadharan’s keynote days earlier at the Data Power conference, we discussed calls to practice (and recognize) small acts of refusal in situations of data harm, as well as the long history of organizing that informs recent calls to abolish unjust data-driven systems. We looked at feminist data practices putting forward alternative versions of datafication to question privilege and oppression. We discussed contemporary modes of worker resistance to the unethical conditions of surveillance capitalism, as well as the forms of ‘resistance’ that can arise from people participating within oppressive structures themselves. The general feeling of the workshop was that the pervasiveness of datafication is making evident a plethora of other spaces and strategies for claiming agency beyond exceptional moments of collective mobilisation and existing categories of explicitly political action.

 

In all these examples, we notice actors who might not fit the label of data activists, yet who very visibly challenge the unjust consequences of datafication in their everyday lives. This is, however, hardly a new phenomenon. Minoritized, marginalized, colonized and exploited communities have always experienced everyday life as a space of political struggle. Workshop participants reflected on the experiences of people of color, rural dwellers attempting to benefit from the perks of digital citizenship, Latin American feminist activists, and data intermediaries working with marginalized city dwellers, amongst others. From these reflections originated questions concerning research ethics and positionality: What role does the ‘agency’ of these communities play in making and unmaking datafication? Where does individual agency fit in relation to governance and accountability for data harms? Is it right to analyze the refusal of actors thought of as more ‘powerful’ through the same lens of resistance as marginalized or harmed communities?

 

Acting on the everyday

Another of our core interests in organizing the Everyday Data workshop was to reflect on the everyday as a space for fostering resistance to the harmful consequences of datafication, and on whether we, as academics, should open it up for examination or leave it alone to prevent its cooptation. During the discussion, this concern took two forms. The first related to how to approach the everyday from our positionality as academics, which implies questioning how notions of the ‘everyday’ are shaped not just by datafication but by the way ‘life’ is ordered and categorized — for example, imagining what the everyday would mean without the implicit structuring of capitalist consumption or labor. The second concern related to the role that research on these issues may play in advocacy. What do we want to see ‘happen’ with our research findings? How can we best support groups seeking just conditions under datafication? These questions are particularly hard when we decide to join the work of the communities we are interested in on their own terms, honoring the specificities of their values and their epistemic contributions, rather than imposing academic frameworks around ‘justice’ and ‘fairness’.

 

Contribute to the discussion

Following the rich discussion of our workshop, we are looking into ways to grow our brainstorm further. To that end, we invite those interested in reflecting upon the everyday dimension of datafication to write for our blog or propose another contribution. Please get in touch directly with Guillen & Becky. 

 

References and further reading

Amoore, L. (2013). The politics of possibility: Risk and security beyond probability. Durham: Duke University Press.

Datafication and Community Activism Workshop Participants (2019), What We Mean When We Say #AbolishBigData2019. In: Medium. Available at: https://medium.com/@rncrooks/what-we-mean-when-we-say-abolishbigdata2019-d030799ab22e.

D’Ignazio, C., & Klein, L. F. (2020). Data Feminism. Cambridge, MA: MIT Press.

Fotopoulou, A. (2019). Understanding citizen data practices from a feminist perspective. Embodiment and the ethics of care. In H. Stephansen & E. Trere (Eds.), Citizen Media and Practice. Oxford: Routledge.

Gutiérrez, M., & Milan, S. (2019). Playing with data and its consequences. First Monday, 24(1). http://dx.doi.org/10.5210/fm.v24i1.9554

Gürses, S., Overdorf, R., & Balsa, E. (2018). POTs: The revolution will not be optimized? 11th Hot Topics in Privacy Enhancing Technologies (HotPETs).

Kennedy, H. (2018). Living With Data: Aligning Data Studies and Data Activism Through a Focus on Everyday Experiences of Datafication. Krisis: Journal for Contemporary Philosophy, 2018(1), 18–30.

Milan, S., & van der Velden, L. (2018). Reversing Data Politics: An Introduction to the Special Issue. Krisis: Journal for Contemporary Philosophy, 2018(1), 1–3.

Milan, S., & Velden, L. van der. (2016). The Alternative Epistemologies of Data Activism. Digital Culture & Society, 2(2). https://doi.org/10.14361/dcs-2016-0205

Neal, S., & Murji, K. (2015). Sociologies of everyday life: Editors’ introduction to the special issue. Sociology, 49(5), 811–819.

Ruppert, E., Isin, E., & Bigo, D. (2017). Data politics. Big Data & Society, 4(2), 205395171771774. https://doi.org/10.1177/2053951717717749

Photo Credit: Telmo32

 

Niels at ECREA: Infrastructures and Inequalities: Media industries, digital cultures and politics

The European Communication Research and Education Association (ECREA) organized a workshop on Infrastructures and Inequalities. Here Niels presented his recent work on an experiment to inscribe legal and ethical norms into the Internet routing infrastructure. The conference helped to further develop the concept of infrastructure, which continues to gain traction in the fields of geography, media studies, anthropology, and science and technology studies.

Niels at Kyiv Biennial on architecture, protocols, routing, power, and control

The topic of the Kyiv Biennial this year is ‘the Black Cloud’. The title recalls the contaminated cloud that traveled over Europe after the Chernobyl disaster, and invites us to reflect on the role of technology. At the Kyiv Biennial, the critical media scholar Svitlana Matviyenko organized a two-day symposium with the title ‘communicative militarism‘. Here Niels spoke about the evolution of power and control in the Internet architecture, the political economy that shapes it, and the threats and opportunities that lie ahead. Other speakers at the symposium were Geert Lovink, Clemens Apprich, Svitlana Matviyenko, and Asia Bazdyrieva.

YouTube Algorithm Exposed: DMI Summer School project week 1

DATACTIVE participated in the first week of the Digital Methods Initiative summer school 2019 with a data sprint related to the side project ALEX. DATACTIVE’s insiders Davide and Jeroen, together with research associate and ALEX’s software developer Claudio Agosti, pitched a project aimed at exploring the logic of YouTube’s recommendation algorithm, using the ALEX-related browser extension youtube.tracking.exposed. ytTREX allows you to produce copies of the set of recommended videos, with the main purpose of investigating the logic of personalization and tracking behind the algorithm. During the week, together with a number of highly motivated students and researchers, we engaged in collective reflection, experiments and analysis, fueled by Brexit talks, Gangnam Style beats, and the secret life of octopuses. Our main findings (previewed below, and detailed later in a wiki report) look into which factors (language settings, browsing behavior, previous views, domain of videos, etc.) help trigger the highest level of personalization in the recommended results.

 

Algorithm Exposed: investigating YouTube – slides