

Stefania speaks at Falling Walls 2018

DATACTIVE PI Stefania Milan is in Berlin on November 8-9 as an invited speaker at the Falling Walls Conference 2018. Falling Walls is an annual science event that coincides with the anniversary of the fall of the Berlin Wall. The one-day scientific conference showcases the research of international scientists from a wide range of fields. Stefania’s presentation will revolve around the theme of data empowerment. Check out the conference program and the description. The event is streamed live.

Stefania will also attend the Falling Walls Circle, whose theme this year is “Human genius in the age of Artificial Intelligence”.

Photo by Kaique Rocha from Pexels

Open Sourcing Open Source Intelligence

In late September, I gave a talk in which I considered the connections between Open Source Intelligence (OSINT) and data activism at the ‘DIGITAL CULTURES: Knowledge / Culture / Technology’ conference at Leuphana University Lüneburg. The presentation asked how OSINT might be understood through the prism of ‘data activist epistemologies’ (Milan and Van der Velden 2016).

The starting point for this interrogation is that Open Source Intelligence, despite its name, appears to have little in common with ‘open source’ cultures as we know them, for example through open source technologies. Open Source Intelligence simply means intelligence, for states or businesses, that is gathered from ‘open’ or publicly available sources. The initial question in the paper is, thus, one of terminology: What is really ‘open source’ about OSINT? And how might a critical interrogation of ‘open source’ change the way we think about OSINT? Hence the title of the talk: ‘Open Sourcing Open Source Intelligence’.

Open source, as a type of data activism, can be described as having its own ‘epistemic culture’, a concept which refers to the diversity in modes of knowledge-making. ‘Epistemic culture’ originally comes from studies of scientific practices, and it directs attention to the ‘specific strategies that generate, validate, and communicate scientific accomplishments’ (Knorr-Cetina and Reichmann 2015, 873). It guides one’s focus toward the complex ‘relationships between experts, organisational formats, and epistemic objects’ (ibid., 873-4).

What we encounter in open source cultures is that knowledge is legitimated not institutionally but technologically: the (open source) software functions as a token of trust. The knowledge is legitimated because the software and the verification model can be reviewed, the methods and many of the findings are shared publicly, public learning is crucial and, ideally, expertise thus becomes distributed.

Open Source Intelligence (OSINT), by contrast, is a practice that seems to belong to – and to be legitimated by – formal and relatively closed institutions such as intelligence agencies. Yet the label can usefully be reclaimed to describe activist projects – such as the Syrian Archive – which seek to put open source tools and principles in the service of a different kind of knowledge-making, one that is genuinely public-oriented and collective. The question thus becomes: What can we learn from the interface between OSINT and open source? What kind of knowledge is being made, how? And how might activist forms of OSINT inform our understanding of data activism broadly speaking?

Stay tuned for the forthcoming paper, which is being co-authored with Jeff Deutch from the Syrian Archive. It will no doubt be enriched by a good discussion with the conference audience.

The abstract for the talk is available through the full conference programme (pp. 215-6).


Lonneke van der Velden is postdoctoral researcher with DATACTIVE and a lecturer at the department of media studies at the University of Amsterdam. Her research deals with internet surveillance and activism. She is part of the editorial board of Krisis, Journal for Contemporary Philosophy, and is on the Board of Directors of Bits of Freedom.   



Knorr Cetina, Karin, and Werner Reichmann (2015) Epistemic cultures, in International Encyclopedia of the Social & Behavioral Sciences, ed. James D. Wright. Amsterdam: Elsevier, pp. 873-880.

Milan, Stefania, and Lonneke Van der Velden (2016) The Alternative Epistemologies of Data Activism. Digital Culture & Society 2(2) pp. 57-74.

Photo by Vladislav Reshetnyak from Pexels

26 October: Noortje Marres and DATACTIVE in conversation on the social science scene today

On 26 October, DATACTIVE hosts the philosopher and science studies scholar Noortje Marres to discuss and problematize the role of social science today. The DATACTIVE team will engage with Marres on chapters of her book Digital Sociology: The Reinvention of Social Research. The exchange is expected to delve into the social sciences from various perspectives derived from team members’ research fields, and will be anchored in the contemporary challenges facing digital societies and beyond.

Marres is Associate Professor in the Centre for Interdisciplinary Methodologies at the University of Warwick and sits on the advisory board of DATACTIVE. Currently, she is a Visiting Professor in the Centre for Science and Technology Studies at the University of Leiden. Her work is located in the interdisciplinary field of Science, Technology and Society (STS).

Photo by Ngai Man Yan from Pexels

Organization After Social Media: orgnets and alternative socio-technical infrastructures

by Lonneke van der Velden

Last month, I was invited to be a respondent (together with Harriet Bergman) for the launch of Geert Lovink and Ned Rossiter’s latest book, Organization After Social Media. The book is a collection of essays which re-interrogate the concerns and contributions of social movements and counter-cultural collectives in light of a significant contemporary problem: the existence of tech-monopolies such as Google and Facebook.

If social media cannot deliver on their promise to help collectives organize, how then should movements proceed? How to make such movements sustainable? The authors invite us to reflect on these issues through the central concept of ‘organized networks’, or ‘orgnets’.

I liked many aspects of the book, but will highlight here two things I found interesting from the perspective of DATACTIVE’s own concerns. The first has to do with a re-evaluation of encryption, and the second with where we search for historical and theoretical lessons to help us organize ‘after social media’.

Re-evaluating encryption?

One thing I read in the book is a re-evaluation of encryption. Encryption is presented, not as an individual intervention, but as an intervention with a potential to allow for the emergence of collective forms. “The trick,” the authors tell us, “is to achieve a form of collective invisibility without having to reconstitute authority” (p. 5).

I think this collective potential of encryption is interesting. Research into activism in the UK after the Snowden revelations (Dencik, Hintz & Cable 2016) showed that digital rights groups tended to operate in a rather specialized space, demarcated from the issues championed by other groups and organizations. Digital rights organizations speak about privacy and freedom of speech, but hardly touch upon other social issues. And vice versa: organizations that work on, for instance, social justice issues tend to regard digital rights as a work package for those specific NGOs that are dedicated to privacy. Encryption does not feature as a strategy that is part of their activist work. This has only partly to do with a lack of knowledge. What’s more, activists told the researchers that they want to do things in public, and using encryption is associated with having something to hide. This is a reductive summary of some of the findings by Dencik and others, but the study provides food for thought about how encryption is often perceived.

What Lovink and Rossiter’s book nicely does is show that this is not the only possible way to conceive of encryption, opening up a different interpretation: not one that stages privacy or security, which is a discourse about protection, but one that foregrounds organized unpredictability, which is a more productive discourse about what encryption has to offer in terms of collective organization. This idea might be more interesting for activist groups: even if they are not interested in hiding, they might well want to remain unsuspected and surprising.

Against the background of the analysis that social media and algorithms make people readable and predictable, infrastructures that help organize unpredictability become important. In fact, from the discussion that followed with the authors during the book launch, it turned out that many of the concerns in the book relate to organizing unpredictability: merging the inventive (as exemplified by tactical media) with a wish for sustainability. How to build digital infrastructures that allow for the disruptive qualities that tactical media had in the past?

Some questions remain. Technologies of encryption are not infrastructures that can emerge out of the blue: they in turn need organized networks and communal work to remain up to date. Together with the audience at the book launch, we had an interesting back and forth about whether a notion of ‘community’, and community struggles, was needed.

Realizing organized networks

Another thing we talked about that evening was the tension between organized networks as a concept and as actually-existing practices. As the authors write: “Organized networks are out there. They exist. But they should still be read as a proposal. This is why we emphasize the design element. Please come on board to collectively define what orgnets could be all about.” (p. 16)

Hence, the authors invite anyone who has been part of an organized network, or thinks they have been part of one, or wishes their network had been more organized, to ‘fill in’ their core concept. That means that much in the book is left open to the inventive powers of orgnet-organizers.

Technological infrastructures are an exception: the book is quite prescriptive in this regard, arguing for example that servers should not be centralized, and that we should prevent the emergence of tech-giants and develop alternative protocols and new spaces for action.

I could not help but wonder about the other kinds of prescription that are not so present in the book. Might we also offer prescriptive accounts with respect to things social movements experience over and over again, such as burnout, internal strife, sexual harassment, and all the things that hinder the sustainability of networks? And shouldn’t we reach out for documentation from, say, social movement studies or feminist histories, in addition to media theory? I am thinking about these in echo of Kersti’s and others’ discussion around community and critical community studies.

All in all, given that the focus of Lovink and Rossiter’s book is on forms of organization ‘after social media’, the choice of focusing on (alternative) socio-technical infrastructures is as understandable as it is valuable in itself. Indeed, it is an issue our research group cares about a lot; we hope to contribute to some of the causes laid out in the book.

The book can be ordered here and is also freely available online in pdf.


Lonneke van der Velden is a lecturer at the department of media studies at the University of Amsterdam. Her research deals with conceptualizations of internet surveillance and internet activism. She is also on the Board of Directors of Bits of Freedom.   


Dencik, Lina, Arne Hintz, and Jonathan Cable. 2016. “Towards Data Justice? The Ambiguity of Anti-Surveillance Resistance in Political Activism.” Big Data & Society 3 (2): 1–12.

Lovink, Geert, and Ned Rossiter. 2018. Organization After Social Media. Minor Compositions.


Stefania at the AoIR 2018 conference, Montreal

DATACTIVE PI Stefania Milan took part in the annual conference of the Association of Internet Researchers in Montreal (Canada), October 10-13. This year’s conference theme was “Transnational materialities”. Among other things, she presented a work in progress, co-authored with Miren Gutierrez (Universidad de Deusto), on the social consequences of engagement with data and data infrastructure. On October 14, she took part in the academic Festschrift celebrating the career of Prof. Marc Raboy. The event, entitled Networking Global Communication in and Beyond the Age of Social Media, took place at McGill University.


NEW article out: “Everyday acts of authoritarianism in the liberal West”, International Journal of Communication

DATACTIVE is happy to announce the publication of the article “Through a Glass, Darkly”: Everyday Acts of Authoritarianism in the Liberal West, co-authored by Arne Hintz (Data Justice Lab, Cardiff University) and Stefania Milan, in the International Journal of Communication. The essay is part of a Special Section on “Authoritarian Practices in the Digital Age”, edited by Marlies Glasius and Marcus Michaelsen, University of Amsterdam. The Special Section brings together nine papers that extend our understanding of the relationship between contemporary forms of authoritarianism and digital communication technologies. The contributions investigate Internet control and censorship, surveillance, and disinformation, presenting insights from China, Russia and Central Asia, Iran, Pakistan, Sub-Saharan Africa, and Western Europe. The articles are available in open access. The abstract of “Through a Glass, Darkly” is below.

“Through a Glass, Darkly”: Everyday Acts of Authoritarianism in the Liberal West

Institutional practices undermining citizen agency and infringing on individual freedoms are typically associated with authoritarian countries. However, they are also proliferating in Western democracies. This article redefines data-based surveillance as a “Western” authoritarian and illiberal practice in the digital realm, resulting from state–industry collaboration and alienated from accountability mechanisms. Straddling critical data studies and surveillance studies, the article explores these dynamics of surveillance in the West by focusing on two dimensions: the institutionalization of governmental practices in law and the societal normalization of surveillance in popular cultural practices. It thus investigates the renegotiation of the boundaries of state power along two axes—top down and bottom up. It connects the notions of “authoritarian and illiberal practices” and “surveillance cultures,” asking how the former are produced, negotiated, and legitimized and reviewing their consequences for citizens and civil society. Based on empirical data from two projects exploring the interplay between citizenship and surveillance, the article argues that acts of authoritarianism in the West are institutionalized at the intersection of top-down governmental practices and bottom-up popular reactions.

Keywords: authoritarian practices, surveillance, surveillance cultures, liberal democracy, Internet freedoms


Why we won’t be at APC 2018

In October 2018, the Amsterdam Privacy Conference (APC) will be back at the University of Amsterdam. Two DATACTIVE project team members, Stefania (Principal Investigator) and Becky (PhD candidate), enthusiastically supported the conference as coordinators of the ‘Digital Society and Surveillance’ theme. The Data Justice Lab at Cardiff University submitted a panel proposal, which was successfully included. Regretfully, neither will take part in the conference: DATACTIVE and the Data Justice Lab have decided to withdraw over the participation of the US-based software company Palantir as one of the APC’s Platinum Sponsors.

Our decision to withdraw stems from an active refusal to legitimize companies accused of enabling human rights abuses, and a concern with the lack of transparency surrounding sponsorship.

Palantir is a company specializing in big data analytics, which develops technologies for the military, law enforcement, and border control. The deployment of Palantir’s technologies has raised widespread concern among civil liberties and human rights advocates. Reporting shows that, in the United States, Palantir has played an important role in enabling the efforts of ICE (Immigration and Customs Enforcement) to identify, detain, and deport undocumented immigrants, refugees, and asylum seekers. This has resulted in the indefinite detention of thousands of children who have been separated from their parents. This indefensible policy has come under strong criticism from the United Nations and prompted an alliance of technology workers and affected communities to call – so far, unsuccessfully – for Palantir to cancel its contracts with ICE.

We feel that providing Palantir with a platform, as a sponsor of a prominent academic conference on privacy, significantly undermines efforts to resist the deployment of military-grade surveillance against migrants and marginalized communities already affected by abusive policing. 

Because we have organized conferences ourselves, we believe transparency in sponsorship agreements is key. While we praise the APC organizing committee for committing to full transparency, we were not informed of sponsorship agreements until the very last minute. The APC Sponsors page, in addition, was only populated after the participant registration deadline. As conference coordinators and prospective participants, we feel that we were not given the chance to make an informed choice about our contribution.

Sponsorship concerns are not a new issue: the very same controversy, around the involvement of this very same company (as well as others), emerged during the 2015 edition of APC. Though we acknowledge the complexity of corporate sponsorship, we note that other prominent tech policy conferences, such as the Computers, Privacy and Data Protection (CPDP) conference, have recently stopped accepting sponsorship from Palantir. We thus believe this is a good moment for a larger discussion about how conferences should be organized in the future.

Academia, and especially publicly funded universities, need to consider their role in efforts to neutralize or undermine human rights concerns. Such considerations are particularly pertinent in the context of what has been described as the increasing neoliberalization of higher education, in which there is significant pressure to attract and pursue funding from different sources. As academics and as citizens, we will increasingly be asked to make choices of this kind. Hence, we believe it is time to set down a clear set of principles for sponsorship going forward.


Amsterdam and Cardiff, 19 September 2018

Stefania Milan and Becky Kazansky (DATACTIVE) & Lina Dencik, Arne Hintz, Joanna Redden, Fieke Jansen (Data Justice Lab)


Welcome to DATACTIVE’s spinoff ALEX! An interview with fbtrex Lead Developer Claudio Agosti

by Tu Quynh Hoang and Stefania Milan

DATACTIVE is proud to announce that its spin-off project ALEX has been awarded a Proof of Concept grant by the European Research Council. ALEX, which stands for “ALgorithms Exposed (ALEX). Investigating Automated Personalization and Filtering for Research and Activism”, aims at unmasking the functioning of personalization algorithms on social media platforms, initially taking Facebook as a test case. ALEX marks the engagement of DATACTIVE with “data activism in practice”, that is to say, turning data into a point of intervention in society.

To mark the occasion, we go public with an interview with Claudio Agosti, DATACTIVE Research Associate and Lead Developer of the fbtrex browser extension, whose open-source code is at the core of the ALEX project. Claudio was interviewed by DATACTIVE Principal Investigator Stefania Milan at the Internet Freedom Festival in Valencia, Spain, in relation to a project on content regulation on/by platforms.

Claudio (also known as vecna) is a self-taught technician in digital security. With the internet gradually becoming a central agent in the political debate, he moved from corporate security services to the defence of human rights in the digital sphere. Currently, he is exploring the influence of algorithms on society. Claudio is the coordinator of the free software projects behind fbtrex, and a Founding Member and Vice-President of the Hermes Center for Transparency and Digital Human Rights.

Stefania: Is the spread of fake news predominantly a technical or social problem?

Claudio: It is a social problem in the sense that the lack of critical judgment in individuals creates the conditions for fake news or misinformation to spread. However, through technology, the dissemination of misinformation is much faster and can scale up. The problem we are facing now is that when the cost of spreading content drops, the possibility for an individual to deliver a successful information operation (or infops, a term I feel is more accurate than propaganda) is higher. However, it isn’t true that people lack critical judgment in absolute terms. At a personal level, one can only be knowledgeable about a limited range of subjects, but the information we receive is very diverse and, most of the time, outside of our domain of expertise. As social media users and information consumers, we should have a way to validate that information. I wonder: what if we knew how to validate it on our own? This does not exist in mainstream news media either. It is possible, for example, on Wikipedia, but anywhere else the way that information is spread implies that information is true on its own. A news report, a blog post or a status update on social media does not contain any information that helps validation. All in all, I think fake news is simultaneously a technical and a political problem, because those who create and spread information have a responsibility towards user expectations, and this also shapes users’ vulnerability to infops.

Stefania: As a developer, what is your main goal with the browser extension?

Claudio: At the moment we don’t have the tools to assess responsibility with respect to infops. If we say that fake news is a social problem because people are gullible, we put the responsibility on users/readers. But it’s also a problem of those publishing the information, who allow themselves to publish incorrect information because they will hardly be held accountable. According to some observers, social media platforms such as Facebook are to blame for the spread of misinformation. We have three actors: the user/reader, the publisher, and the platform. With fbtrex, I’m trying to collect actual data that allows us to reflect on where the responsibilities lie. For example, sometimes Facebook is thought to be responsible when in fact the responsibility is the content publisher’s. And sometimes the publishers are to blame, but are not legally responsible. We want to collect actual data that can help investigate these assumptions. We do so from an external, neutral position.

Stefania: Based on your studies of the spread of information on social media during the recent elections in Argentina and Italy, can you tell us what the role of platforms is, and of Facebook in particular?

Claudio: In the analyses we did in Argentina and Italy, we realized that there are two accountable actors: the publisher and the platform. Some of the publishers are actually spamming users’ timelines, as they produce too many posts per day. I find it hard to believe that they are producing quality content that way. They just aim at occupying users’ timelines to exploit a few seconds of their attention. In my opinion, this is to be considered spam. What we also found is that Facebook’s algorithms are completely arbitrary in deciding what a user is or is not going to see. It’s frightening when we consider that a person who displays some kind of deviant behavior, such as reading and sharing only fascist or racist content, will keep being exposed to even less diverse content. From our investigations of social media content during two heated election campaigns, we have evidence that if a person expresses populist or fascist behavior, the platform is designed to show her less diverse information in comparison to other users, and that can only reinforce her position. We can also argue that the information experience of that person is of lower quality, assuming that maximum information exposure is always to be preferred.

Stefania: So what can users do to fix this problem? 

Claudio: I think users should be empowered to run their own algorithms, and they should have better tools at their disposal to select the sources of their information diets. This also has to become a task of information publishers. Although everybody on social media is both a publisher and a consumer, people who publish as their daily job bear even more responsibility. For example, they should create much more metadata to go along with information, so as to permit the system to better filter and categorize content. Users, on the other hand, should have these tools in hand. When we don’t have that set of metadata, and thus the possibility to define our own algorithm, we have to rely on Facebook’s algorithms. But Facebook’s algorithms implicitly promote Facebook’s agenda and its capitalist imperative of maximizing users’ attention and engagement. For users to have the possibility of defining their own algorithms, we should first of all create the need and the interest to do so, by showing how much of the algorithm is the platform’s agenda and how it can really influence our perception of reality. That is what I’m doing now: collecting evidence about these problems and trying to explain them to a broader audience, raising awareness amongst social media users.

Stefania: Do you think we should involve the government in the process? From your perspective of software developer, do you think we need more regulation?

Claudio: Regulation is really key, because it’s important to keep corporations in check. But I’m afraid there is, among other things, a misunderstanding in making regulations which seem to have direct benefits for people’s lives but might, for example, end up limiting some of the key features of open source software and its distribution. Therefore I’m quite skeptical. I have to say that high-level regulations like the General Data Protection Regulation do not try to regulate the technology but rather its effects, and in particular data usage. They are quite abstract and distant from the technology itself. If regulators want to tell companies what to do and what not to do, I’m afraid that in the democratic competition of the technical field the probability of making mistakes is higher. On the other hand, if we just regulate users’/consumers’ production explicitly, we would end up reinforcing the goals of the data corporations even more. So far, regulations have in fact been exploited by the biggest fish in the market. In this game we can distinguish three political entities: users, companies, and governments. In retrospect, we see that there have been cases where companies have helped citizens against governments and, in some other cases, governments have helped citizens against companies. I hope we can aggregate users and civil society organizations around our project, because that is the political entity most in need of guidance and support.

Stefania: So the solution is ultimately in users?

Claudio: The problem is complex; thus, the solution can’t be found in one of the three entities only. With ALEX we will have the opportunity to re-use our data under policies we determine, and thereby try to produce features which can, at least, offer a new social imaginary.

First of all, we aim at promoting diversity. Fbtrex will provide users with tools for comparing their social media timelines to those of other users, based on mutual sharing agreements which put the users, rather than the company, in the driver’s seat. The goal is to involve and compare a diverse group of users and their timelines across the globe. In so doing, we empower users to understand what is hidden from them on a given topic. Targeted communication and user-defined grouping, as implemented on most social media, lead to a fragmentation of knowledge. Filtered interactions confirming a user’s position have been complicit in this fragmentation. Our approach does not intend to solve these technocratic subterfuges with other technological fixes, but to let users explore the diversity.

In fact, the fragmentation of information and individuals produced by social media has made it even more difficult for users to relate to problems far removed from their reality. How do you understand the problems of migrants, for example, if you have never been away from home yourself, and you don’t spend time in their company? To counter this effect, thanks to the funding of the European Research Council, we will work on an advanced functionality which will… turn the so-called filter bubbles against themselves, so to speak.

Secondly, we want to support delegation and fact-checking, enabling third-party researchers to play a role in the process. The data mined by fbtrex will be anonymized and provided to selected third-party researchers, either individuals or collectives. These will be enabled to contextualize the findings, combine them with other data, and complement them with data obtained through other social science research methods such as focus groups. But, thanks to the innovative data reuse protocols we will devise, at any given moment users, as data producers, will have a say as to whether and how they want to volunteer their data. We will also work to create trusted relationships and networks with researchers and users.

In conclusion, if users want to really be free, they have to be empowered to be able to exercise their freedom. This means: they have to own their own data, run their algorithms, and understand the political implications behind technological decision-making. To resort to a metaphor, this is exactly the difference between dictatorship and democracy: you can believe or accept that someone will do things for your own good like in a dictatorship, or you can decide to assume your share of responsibility, taking things in your hands and trying to do what is best for you while respecting others—which is exactly what democracy teaches us.


ALEX is a joint effort by Claudio Agosti, Davide Beraldo, Jeroen de Vos and Stefania Milan.

See more: the news in Dutch, the press release by the ERC, our project featured in the highlights of the call

Stay tuned for details.

The new website will go live soon!



Stefania at AlgoSov Summit in Copenhagen, 8 September

On the 8th of September Stefania will give a talk at the Algorithmic Sovereignty Summit in Copenhagen, in the framework of the TechFestival, a festival “to find human answers to the big questions of technology”.

The Summit is an initiative of Jaromil Rojo, who also sits on DATACTIVE’s Ethics Advisory Board. The Summit kickstarts the European Observatory on Algorithmic Sovereignty.