
Photo by Kaique Rocha from Pexels

Open Sourcing Open Source Intelligence

In late September, at the ‘DIGITAL CULTURES: Knowledge / Culture / Technology’ conference at Leuphana University Lüneburg, I gave a talk considering the connections between Open Source Intelligence (OSINT) and data activism. The presentation asked how OSINT might be understood through the prism of ‘data activist epistemologies’ (Milan and Van der Velden 2016).

The starting point for this interrogation is that Open Source Intelligence, despite its name, appears to have little in common with ‘open source’ cultures as we know them, for example through open source technologies. Open Source Intelligence simply means intelligence, for states or businesses, that is gathered from ‘open’ or publicly available sources. The initial question in the paper is, thus, one of terminology: What is really ‘open source’ about OSINT? And how might a critical interrogation of ‘open source’ change the way we think about OSINT? Hence the title of the talk: ‘Open Sourcing Open Source Intelligence’.

Open source, as a type of data activism, can be described as having its own associated ‘epistemic culture’, a concept that refers to the diversity of modes of knowledge-making. ‘Epistemic culture’ originally comes from studies of scientific practices, and it directs attention to the ‘specific strategies that generate, validate, and communicate scientific accomplishments’ (Knorr-Cetina and Reichmann 2015, 873). It guides one’s focus toward the complex ‘relationships between experts, organisational formats, and epistemic objects’ (ibid. 873-4).

What we encounter in open source cultures is that knowledge is legitimated not institutionally but technologically: the (open source) software functions as a token of trust. The knowledge is legitimated because the software and the verification model can be reviewed, the methods and many of the findings are shared publicly, public learning is crucial and, ideally, expertise thus becomes distributed.

Open Source Intelligence (OSINT), by contrast, is a practice that seems to belong to – and to be legitimated by – formal and relatively closed institutions such as intelligence agencies. Yet the label can usefully be reclaimed to describe activist projects – such as the Syrian Archive – which seek to put open source tools and principles in the service of a different kind of knowledge-making, one that is genuinely public-oriented and collective. The question thus becomes: What can we learn from the interface between OSINT and open source? What kind of knowledge is being made, how? And how might activist forms of OSINT inform our understanding of data activism broadly speaking?

Stay tuned for the forthcoming paper, co-authored with Jeff Deutch from the Syrian Archive; it will no doubt be enriched by the discussion with the conference audience.

The abstract for the talk is available through the full conference programme (pp. 215-6).


Lonneke van der Velden is a postdoctoral researcher with DATACTIVE and a lecturer in the Department of Media Studies at the University of Amsterdam. Her research deals with internet surveillance and activism. She is part of the editorial board of Krisis, Journal for Contemporary Philosophy, and is on the Board of Directors of Bits of Freedom.


References:

Knorr-Cetina, Karin, and Werner Reichmann (2015) ‘Epistemic Cultures’, in International Encyclopedia of the Social & Behavioral Sciences, ed. James D. Wright. Amsterdam: Elsevier, pp. 873-880.

Milan, Stefania, and Lonneke van der Velden (2016) ‘The Alternative Epistemologies of Data Activism’, Digital Culture & Society 2(2), pp. 57-74.

Photo by Vladislav Reshetnyak from Pexels

26 October: Noortje Marres and DATACTIVE in conversation on the social science scene today

On 26 October, DATACTIVE hosts the philosopher and science studies scholar Noortje Marres to discuss and problematize the role of social science today. The DATACTIVE team will engage with Marres on chapters of her book Digital Sociology: The Reinvention of Social Research. The exchange is expected to approach the social sciences from various perspectives derived from team members’ research fields, and will be anchored in contemporary challenges to digital societies and beyond.

Marres is Associate Professor in the Centre for Interdisciplinary Methodologies at the University of Warwick and sits on the advisory board of DATACTIVE. Currently, she is a Visiting Professor in the Centre for Science and Technology Studies at the University of Leiden. Her work is located in the interdisciplinary field of Science, Technology and Society (STS).

Photo by Ngai Man Yan from Pexels

Organization After Social Media: orgnets and alternative socio-technical infrastructures

by Lonneke van der Velden

Last month, I was invited to be a respondent (together with Harriet Bergman) for the launch of Geert Lovink and Ned Rossiter’s latest book, Organization After Social Media. The book is a collection of essays which re-interrogate the concerns and contributions of social movements and counter-cultural collectives in light of a significant contemporary problem: the existence of tech-monopolies such as Google and Facebook.

If social media cannot deliver on their promise to help collectives organize, how then should movements proceed? How can such movements be made sustainable? The authors invite us to reflect on these issues through the central concept of ‘organized networks’, or ‘orgnets’.

I liked many aspects of the book, but will highlight here two things I found interesting from the perspective of DATACTIVE’s own concerns. The first has to do with a re-evaluation of encryption, and the second with where we search for historical and theoretical lessons to help us organize ‘after social media’.

Re-evaluating encryption?

One thing I took from the book is a re-evaluation of encryption. Encryption is presented not as an individual intervention, but as an intervention with the potential to allow for the emergence of collective forms. “The trick,” the authors tell us, “is to achieve a form of collective invisibility without having to reconstitute authority” (p. 5).

I think this collective potential of encryption is interesting. Research into activism in the UK after the Snowden revelations (Dencik, Hintz & Cable 2016) showed that digital rights groups tended to operate in a rather specialized space, demarcated from issues championed by other groups and organizations. Digital rights organizations speak about privacy and freedom of speech, but hardly touch upon other social issues. And vice versa: organizations that work on, for instance, social justice issues tend to regard digital rights as a brief reserved for those specific NGOs dedicated to privacy. Encryption does not feature as a strategy within their own activist work. This has only partly to do with a lack of knowledge. What’s more, activists told the researchers that they want to do things in public, and using encryption is associated with having something to hide. This is a reductive summary of some of the findings by Dencik and others, but the study provides food for thought about how encryption is often perceived.

What Lovink and Rossiter’s book nicely does is show that this is not the only possible way to conceive of encryption, opening up a different interpretation: not one that stages privacy or security, which is a discourse about protection, but one that foregrounds organized unpredictability, a more productive discourse about what encryption has to offer in terms of collective organization. This idea might be more interesting for activist groups: even if they are not interested in hiding, they might well want to remain unsuspected and surprising.

Against the background of the analysis that social media and algorithms make people readable and predictable, infrastructures that help organize unpredictability become important. In fact, from the discussion that followed with the authors during the book launch, it turned out that many of the concerns in the book relate to organizing unpredictability: merging the inventive (as exemplified by tactical media) with a wish for sustainability. How to build digital infrastructures that allow for the disruptive qualities that tactical media had in the past?

Some questions remain. Technologies of encryption are not infrastructures that can emerge out of the blue: they in turn need organized networks and communal work to remain up to date. Together with the audience at the book launch, we had an interesting back and forth about whether a notion of ‘community’, and community struggles, was needed.

Realizing organized networks

Another thing we talked about that evening was the tension between organized networks as a concept and as actually-existing practices. As the authors write: “Organized networks are out there. They exist. But they should still be read as a proposal. This is why we emphasize the design element. Please come on board to collectively define what orgnets could be all about.” (p. 16)

Hence, the authors invite anyone who has been part of an organized network, or thinks they have been part of one, or wishes their network had been more organized, to ‘fill in’ their core concept. That means that much is left open in the book to the inventive powers of orgnet-organizers.

Technological infrastructures are an exception: the book is quite prescriptive in this regard, arguing for example that servers should not be centralized, and that we should prevent the emergence of tech-giants and develop alternative protocols and new spaces for action.

I could not help but wonder about the other kinds of prescription that are less present in the book. Might we also offer prescriptive accounts with respect to things social movements experience over and over again, such as burnout, internal strife, sexual harassment, and other things that hinder the sustainability of networks? And shouldn’t we reach for documentation from, say, social movement studies or feminist histories, in addition to media theory? I am thinking of these in echo of Kersti’s and others’ discussion around community and critical community studies.

All in all, given that the focus of Lovink and Rossiter’s book is on forms of organization ‘after social media’, the choice of focusing on (alternative) socio-technical infrastructures is as understandable as it is valuable in itself. Indeed, it is an issue our research group cares about a lot; we hope to contribute to some of the causes laid out in the book.

The book can be ordered here and is also freely available online in pdf.


Lonneke van der Velden is a lecturer in the Department of Media Studies at the University of Amsterdam. Her research deals with conceptualizations of internet surveillance and internet activism. She is also on the Board of Directors of Bits of Freedom.


Dencik, Lina, Arne Hintz, and Jonathan Cable. 2016. “Towards Data Justice? The Ambiguity of Anti-Surveillance Resistance in Political Activism.” Big Data & Society 3 (2): 1–12. https://doi.org/10.1177/2053951716679678.

Lovink, Geert, and Ned Rossiter. 2018. Organization after Social Media. Minor Compositions.


Why we won’t be at APC 2018

In October 2018, the Amsterdam Privacy Conference (APC) will be back at the University of Amsterdam. Two DATACTIVE project team members, Stefania (Principal Investigator) and Becky (PhD candidate), enthusiastically supported the conference as coordinators of the ‘Digital Society and Surveillance’ theme. The Data Justice Lab at Cardiff University submitted a panel proposal, which was successfully included. Regretfully, neither group will take part in the conference: DATACTIVE and the Data Justice Lab have decided to withdraw over the participation of the US-based software company Palantir as one of the APC’s Platinum Sponsors.

Our decision to withdraw stems from an active refusal to legitimize companies accused of enabling human rights abuses, and a concern with the lack of transparency surrounding sponsorship.

Palantir is a company specializing in big data analytics, which develops technologies for the military, law enforcement and border control. The deployment of Palantir’s technologies has raised widespread concern among civil liberties and human rights advocates. Reporting shows that, in the United States, Palantir has played an important role in enabling the efforts of ICE (Immigration and Customs Enforcement) to identify, detain, and deport undocumented immigrants, refugees, and asylum seekers. This has resulted in the indefinite detention of thousands of children who have been separated from their parents. This indefensible policy has come under strong criticism from the United Nations and prompted an alliance of technology workers and affected communities to call – so far, unsuccessfully – for Palantir to cancel its contracts with ICE.

We feel that providing Palantir with a platform, as a sponsor of a prominent academic conference on privacy, significantly undermines efforts to resist the deployment of military-grade surveillance against migrants and marginalized communities already affected by abusive policing. 

Because we have organized conferences ourselves, we believe transparency in sponsorship agreements is key. While we praise the APC organizing committee for committing to full transparency, we were not informed of the sponsorship agreements until the very last minute. The APC Sponsors page, in addition, was only populated after the participant registration deadline. As conference coordinators and prospective participants, we feel that we were not given the chance to make an informed choice about our contribution.

Sponsorship concerns are not a new issue: the very same controversy, around the involvement of this very same company (as well as others), emerged during the 2015 edition of APC. Though we acknowledge the complexity of corporate sponsorship, we note that other prominent tech policy conferences, such as the Computers, Privacy and Data Protection (CPDP) conference, have recently stopped accepting sponsorship from Palantir. We thus believe this is a good moment for a larger discussion about how conferences should be organized in the future.

Academic institutions—and especially publicly funded universities—need to consider their role in efforts to neutralize or undermine human rights concerns. Such considerations are particularly pertinent in the context of what has been described as the increased neoliberalization of higher education, in which there is significant pressure to attract and pursue funding from different sources. As academics and as citizens, we will increasingly be asked to make choices of this kind. Hence, we believe it is time to set down a clear set of principles for sponsorship going forward.


Amsterdam and Cardiff, 19 September 2018

Stefania Milan and Becky Kazansky (DATACTIVE) & Lina Dencik, Arne Hintz, Joanna Redden, Fieke Jansen (Data Justice Lab)


Data Colonialism – the first article of the Special Issue on “Big Data from the South” is out

Photo by London School of Economics Library and Political Science

Nick Couldry and Ulises A. Mejias re-frame the Big Data from the South debate within the context of modern-day colonialism: data colonialism, an alarming stage in which human life is “appropriated through data” and, eventually, “capitalized without limit”.

This essay marks the beginning of a series of articles in a special issue on Big Data from the South, edited by Stefania Milan and Emiliano Treré and published in the journal Television & New Media. The article will be freely accessible for the first month, so we encourage you to put it high up on your to-read list.

The special issue promises interesting takes and approaches from renowned scholars and experts in the field, such as Angela Daly and Monique Mann, Payal Arora, Stefania Milan and Emiliano Treré, Jean-Marie Chenou and Carolina Cepeda, Paola Ricaurte Quijano, Jacobo Najera and Jesús Robles Maloof, with a special commentary by Anita Say Chan. Stay tuned for our announcements of these articles as they come out.


Welcome to DATACTIVE’s spinoff ALEX! An interview with fbtrex Lead Developer Claudio Agosti

by Tu Quynh Hoang and Stefania Milan

DATACTIVE is proud to announce that its spin-off project ALEX has been awarded a Proof of Concept grant by the European Research Council. ALEX, which stands for “ALgorithms Exposed (ALEX). Investigating Automated Personalization and Filtering for Research and Activism”, aims at unmasking the functioning of personalization algorithms on social media platforms, initially taking Facebook as a test case. ALEX marks the engagement of DATACTIVE with “data activism in practice”, that is to say, turning data into a point of intervention in society.

To mark the occasion, we go public with an interview with Claudio Agosti, DATACTIVE Research Associate and Lead Developer of facebook.tracking.exposed browser extension (fbtrex), whose open-source code is at the core of the ALEX project. Claudio was interviewed by DATACTIVE Principal Investigator Stefania Milan at the Internet Freedom Festival in Valencia, Spain, in relation to a project on content regulation on/by platforms.

Claudio (also known as vecna) is a self-taught technician in digital security. With the internet gradually becoming a central agent in the political debate, he moved from corporate security services to the defence of human rights in the digital sphere. Currently, he is exploring the influence of algorithms on society. Claudio is the coordinator of the free software projects behind https://tracking.exposed and a Founding Member and Vice-President of the Hermes Center for Transparency and Digital Human Rights.

Stefania: Is the spread of fake news predominantly a technical or social problem?

Claudio: It is a social problem in the sense that the lack of critical judgment in individuals creates the conditions for fake news or misinformation to spread. However, through technology, the dissemination of misinformation is much faster and can scale up. The problem we are facing now is that when the cost of spreading content drops, the possibility for an individual to deliver a successful information operation (or infops, a term I find more accurate than propaganda) is higher. However, it isn’t true that people lack critical judgment in absolute terms. At a personal level, one can only be knowledgeable about a limited range of subjects, but the information we receive is very diverse and, most of the time, outside our domain of expertise. As social media users and information consumers, we should have a way to validate that information. What if we could validate it on our own? This does not exist in mainstream news media either. It is possible, for example, on Wikipedia, but anywhere else the way information is spread implies that it is true on its own. A news report, a blog post or a status update on social media does not contain any information that helps validation. All in all, I think fake news is simultaneously a technical and a political problem, because those who create and spread information have a responsibility towards user expectations, and this also shapes users’ vulnerability to infops.

Stefania: As a developer, what is your main goal with the facebook.tracking.exposed browser extension?

Claudio: At the moment we do not have the tools to assess responsibility with respect to infops. If we say that fake news is a social problem because people are gullible, we put the responsibility on users/readers. But it is also a problem of those publishing the information, who allow themselves to publish incorrect information because they will hardly be held accountable. According to some observers, social media platforms such as Facebook are to be blamed for the spread of misinformation. We have three actors: the user/reader, the publisher, and the platform. With facebook.tracking.exposed, I’m trying to collect actual data that allows us to reflect on where the responsibilities lie. For example, sometimes Facebook is thought to be responsible when in fact the responsibility is the content publisher’s. And sometimes the publishers are to blame, but are not legally responsible. We want to collect actual data that can help investigate these assumptions. We do so from an external, neutral position.

Stefania: Based on your studies of the spread of information on social media during the recent elections in Argentina and Italy, can you tell us what the role of platforms is, and of Facebook in particular?

Claudio: In the analyses we did in Argentina and Italy, we realized that there are two accountable actors: the publisher and the platform. Some publishers are actually spamming users’ timelines by producing too many posts per day. I find it hard to believe that they are producing quality content that way; they just aim at occupying users’ timelines to exploit a few seconds of their attention. In my opinion, this is to be considered spam. What we also found is that Facebook’s algorithms are completely arbitrary in deciding what a user is or is not going to see. It is frightening when we consider that a person who displays some kind of deviant behavior, such as reading and sharing only fascist or racist content, will keep being exposed to even less diverse content. From our investigations of social media content during two heated election campaigns, we have evidence that if a person expresses populist or fascist behavior, the platform is designed to show her less diverse information in comparison to other users, and that can only reinforce her position. We can also argue that the information experience of that person is of lower quality, assuming that maximum exposure to information is always to be preferred.

Stefania: So what can users do to fix this problem? 

Claudio: I think users should be empowered to run their own algorithms, and they should have better tools at their disposal to select the sources of their information diets. This must also become a task for information publishers. Although everybody on social media is both a publisher and a consumer, people who publish as their daily job bear more responsibility. For example, they should create much more metadata to go along with information, so as to permit the system to better filter and categorize content. Users, on the other hand, should have these tools in hand. When we don’t have that set of metadata, and thus the possibility to define our own algorithm, we have to rely on Facebook’s algorithms. But Facebook’s algorithms implicitly promote Facebook’s agenda and its capitalist imperative of maximizing users’ attention and engagement. For users to have the possibility of defining their own algorithms, we should first of all create the need and the interest to do so, by showing how much of the algorithm reflects the platform’s agenda and how much it can really influence our perception of reality. That is what I’m doing now: collecting evidence about these problems and trying to explain them to a broader audience, raising awareness amongst social media users.

Stefania: Do you think we should involve government in the process? From your perspective as a software developer, do you think we need more regulation?

Claudio: Regulation is really key, because it is important to keep corporations in check. But I’m afraid there is, among other things, a misunderstanding at work: regulations that seem to bring direct benefits to people’s lives might, for example, end up limiting some of the key features of open source software and its distribution. Therefore I’m quite skeptical. I have to say that high-level regulations like the General Data Protection Regulation do not try to regulate the technology itself, but rather its effects, and in particular data usage; they remain quite abstract and distant from the technology. If regulators want to tell companies what to do and what not to do in such a fast-moving technical field, I’m afraid the probability of making mistakes is higher. On the other hand, if we just regulate what users/consumers may produce, we would end up reinforcing the goals of the data corporations even more. So far, regulations have in fact been exploited by the biggest fish in the market. In this game we can distinguish three political entities: users, companies, and governments. In retrospect, we see that there have been cases where companies have helped citizens against governments and, in other cases, governments have helped citizens against companies. I hope we can aggregate users and civil society organizations around our project, because that is the political entity most in need of guidance and support.

Stefania: So the solution is ultimately in users?

Claudio: The problem is complex, and thus the solution cannot be found in only one of the three entities. With ALEX we will have the opportunity to re-use our data under policies we determine, and therefore to try to produce features which can, at least, offer a new social imaginary.

First of all, we aim at promoting diversity. Fbtrex will provide users with tools for comparing their social media timelines to those of other users, based on mutual sharing agreements which put the users—rather than the company—in the driver’s seat. The goal is to involve and compare a diverse group of users and their timelines across the globe. In so doing, we empower users to understand what is hidden from them on a given topic. Targeted communication and user-defined grouping, as implemented on most social media, lead to a fragmentation of knowledge. Filtered interactions confirming a user’s position have been complicit in this fragmentation. Our approach does not intend to solve these technocratic subterfuges with further technological fixes, but to let users explore diversity.
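To make the idea of mutual timeline comparison more concrete, here is a toy sketch in Python. It does not use fbtrex’s actual data model or code: the hidden_from helper and the sets of post IDs are assumptions introduced purely for illustration.

# Toy model: each user's timeline is the set of post IDs their
# browser extension observed. A mutual sharing agreement lets two
# users compute what each of them was never shown.

def hidden_from(mine, theirs):
    """Posts the other user saw that never appeared in my timeline."""
    return theirs - mine

alice = {"post-1", "post-2", "post-3"}
bob = {"post-2", "post-4"}

print(hidden_from(alice, bob))  # {'post-4'}: shown to Bob, hidden from Alice

In this spirit, the comparison is symmetric: each party learns only what the other agreed to share, which is what puts users rather than the platform in control of the exchange.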

In fact, the fragmentation of information and individuals produced by social media has made it even more difficult for users to relate to problems far removed from their reality. How do you understand the problems of migrants, for example, if you have never been away from home yourself, and you don’t spend time in their company? To counter this effect, thanks to the funding of the European Research Council, we will work on an advanced functionality which will… turn the so-called filter bubbles against themselves, so to speak.

Secondly, we want to support delegation and fact-checking, enabling third-party researchers to play a role in the process. The data mined by fbtrex will be anonymized and provided to selected third-party researchers, either individuals or collectives. These will be enabled to contextualize the findings, combine them with other data and complement them with data obtained through other social science research methods, such as focus groups. But, thanks to the innovative data reuse protocols we will devise, at any given moment users, as data producers, will have a say as to whether and how they want to volunteer their data. We will also work to create trusted relationships and networks between researchers and users.

In conclusion, if users want to really be free, they have to be empowered to exercise their freedom. This means: they have to own their data, run their own algorithms, and understand the political implications behind technological decision-making. To resort to a metaphor, this is exactly the difference between dictatorship and democracy: you can believe or accept that someone will do things for your own good, as in a dictatorship, or you can decide to assume your share of responsibility, taking things into your own hands and trying to do what is best for you while respecting others—which is exactly what democracy teaches us.

***

ALEX is a joint effort by Claudio Agosti, Davide Beraldo, Jeroen de Vos and Stefania Milan.

See more: the news in Dutch, the press release by the ERC, our project featured in the highlights of the call

Stay tuned for details.

The new website https://algorithms.exposed will go live soon!



BigBang v0.2.0 ‘Tulip Revolution’ released

DATACTIVE has been collaborating with researchers from New York University and the University of California at Berkeley to release version 0.2.0 of BigBang, a piece of software for the quantitative analysis of mailing lists. Mailing lists are among the most widely used communication tools in Internet governance institutions and among software developers. They therefore lend themselves very well to analyzing how communities develop, and how topics of discussion emerge and propagate through a community. BigBang, a Python-based tool, facilitates exactly this. You can start analyzing mailing lists with BigBang by following the installation instructions.
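By way of illustration, the snippet below shows the simplest kind of question BigBang helps answer – who posts how often on a list – using only Python’s standard mailbox module rather than BigBang’s own API; the archive path is a placeholder for any locally downloaded mbox archive.

import mailbox
from collections import Counter

def activity_per_sender(mbox_path):
    """Count messages per From: address in a local mbox archive."""
    counts = Counter()
    for message in mailbox.mbox(mbox_path):
        counts[message.get("From", "unknown")] += 1
    return counts

# 'list-archive.mbox' stands in for any downloaded mailing list archive.
for sender, n in activity_per_sender("list-archive.mbox").most_common(10):
    print(n, sender)

BigBang builds on the same raw material – archived messages and their headers – but adds ingest for IETF and ICANN archives, notebook-based analysis, and higher-level measures such as the gender participation estimation mentioned below.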

This release, BigBang v0.2.0 Tulip Revolution, marks a new milestone in BigBang development. A few new features:
– Gender participation estimation
– Improved support for IETF and ICANN mailing list ingest
– Extensive gardening and upgrade of the example notebooks
– Upgraded all notebooks to Jupyter 4
– Improved installation process based on user testing

En route to this milestone, the BigBang community made a number of changes to its procedures. These include:

– The adoption of a Governance document for guiding decision-making.
– The adoption of a Code of Conduct establishing norms of respectful behavior within the community.
– The creation of an ombudsteam for handling personal disputes.

For this milestone we have also adopted, by community decision, the GNU Affero General Public License v3.0.

If you have any questions or comments, feel free to join the mailing list, join us on Gitter chat, or file an issue on GitHub.

If you are interested in using BigBang but don’t know where to start, we are happy to help you on your way via video chat or to organize a webinar for you and your community. Feel free to get in touch!


[blog] Growth for Critical Studies? Social scholars, let’s be shrewder

Author: Miren Gutierrez
This is a response to the call for critical community studies, ‘Tech, data and social change: A plea for cross-disciplinary engagement, historical memory, and … Critical Community Studies’ by Kersti Wissenbach, and to the first contribution to the debate, ‘Can We Plan Slow – But Steady – Growth for Critical Studies?’ by Charlotte Ryan.
Commenting on the thought-provoking blogs by Charlotte Ryan and Kersti Wissenbach, I feel in good company. Both speak of the need for research to address the inequalities embedded in technology, of the critical role that communities play in remedying dominant techno-centric discourses and practices, and of the idea of new critical community studies: the need to place people and communities at the centre of our activity as researchers and practitioners, to ask questions about communities instead of about technologies, to demand a stronger collaboration between the two, and to face the challenges that this approach generates.
Their blogs prompt different but related ideas.

First, different power imbalances can be found in scholarship. Wissenbach suggests that dominant discourses in academia, as well as in practice and donor agendas, are driving the technology hype. But as Ryan proposes, academia is not a homogeneous terrain.

Speaking always from the point of view of critical data studies, the current predominant techno-centrism seems to be diverting research funding towards applied sciences, engineering and tools (what Wissenbach calls “the state of technology” and Ryan refers to as a “profit-making industry”). Writing about Canada, Srigley describes how, for a while, even the Social Sciences and Humanities Research Council of Canada “fell into line by focusing its funding on business-related degrees. All the while monies for teaching and research in the humanities, social sciences, and sciences with no obvious connection to industry, which is to say, most of it, began to dry up” (Srigley 2018). Srigley seems to summarise what is happening everywhere: “sponsored research”, and institutions requiring that research be linked to business and industry partners, appear to be the current mantra.

Other social scholars around me are coming up with similar stories: social sciences focusing critically on data are getting a fraction of the research funding opportunities vis-à-vis computational data-enabled science and engineering within the fields of business and industry, environment and climate, materials design, robotics, mechanical and aerospace engineering, and biology and biomedicine. Meanwhile, critical studies on data justice, governance and how ordinary people, communities and non-governmental organisations experience and use data are left for another day.
Thus, the current infatuation with technology appears not to be evenly distributed across donors and academia.

Second, I could not agree more with Wissenbach and Ryan when they say that we should take communities as entry points in the study of technology for social change. Wissenbach further argues against the objectification of “communities”, calling for genuinely needs-driven, engaged research that is more aligned with practice.

Then again, here lies another imbalance. Even if scholars work alongside practitioners to bolster new critical community studies, these actors are not in the same position. We, social scholars, are gatekeepers of what is known in academia; we are often more skilful in accessing funds; we dominate the lingo. Inclusion therefore lies at the heart of this argument and remains challenging.

If funds for critical data studies are not abundant, resources to put in place data projects with social goals and more practice-engaged research are even scarcer. That is, communities facing inequalities may find themselves competing for resources not only within their circles (as Ryan suggests). Speaking also as a data activist involved in projects that look at illegal fishing’s social impacts on coastal communities in developing countries (and trying hard to fundraise for them), I think we must make sure that more funding for data activism research does not mean less funding for data activist endeavours. I know they are not the same funds, but there are ways in which research could foster practice, and one of them is precisely putting communities at the centre.

Third, another divide lies beneath the academy’s resistance to engaged scholarship. While the so-called “hard sciences” have no problem with “engaging”, some scholars in the “soft sciences” seem to recoil from it. Even if few people still support Chris Anderson’s “end of theory” musings (Anderson 2008), some techno-utopians pretend that a state of asepsis exists, or at least is possible now, in the age of big data. They could not be more misleading. What can be more “engaged scholarship” than “sponsored research”? Research driven and financed by companies is necessarily “engaged” with the private sector and its interests, but rarely acknowledges its own biases. Meanwhile, five decades after Robert Lynd asked “Knowledge for what?” (Lynd 1967), the question still looms over the social sciences. Some social scientists shy away from causes and communities just in case they start marching into the realm of advocacy and any pretensions of “objectivity” disappear. Since we know that computational data-enabled science and engineering cannot be “objective” either, why not accept and embrace engaged scholarship in the social sciences, as long as we are transparent about our prejudices and systematically critical about our assumptions?

Fourth, data activism scholars have to be smarter in communicating findings and influencing discourses. Our lack of influence is not all attributable to market trends and donors’ obsessions; it is also of our own making. Currently, the stories of data success and progress come mostly from the private sector, and even when prevailing techno-enthusiastic views are contested, prominent criticism comes from the same quarters. An example is Bernard Marr’s article “Here’s Why Data Is Not the New Oil”. Marr does not mention the obvious: that data are not natural resources, spontaneous and inevitable, but cultural ones, “made” in processes that are also “made” (Boellstorff 2013). In his article, Marr refers only to certain characteristics that make data different from oil; for example, while fossil fuels are finite, data are “infinitely durable and reusable”. Is that all that donors, and publics, need to know about data? Although this debate has grown over the last years, is it reaching donors’ ears? Not long ago, at the Datamovida conference organised by Vizzuality in Madrid in 2016, a fellow speaker – Aditya Agrawal from the Open Data Institute and the Global Partnership for Sustainable Development Data – opened his presentation saying precisely that data were “the new oil”. If key data people in the UN system have not caught up with the main ideas emerging from critical data studies, we are in trouble, and it is partly of our own making.

This last argument is closely related to the other ideas in this blog. The more we can influence policy, public opinion, decision-makers and processes, the more resources data activism scholars can gather to work alongside practitioners in exploring how people and organisations appropriate data and their processes, create new data relations and reverse dominant discourses. We cannot be content with publishing a few blogs, getting our articles into indexed journals and meeting once in a while at congresses that seldom resonate beyond our privileged bubbles. Both Wissenbach and Ryan argue for stronger collaborations and direct community engagement; but this is not the rule in the social sciences.

Making an effort to reach broader publics could be a way to break the domination that, as Ryan says, brands, market niches and revenue streams seem to exert on academic institutions. Academia is a bubble, but not an entirely hermetic one. And even if critical community studies will never be a “cash cow”, they could be influential. There are other critical voices, in the field of journalism for example, that have denounced a sort of obsession with technology (Kaplan 2013; Rosen 2014). Maybe critical community studies should embrace not only involved communities and scholars but also other critical voices from journalism, donors and other fields. The collective “we” that Ryan talks about could be even more inclusive. And to do that, we have to expand beyond the usual academic circles, which is exactly what Wissenbach and Ryan contend.

I do not know what critical community studies could look like; I hope this is the start of a conversation. In April, donors, platform developers, data activists and journalists met in Madrid at the “Big Data for the Social Good” conference, organised by my programme at the University of Deusto and focused on what works and what does not in critical data projects. The more we expand this type of debate, the more influence we could gain.

Finally, the message emerging from critical data studies cannot be only about dataveillance (van Dijck 2014) and ways of data resistance. However imperfect and biased, the data infrastructure is enabling ordinary people and organised society to produce diagnoses of, and solutions to, their problems (Gutiérrez 2018). Engaged research means we need to look at what communities do with data and how they experience the data infrastructure, not only at how communities contest dataveillance, which I have the feeling has dominated critical data studies so far. Yes, we have to acknowledge that these technologies are often shaped, before communities use them, by external actors with vested interests, and that they embed power imbalances. But if we want to capture people’s and donors’ imagination, the stories of data success and progress within organised and non-organised society should be told by social scholarship as well. Paraphrasing Ryan, we may lose but live to fight another day.

Cited work
Anderson, Chris. 2008. ‘The End of Theory: The Data Deluge Makes the Scientific Method Obsolete’. Wired. https://www.wired.com/2008/06/pb-theory/.
Boellstorff, Tom. 2013. ‘Making Big Data, in Theory’. First Monday 18 (10). http://firstmonday.org/article/view/4869/3750.
van Dijck, José. 2014. ‘Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology’. Surveillance & Society 12 (2): 197–208.
Gutierrez, Miren. 2018. Data Activism and Social Change. Pivot. London: Palgrave Macmillan.
Kaplan, David E. 2013. ‘Why Open Data Isn’t Enough’. Global Investigative Journalism Network (GIJN). 2 April 2013. http://gijn.org/2013/04/02/why-open-data-isnt-enough/.
Lynd, Robert Staughton. 1967. Knowledge for What: The Place of Social Science in American Culture. Princeton: Princeton University Press. http://onlinelibrary.wiley.com/doi/10.1525/aa.1940.42.1.02a00250/pdf.
Rosen, Larry. 2014. ‘Our Obsessive Relationship With Technology’. Huffington Post, 2014. https://www.huffingtonpost.com/dr-larry-rosen/our-obsession-relationshi_b_6005726.html?guccounter=1.
Srigley, Ron. 2018. ‘Whose University Is It Anyway?’ Los Angeles Review of Books. https://lareviewofbooks.org/article/whose-university-is-it-anyway/#_ednref37.


[blog] Making ‘community’ critical: Tech collectives through the prism of power

Author: Fabien Cante

In her recent blog post, Kersti Wissenbach expresses her frustration with the field of “civic tech,” which, as she puts it, remains far more focused on the “tech” than the “civic.” This resonates with me in many ways. I write as someone who is possibly more of an outsider to the field than Wissenbach: my previous research was on local radio (all analog), and as my DATACTIVE colleagues have found out, I am clueless about even the basics of encryption, so anything more technically complex will leave me flummoxed. In agreeing with Wissenbach, then, I do not mean to diminish the wonders of tech itself, as a field of knowledge and intervention, but rather underscore that the civic (or “the social,” as my former supervisor Nick Couldry would put it) is itself an immensely complex realm of knowledge, let alone action.

Wissenbach proposes that our thinking efforts, as scholars and activists concerned about the relations between technology and social change, shift toward “Critical Community Studies.” By this she means that we should ask “critical questions beyond technology and about communities instead.” I strongly agree. The research projects around data, tech and society that most excite me are the ones that are rooted in some community or other – transnational activist communities, in the case of DATACTIVE, or marginalised urban communities, in the case of the Our Data Bodies project. However, like Charlotte Ryan (whose response to Wissenbach can be read here), I would also like to be a bit cautious. In what follows, I really emphasise the critical and contextual aspects of Critical Community Studies, as envisioned by Wissenbach. I do so because I am a bit sceptical about the middle term – community.

I am involved in urban planning struggles in south London where the word “community” is frequently employed. Indeed, it serves as a kind of talisman: it invokes legitimacy and embeddedness. Community is claimed by activists, local authorities, and even developers, for obviously very different aims. This experience has shown me that, politically, community is an empty signifier. Bullshit job titles like “Community Manager” in marketing departments (see also Mark Zuckerberg speeches) further suggest to me that community is one of the most widely misappropriated words of our time.

More seriously, and more academically perhaps, community denotes a well-defined and cohesive social group, based on strong relationships, and as such self-evident for analysis. This is not, in many if not most circumstances, what collectives actually look like in real life. Anthropologist John Postill (2008), studying internet uptake in urban Malaysia, writes that researchers too often approach tech users as either “communities” or “networks.” Neither of these concepts captures how technology is woven into social relations. Where community presumes strong bonds and a shared identity, network reduces human relations to interaction frequencies and distance between nodes, flattening power differentials.

As Wissenbach rightly notes, people who use tech, either as producers or users, are “complex [beings] embedded in civil society networks and power structures.” It is these power structures, and the often tense dynamics of embeddedness, that Wissenbach seems to find most interesting – and I do too. This, for me, is the vital question behind Critical Community Studies (or, for that matter, the study of data activism): what specific power relations do groups enact and contest?

Still from Incoming (2017), by Richard Mosse – http://www.richardmosse.com/projects/incoming

The critical in Critical Community Studies thus asks tough questions about race, class, gender, and other lines of inequality and marginalization. It asks how these lines intersect both in the community under study (in the quality of interactions, the kinds of capital required to join the collective, language, prejudices, etc.) and beyond it (in wider patterns of inequality, exclusion, and institutionalized domination). We see examples of such questioning happening, outside academia, through now widespread feminist critiques calling out pervasive gender inequalities in the tech industry, or through Data for Black Lives’ efforts to firmly center race as a concern for digital platforms’ diversity and accountability. Within the university, Seda Gürses, Arun Kundnani and Joris Van Hoboken’s (2016) paper “Crypto and Empire,” which could be said to examine the “crypto community” (however diffuse), offers some brilliant avenues to think data/tech communities critically, and thereby “re-politicize” data itself. More broadly, a wealth of feminist, post/decolonial (e.g. Mignolo 2011; Bhambra 2014; or Flavia Dzodan’s stellar Twitter feed) and critical race theory (see for example Browne 2015) can help us think through the histories from which civic tech communities arise, their positions in a complex landscape of power and inequality, and the ways in which they see their place in the world.

There is always a risk, when researchers consider community critically, that they put certain communities under strain; that they are seen to hinder positive work through their (our) critical discourse. Certainly, challenging a community’s inclusiveness is hard (and researchers are very bad at challenging their “own” community). But I think this is a limited view of critique as “not constructive” (a crime in certain circles where “getting things done” is a primary imperative). I would argue that collectives are strengthened through critique. As Charlotte Ryan beautifully puts it, Critical Community Studies can be instrumental in forming “a real ‘we’.” She adds: “an aggregate of individuals, even if they share common values, does not constitute ‘us’.” Building a “we” requires, at every step, asking difficult questions about who that “we” is (“we” the social movement, or “we” the civic tech community), who doesn’t fall under “we’s” embrace, and why.

Bibliography

Bhambra, Gurminder K. (2014) Connected Sociologies. London: Bloomsbury Press

Browne, Simone (2015) Dark Matters: On the Surveillance of Blackness. Durham, NC & London: Duke University Press

Gürses, Seda, Kundnani, Arun & Joris Van Hoboken (2016) “Crypto and Empire: The Contradictions of Counter-Surveillance Advocacy” Media, Culture & Society 38 (4), 576-590

Mignolo, Walter (2011) The Darker Side of Western Modernity: Global Futures, Decolonial Options. Durham, NC & London: Duke University Press

Postill, John (2008) “Localizing the Internet beyond Communities and Networks”, New Media & Society 10 (3), 413-431