
Why we won’t be at APC 2018

In October 2018, the Amsterdam Privacy Conference (APC) will be back at the University of Amsterdam. Two DATACTIVE project team members, Stefania (Principal Investigator) and Becky (PhD candidate), enthusiastically supported the conference as coordinators of the ‘Digital Society and Surveillance’ theme. The Data Justice Lab at Cardiff University submitted a panel proposal, which was successfully included. Regretfully, neither group will take part in the conference: DATACTIVE and the Data Justice Lab have decided to withdraw over the participation of the US-based software company Palantir as one of the APC’s Platinum Sponsors.

Our decision to withdraw stems from an active refusal to legitimize companies accused of enabling human rights abuses, and a concern with the lack of transparency surrounding sponsorship.

Palantir is a company specializing in big data analytics, which develops technologies for the military, law enforcement and border control. The deployment of Palantir’s technologies has raised widespread concern among civil liberties and human rights advocates. Reporting shows that, in the United States, Palantir has played an important role in enabling the efforts of ICE (Immigration and Customs Enforcement) to identify, detain, and deport undocumented immigrants, refugees, and asylum seekers. This has resulted in the indefinite detention of thousands of children who have been separated from their parents. This indefensible policy has come under strong criticism from the United Nations and prompted an alliance of technology workers and affected communities to call – so far, unsuccessfully – for Palantir to cancel its contracts with ICE.

We feel that providing Palantir with a platform, as a sponsor of a prominent academic conference on privacy, significantly undermines efforts to resist the deployment of military-grade surveillance against migrants and marginalized communities already affected by abusive policing. 

Because we have organized conferences ourselves, we believe transparency in sponsorship agreements is key. While we praise the APC organizing committee for committing to full transparency, we were not informed of sponsorship agreements until the very last minute. The APC Sponsors page, in addition, was only populated after the participant registration deadline. As conference coordinators and prospective participants, we feel that we were not given the chance to make an informed choice about our contribution.

Sponsorship concerns are not a new issue: the very same controversy, around the involvement of this very same company (as well as others), emerged during the 2015 edition of APC. Though we acknowledge the complexity of corporate sponsorship, we note that other prominent tech policy conferences, such as the Computers, Privacy and Data Protection (CPDP) conference, have recently stopped accepting sponsorship from Palantir. We thus believe this is a good moment for a larger discussion about how conferences should be organized in the future.

Academia, and especially publicly funded universities, must consider their role in efforts to neutralize or undermine human rights concerns. Such considerations are particularly pertinent in the context of what has been described as the increased neoliberalization of higher education, in which there is significant pressure to attract and pursue funding from different sources. As academics and as citizens, we will increasingly be asked to make choices of this kind. Hence, we believe it is time to set down a clear set of principles for sponsorship going forward.

 

Amsterdam and Cardiff, 19 September 2018

Stefania Milan and Becky Kazansky (DATACTIVE) & Lina Dencik, Arne Hintz, Joanna Redden, Fieke Jansen (Data Justice Lab)

Data Colonialism – the first article of the Special Issue on “Big Data from the South” is out

Photo by London School of Economics Library and Political Science

Nick Couldry and Ulises A. Mejias re-frame the Data from the South debate within the context of modern-day colonialism: data colonialism, an alarming stage in which human life is “appropriated through data” and life is, eventually, “capitalized without limit”.

This essay marks the beginning of a series of articles in a special issue on Big Data from the South, edited by Stefania Milan and Emiliano Treré and published in the journal Television & New Media. The article will be freely accessible for the first month, so we encourage you to put it high up on your to-read list.

The special issue promises interesting takes and approaches from renowned scholars and experts in the field, such as Angela Daly and Monique Mann, Payal Arora, Stefania Milan and Emiliano Treré, Jean-Marie Chenou and Carolina Cepeda, Paola Ricaurte Quijano, Jacobo Najera and Jesús Robles Maloof, with a special commentary by Anita Say Chan. Stay tuned for our announcements of these articles as they come up.

Welcome to DATACTIVE’s spinoff ALEX! An interview with fbtrex Lead Developer Claudio Agosti

by Tu Quynh Hoang and Stefania Milan

DATACTIVE is proud to announce that its spin-off project ALEX has been awarded a Proof of Concept grant by the European Research Council. ALEX, short for “ALgorithms Exposed. Investigating Automated Personalization and Filtering for Research and Activism”, aims at unmasking the functioning of personalization algorithms on social media platforms, initially taking Facebook as a test case. ALEX marks the engagement of DATACTIVE with “data activism in practice”, that is to say, turning data into a point of intervention in society.

To mark the occasion, we publish an interview with Claudio Agosti, DATACTIVE Research Associate and Lead Developer of the facebook.tracking.exposed browser extension (fbtrex), whose open-source code is at the core of the ALEX project. Claudio was interviewed by DATACTIVE Principal Investigator Stefania Milan at the Internet Freedom Festival in Valencia, Spain, in relation to a project on content regulation on/by platforms.

Claudio (also known as vecna) is a self-taught technician in digital security. With the internet gradually becoming a central agent in the political debate, he moved from corporate security services to the defence of human rights in the digital sphere. Currently, he is exploring the influence of algorithms on society. Claudio is the coordinator of the free software projects behind https://tracking.exposed and a Founding Member and Vice-President of the Hermes Center for Transparency and Digital Human Rights.

Stefania: Is the spread of fake news predominantly a technical or social problem?

Claudio: It is a social problem in the sense that the lack of critical judgment in individuals creates the conditions for fake news or misinformation to spread. However, through technology, the dissemination of misinformation is much faster and can scale up. The problem we are facing now is that when the costs of spreading content drop, the possibility for an individual to deliver a successful information operation (or infops, a term I feel is more accurate than propaganda) is higher. However, it isn’t true that people lack critical judgment in absolute terms. At a personal level, one can only be knowledgeable about a limited range of subjects, but the information we receive is very diverse and, most of the time, outside our domain of expertise. As social media users and information consumers, we should have a way to validate that information. What if we knew how to validate it on our own? Such a mechanism does not exist in mainstream news media either. It is possible, for example, on Wikipedia, but everywhere else the way information is spread implies that it is true on its own. A news report, a blog post or a status update on social media does not contain any information that helps validation. All in all, I think fake news is simultaneously a technical and a political problem, because those who create and spread information have a responsibility towards user expectations, and this also shapes users’ vulnerability to infops.

Stefania: As a developer, what is your main goal with the facebook.tracking.exposed browser extension?

Claudio: At the moment we don’t have the tools to assess responsibility with respect to infops. If we say that fake news is a social problem because people are gullible, we put the responsibility on users/readers. But it’s also a problem of those publishing the information, who allow themselves to publish incorrect information because they will hardly be held accountable. According to some observers, social media platforms such as Facebook are to be blamed for the spread of misinformation. We have three actors: the user/reader, the publisher, and the platform. With facebook.tracking.exposed, I’m trying to collect actual data that allows us to reflect on where the responsibilities lie. For example, sometimes Facebook is thought to be responsible when in fact the responsibility is the content publisher’s. And sometimes the publishers are to be blamed, but are not legally responsible. We want to collect actual data that can help investigate these assumptions. We do so from an external, neutral position.

Stefania: Based on your studies of the spread of information on social media during the recent elections in Argentina and Italy, can you tell us what the role of platforms is, and of Facebook in particular?

Claudio: In the analyses we did in Argentina and Italy, we realized that there are two accountable actors: the publisher and the platform. Some of the publishers are actually spamming users’ timelines, producing too many posts per day. I find it hard to believe that they are producing quality content that way. They just aim at occupying users’ timelines to exploit a few seconds of their attention. In my opinion, this is to be considered spam. What we also found is that Facebook’s algorithms are completely arbitrary in deciding what a user is or is not going to see. It’s frightening when we consider that a person who displays some kind of deviant behavior, such as reading and sharing only fascist or racist content, will keep being exposed to even less diverse content. From our investigations of social media content during two heated election campaigns, we have evidence that if a person expresses populist or fascist behavior, the platform is designed to show her less diverse information in comparison to other users, and that can only reinforce her position. We can also argue that the information experience of that person is of lower quality, assuming that maximum information exposure is always to be preferred.
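To make “less diverse information” measurable, one simple approach is to compute the Shannon entropy of the distribution of publishers a user is shown: the lower the entropy, the more a handful of sources dominates the timeline. The sketch below, in Python, is only an illustration of the idea, not fbtrex’s actual code or metric, and the data is invented.

```python
# Shannon entropy of the publishers in a user's observed timeline.
# Lower entropy = the timeline is dominated by fewer distinct sources.
from collections import Counter
from math import log2

def source_entropy(timeline):
    """Entropy (in bits) of the publisher distribution.

    `timeline` is a list with one publisher name per post observed.
    """
    counts = Counter(timeline)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Hypothetical observations for two users:
user_a = ["PageX"] * 8 + ["PageY"] * 2               # one source dominates
user_b = ["PageX", "PageY", "PageZ", "PageW"] * 3    # evenly mixed

print(f"user_a diversity: {source_entropy(user_a):.2f} bits")  # ~0.72
print(f"user_b diversity: {source_entropy(user_b):.2f} bits")  # 2.00
```

On such a measure, user_b’s information experience is “richer” in the sense Claudio describes, although entropy alone says nothing about the quality of any individual source.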

Stefania: So what can users do to fix this problem? 

Claudio: I think users should be empowered to run their own algorithms, and they should have better tools at their disposal to select the sources of their information diets. This also has to become a task for information publishers. Although everybody on social media is both a publisher and a consumer, people who publish as their daily job bear even more responsibility. For example, they should create much more metadata to go along with information, so as to permit the system to better filter and categorize content. Users, on the other hand, should have these tools in hand. When we don’t have that set of metadata, and thus the possibility to define our own algorithm, we have to rely on Facebook’s algorithms. But Facebook’s algorithms implicitly promote Facebook’s agenda and its capitalist imperative of maximizing users’ attention and engagement. For users to have the possibility of defining their own algorithms, we should first of all create the need and the interest to do so by showing how much of the algorithm is the platform’s agenda and how it can really influence our perception of reality. That is what I’m doing now: collecting evidence about these problems and trying to explain them to a broader audience, raising awareness amongst social media users.
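As a thought experiment, here is what a user-defined timeline algorithm of the kind Claudio evokes could look like, if posts carried publisher-supplied metadata. All field names are illustrative assumptions, not a real platform schema.

```python
# A hypothetical user-defined timeline filter driven by post metadata.
from dataclasses import dataclass

@dataclass
class Post:
    publisher: str
    topic: str       # metadata supplied by the publisher (assumed)
    post_type: str   # e.g. "original", "repost", "sponsored" (assumed)

def my_algorithm(posts):
    """One user's own rule set: drop sponsored content, and allow at
    most two consecutive posts from the same publisher (anti-spam)."""
    selected = []
    for post in posts:
        if post.post_type == "sponsored":
            continue
        if len(selected) >= 2 and all(p.publisher == post.publisher
                                      for p in selected[-2:]):
            continue  # this publisher is flooding the timeline; skip
        selected.append(post)
    return selected
```

The point is not the specific rules but who writes them: here the filtering logic belongs to the user, not to the platform.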

Stefania: Do you think we should involve the government in the process? From your perspective as a software developer, do you think we need more regulation?

Claudio: Regulation is really key, because it’s important to keep corporations in check. But I’m afraid there is, among other things, a misunderstanding in making regulations which seem to have direct benefits on people’s lives but might, for example, end up limiting some of the key features of open source software and its distribution. Therefore I’m quite skeptical. I have to say that high-level regulations like the General Data Protection Regulation do not try to regulate the technology but rather its effects, and in particular data usage. They are quite abstract and distant from the technology itself. If regulators want to tell a company what to do and what not to do, I’m afraid that in the democratic competition of the technical field the probability of making mistakes is higher. On the other hand, if we just regulate users’/consumers’ production explicitly, we would end up reinforcing the goals of the data corporations even more. So far, regulations have in fact been exploited by the biggest fish in the market. In this game we can distinguish three political entities: users, companies, and governments. In retrospect, we see that there have been cases where companies have helped citizens against governments and, in other cases, governments have helped citizens against companies. I hope we can aggregate users and civil society organizations around our project, because that is the political entity most in need of guidance and support.

Stefania: So the solution is ultimately in users?

Claudio: The problem is complex; thus the solution can’t be found in only one of the three entities. With ALEX we will have the opportunity to re-use our data under policies we determine, and therefore try to produce features which can, at least, offer a new social imaginary.

First of all, we aim at promoting diversity. Fbtrex will provide users with tools for comparing their social media timelines to those of other users, based on mutual sharing agreements which put the users, rather than the company, in the driver’s seat. The goal is to involve and compare a diverse group of users and their timelines across the globe. In so doing, we empower users to understand what is hidden from them on a given topic. Targeted communication and user-defined grouping, as implemented on most social media, lead to a fragmentation of knowledge. Filtered interactions confirming a user’s position have been complicit in this fragmentation. Our approach does not intend to solve these technocratic subterfuges with further technological fixes, but to let the user explore the diversity.
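A minimal sketch of what such a comparison could look like, assuming each user’s extension has recorded the set of posts they were shown (the data structures are illustrative, not fbtrex’s real schema or API):

```python
# Compare two timelines shared under a mutual agreement: what did the
# platform show to one user and hide from the other?
def compare_timelines(mine, theirs):
    """Each argument is a set of post identifiers a user was shown."""
    return {
        "shown_to_both": mine & theirs,
        "hidden_from_me": theirs - mine,      # what I never saw
        "hidden_from_them": mine - theirs,
        "overlap_ratio": len(mine & theirs) / len(mine | theirs),
    }

# Hypothetical example: two users following the same pages are shown
# substantially different slices of the same content.
alice = {"post1", "post2", "post3", "post4"}
bob = {"post3", "post4", "post5", "post6"}
report = compare_timelines(alice, bob)
print(report["hidden_from_me"])   # {'post5', 'post6'}
print(report["overlap_ratio"])    # 0.3333...
```

Low overlap between users who follow the same sources is exactly the kind of evidence of algorithmic filtering that the mutual-sharing feature is meant to surface.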

In fact, the fragmentation of information and individuals produced by social media has made it even more difficult for users to relate to problems far removed from their reality. How do you understand the problems of migrants, for example, if you have never been away from home yourself, and you don’t spend time in their company? To counter this effect, thanks to the funding of the European Research Council, we will work on an advanced functionality which will… turn the so-called filter bubbles against themselves, so to speak.

Secondly, we want to support delegation and fact-checking, enabling third-party researchers to play a role in the process. The data mined by fbtrex will be anonymized and provided to selected third-party researchers, either individuals or collectives. These will be enabled to contextualize the findings, combine them with other data, and complement them with data obtained through other social science research methods such as focus groups. But, thanks to the innovative data re-use protocols we will devise, at any given moment users, as data producers, will have a say as to whether and how they want to volunteer their data. We will also work to create trusted relationships and networks with researchers and users.

In conclusion, if users really want to be free, they have to be empowered to exercise their freedom. This means: they have to own their own data, run their own algorithms, and understand the political implications behind technological decision-making. To resort to a metaphor, this is exactly the difference between dictatorship and democracy: you can believe or accept that someone will do things for your own good, as in a dictatorship, or you can decide to assume your share of responsibility, taking things into your own hands and trying to do what is best for you while respecting others, which is exactly what democracy teaches us.

***

ALEX is a joint effort by Claudio Agosti, Davide Beraldo, Jeroen de Vos and Stefania Milan.

See more: the news in Dutch, the press release by the ERC, our project featured in the highlights of the call

Stay tuned for details.

The new website https://algorithms.exposed will go live soon!

 


BigBang v0.2.0 ‘Tulip Revolution’ released

DATACTIVE has been collaborating with researchers from New York University and the University of California at Berkeley to release version 0.2.0 of the quantitative mailing list analysis software BigBang. Mailing lists are among the most widely used communication tools in Internet governance institutions and among software developers. They therefore lend themselves very well to analysis of how communities develop, as well as of the topics they discuss and how those topics propagate through the community. BigBang, a Python-based tool, is there to facilitate this. You can start analyzing mailing lists with BigBang by following the installation instructions.
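To give a flavour of the kind of quantitative analysis BigBang automates, the sketch below counts messages per sender in a locally downloaded mbox archive using only the Python standard library. It is not BigBang’s own API (see the project’s example notebooks for that), just a minimal illustration of the underlying idea.

```python
# Count messages per sender in a mailing list archive (mbox format).
import mailbox
from collections import Counter
from email.utils import parseaddr

def activity_per_sender(mbox_path):
    """Return a Counter mapping sender address -> number of messages."""
    counts = Counter()
    for message in mailbox.mbox(mbox_path):
        _, address = parseaddr(message.get("From", ""))
        if address:
            counts[address.lower()] += 1
    return counts

# Hypothetical usage with a downloaded list archive:
# for sender, n in activity_per_sender("some-list.mbox").most_common(10):
#     print(f"{n:5d}  {sender}")
```

From counts like these one can start asking the questions BigBang is built for: who participates, how participation is distributed, and how it changes over time.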

This release, BigBang v0.2.0 Tulip Revolution, marks a new milestone in BigBang development. A few new features:
– Gender participation estimation
– Improved support for IETF and ICANN mailing list ingest
– Extensive gardening and upgrade of the example notebooks
– Upgraded all notebooks to Jupyter 4
– Improved installation process based on user testing

En route to this milestone, the BigBang community made a number of changes to its procedures. These include:

– The adoption of a Governance document for guiding decision-making.
– The adoption of a Code of Conduct establishing norms of respectful behavior within the community.
– The creation of an ombudsteam for handling personal disputes.

For this milestone, we have also adopted, by community decision, the GNU Affero General Public License v3.0.

If you have any questions or comments, feel free to join the mailing list, join us on gitter chat, or file an issue on GitHub.

If you are interested in using BigBang but don’t know where to start, we are happy to help you on your way via videochat or organize a webinar for you and your community. Feel free to get in touch!

[blog] Growth for Critical Studies? Social scholars, let’s be shrewder

Author: Miren Gutierrez
This is a response to the call for a critical community studies, ‘Tech, data and social change: A plea for cross-disciplinary engagement, historical memory, and … Critical Community Studies‘, by Kersti Wissenbach, and to the first contribution to the debate, ‘Can We Plan Slow – But Steady – Growth for Critical Studies?’, by Charlotte Ryan.
Commenting on the thought-provoking blogs by Charlotte Ryan and Kersti Wissenbach, I feel in good company. Both speak of the need for research to address inequalities embedded in technology, to focus on the critical role that communities play in remedying dominant techno-centric discourses and practices, and of the idea of a new critical community studies. That is, the need to place people and communities at the centre of our activity as researchers and practitioners, asking questions about the communities instead of about the technologies, demanding a stronger collaboration between the two, and facing the challenges that this approach generates.
Their blogs incite different but related ideas.

First, different power imbalances can be found in scholarship. Wissenbach suggests that dominant discourses in academia, as well as in practice and donor agendas, are driving the technology hype. But as Ryan proposes, academia is not a homogeneous terrain.

Speaking from the point of view of critical data studies, I see the current predominant techno-centrism diverting research funding towards applied sciences, engineering and tools (what Wissenbach calls “the state of technology” and Ryan refers to as a “profit-making industry”). Writing about Canada, Srigley describes how, for a while, even the Social Sciences and Humanities Research Council of Canada “fell into line by focusing its funding on business-related degrees. All the while monies for teaching and research in the humanities, social sciences, and sciences with no obvious connection to industry, which is to say, most of it, began to dry up” (Srigley 2018). Srigley seems to summarise what is happening everywhere. “Sponsored research” and institutions requiring that research be linked to business and industry partners appear to be the current mantra.

Other social scholars around me are coming up with similar stories: social sciences focusing critically on data are getting a fraction of the research funding opportunities vis-à-vis computational data-enabled science and engineering within the fields of business and industry, environment and climate, materials design, robotics, mechanical and aerospace engineering, and biology and biomedicine. Meanwhile, critical studies on data justice, governance and how ordinary people, communities and non-governmental organisations experience and use data are left for another day.
Thus, the current infatuation with technology appears not to be evenly distributed across donors and academia.

Second, I could not agree more with Wissenbach and Ryan when they say that we should take communities as entry points in the study of technology for social change. Wissenbach further argues against the objectification of “communities”, calling for genuinely needs-driven, engaged research that is more aligned with practice.

Then again, here lies another imbalance. Even if scholars work alongside practitioners to bolster a new critical community studies, these actors are not in the same positions. We, social scholars, are gatekeepers of what is known in academia; we are often more skilful in accessing funds; we dominate the lingo. Inclusion therefore lies at the heart of this argument and remains challenging.

If funds for critical data studies are not abundant, resources to put in place data projects with social goals and more practice-engaged research are even scarcer. That is, communities facing inequalities may find themselves competing for resources not only within their circles (as Ryan suggests). Speaking also as a data activist involved in projects that look at illegal fishing’s social impacts on coastal communities of developing countries (and trying hard to fund-raise for them), I think we must make sure that more funding for data activism research does not mean less funding for data activist endeavours. I know they are not the same funds, but there are ways in which research could foster practice, and one of them is precisely putting communities at the centre.

Third, another divide lies underneath the academy’s resistance to engaged scholarship. While the so-called “hard sciences” have no problem with “engaging”, some scholars in the “soft sciences” seem to recoil from it. Even if few people still support Chris Anderson’s “end of theory” musings (Anderson 2008), some techno-utopians pretend that a state of asepsis exists, or at least is possible now, in the age of big data. They could not be more mistaken. What can be more “engaged scholarship” than “sponsored research”? Research driven and financed by companies is necessarily “engaged” with the private sector and its interests, but rarely acknowledges its own biases. Meanwhile, five decades after Robert Lynd asked “Knowledge for what?” (Lynd 1967), this question still looms over the social sciences. Some social scientists shy away from causes and communities just in case they start marching into the realm of advocacy and any pretensions of “objectivity” disappear. Since we know computational data-enabled science and engineering cannot be “objective” either, why not accept and embrace engaged scholarship in the social sciences, as long as we are transparent about our prejudices and systematically critical about our assumptions?

Fourth, data activism scholars have to be smarter in communicating findings and influencing discourses. Our lack of influence is not all attributable to market trends and donors’ obsessions; it is also of our own making. Currently, the stories of data success and progress come mostly from the private sector. And even when prevailing techno-enthusiastic views are contested, prominent criticism comes from the same quarters. An example is Bernard Marr’s article “Here’s Why Data Is Not the New Oil”. Marr does not mention the obvious: that data are not natural resources, spontaneous and inevitable, but cultural ones, “made” in processes that are also “made” (Boellstorff 2013). In his article, Marr refers only to certain characteristics that make data different from oil. For example, while fossil fuels are finite, data are “infinitely durable and reusable”, and so on. Is that all that donors, and publics, need to know about data? Although this debate has grown over the last years, is it reaching donors’ ears? Not long ago, during the Datamovida conference organised by Vizzuality in Madrid in 2016, a fellow speaker, Aditya Agrawal from the Open Data Institute and Global Partnership for Sustainable Development Data, opened his presentation saying precisely that data were “the new oil”. If key data people in the UN system have not caught up with the main ideas emerging from critical data studies, we are in trouble, and it is partly of our own making.

This last argument is closely related to the other ideas in this blog. The more we can influence policy, public opinion, decision-makers and processes, the more resources data activism scholars can gather to work alongside practitioners in exploring how people and organisations appropriate data and their processes, create new data relations and reverse dominant discourses. We cannot be content with publishing a few blogs, getting our articles into indexed journals and meeting once in a while at congresses that seldom resonate beyond our privileged bubbles. Both Wissenbach and Ryan argue for stronger collaborations and direct community engagement; but this is not the rule in the social sciences.

Making an effort to reach broader publics could be a way to break the domination that, as Ryan says, brands, market niches and revenue streams seem to exert on academic institutions. Academia is a bubble, but not an entirely hermetic one. And even if critical community studies will never be a “cash cow”, they could be influential. There are other critical voices, in the field of journalism for example, which have denounced a sort of obsession with technology (Kaplan 2013; Rosen 2014). Maybe critical community studies should embrace not only involved communities and scholars but also other critical voices from journalism, donors and other fields. The collective “we” that Ryan talks about could be even more inclusive. And to do that, we have to expand beyond the usual academic circles, which is exactly what Wissenbach and Ryan contend.

I do not know what critical community studies could look like; I hope this is the start of a conversation. In Madrid, in April, donors, platform developers, data activists and journalists met at the “Big Data for the Social Good” conference, organised by my programme at the University of Deusto, focusing on what works and what does not in critical data projects. The more we expand this type of debate, the more influence we can gain.

Finally, the message emerging from critical data studies cannot be only about dataveillance (van Dijck 2014) and ways of data resistance. However imperfect and biased, the data infrastructure is enabling ordinary people and organised society to produce diagnoses and solutions to their problems (Gutiérrez 2018). Engaged research means we need to look at what communities do with data and how they experience the data infrastructure, not only at how communities contest dataveillance, which, I have the feeling, has dominated critical data studies so far. Yes, we have to acknowledge that these technologies are often shaped by external actors with vested interests before communities use them, and that they embed power imbalances. But if we want to capture people’s and donors’ imagination, the stories of data success and progress within organised and non-organised society should be told by social scholarship as well. Paraphrasing Ryan, we may lose but live to fight another day.

Cited work
Anderson, Chris. 2008. ‘The End of Theory: The Data Deluge Makes the Scientific Method Obsolete’. Wired. https://www.wired.com/2008/06/pb-theory/.
Boellstorff, Tom. 2013. ‘Making Big Data, in Theory’. First Monday 18 (10). http://firstmonday.org/article/view/4869/3750.
Dijck, Jose van. 2014. ‘Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology’. Surveillance & Society 12 (2): 197–208.
Gutierrez, Miren. 2018. Data Activism and Social Change. Pivot. London: Palgrave Macmillan.
Kaplan, David E. 2013. ‘Why Open Data Isn’t Enough’. Global Investigative Journalism Network (GIJN). 4 February 2013. http://gijn.org/2013/04/02/why-open-data-isnt-enough/.
Lynd, Robert Staughton. 1967. Knowledge for What: The Place of Social Science in American Culture. Princeton: Princeton University Press. http://onlinelibrary.wiley.com/doi/10.1525/aa.1940.42.1.02a00250/pdf.
Rosen, Larry. 2014. ‘Our Obsessive Relationship With Technology’. Huffington Post. https://www.huffingtonpost.com/dr-larry-rosen/our-obsession-relationshi_b_6005726.html?guccounter=1.
Srigley, Ron. 2018. ‘Whose University Is It Anyway?’. Los Angeles Review of Books. https://lareviewofbooks.org/article/whose-university-is-it-anyway/#_ednref37.

[blog] Making ‘community’ critical: Tech collectives through the prism of power

Author: Fabien Cante

In her recent blog post, Kersti Wissenbach expresses her frustration with the field of “civic tech,” which, as she puts it, remains far more focused on the “tech” than the “civic.” This resonates with me in many ways. I write as someone who is possibly more of an outsider to the field than Wissenbach: my previous research was on local radio (all analog), and as my DATACTIVE colleagues have found out, I am clueless about even the basics of encryption, so anything more technically complex will leave me flummoxed. In agreeing with Wissenbach, then, I do not mean to diminish the wonders of tech itself, as a field of knowledge and intervention, but rather to underscore that the civic (or “the social,” as my former supervisor Nick Couldry would put it) is itself an immensely complex realm of knowledge, let alone action.

Wissenbach proposes that our thinking efforts, as scholars and activists concerned about the relations between technology and social change, shift toward “Critical Community Studies.” By this she means that we should ask “critical questions beyond technology and about communities instead.” I strongly agree. The research projects around data, tech and society that most excite me are the ones that are rooted in some community or other – transnational activist communities, in the case of DATACTIVE, or marginalised urban communities, in the case of the Our Data Bodies project. However, like Charlotte Ryan (whose response to Wissenbach can be read here), I would also like to be a bit cautious. In what follows, I really emphasise the critical and contextual aspects of Critical Community Studies, as envisioned by Wissenbach. I do so because I am a bit sceptical about the middle term – community.

I am involved in urban planning struggles in south London where the word “community” is frequently employed. Indeed, it serves as a kind of talisman: it invokes legitimacy and embeddedness. Community is claimed by activists, local authorities, and even developers, for obviously very different aims. This experience has shown me that, politically, community is an empty signifier. Bullshit job titles like “Community Manager” in marketing departments (see also Mark Zuckerberg speeches) further suggest to me that community is one of the most widely misappropriated words of our time.

More seriously, and more academically perhaps, community denotes a well-defined and cohesive social group, based on strong relationships, and as such self-evident for analysis. This is not, in many if not most circumstances, what collectives actually look like in real life. Anthropologist John Postill (2008), studying internet uptake in urban Malaysia, writes that researchers too often approach tech users as either “communities” or “networks.” Neither of these concepts captures how technology is woven into social relations. Where community presumes strong bonds and a shared identity, network reduces human relations to interaction frequencies and distance between nodes, flattening power differentials.

As Wissenbach rightly notes, people who use tech, either as producers or users, are “complex [beings] embedded in civil society networks and power structures.” It is these power structures, and the often tense dynamics of embeddedness, that Wissenbach seems to find most interesting – and I do too. This, for me, is the vital question behind Critical Community Studies (or, for that matter, the study of data activism): what specific power relations do groups enact and contest?

Still from Incoming (2017), by Richard Mosse – http://www.richardmosse.com/projects/incoming

The critical in Critical Community Studies thus asks tough questions about race, class, gender, and other lines of inequality and marginalization. It asks how these lines intersect both in the community under study (in the quality of interactions, the kinds of capital required to join the collective, language, prejudices, etc.) and beyond it (in wider patterns of inequality, exclusion, and institutionalized domination). We see examples of such questioning happening, outside academia, through now widespread feminist critiques calling out pervasive gender inequalities in the tech industry, or through Data for Black Lives’ efforts to firmly center race as a concern for digital platforms’ diversity and accountability. Within the university, Seda Gürses, Arun Kundnani and Joris Van Hoboken’s (2016) paper “Crypto and Empire,” which could be said to examine the “crypto community” (however diffuse), offers some brilliant avenues to think data/tech communities critically, and thereby “re-politicize” data itself. More broadly, a wealth of feminist, post/decolonial (e.g. Mignolo 2011; Bhambra 2014; or Flavia Dzodan’s stellar Twitter feed) and critical race theory (see for example Browne 2015) can help us think through the histories from which civic tech communities arise, their positions in a complex landscape of power and inequality, and the ways in which they see their place in the world.

There is always a risk, when researchers consider community critically, that they put certain communities under strain; that they are seen to hinder positive work through their (our) critical discourse. Certainly, challenging a community’s inclusiveness is hard (and researchers are very bad at challenging their “own” community). But I think this is a limited view of critique as “not constructive” (a crime in certain circles where “getting things done” is a primary imperative). I would argue that collectives are strengthened through critique. As Charlotte Ryan beautifully puts it, Critical Community Studies can be instrumental in forming “a real ‘we’.” She adds: “an aggregate of individuals, even if they share common values, does not constitute ‘us’.” Building a “we” requires, at every step, asking difficult questions about who that “we” is (“we” the social movement, or “we” the civic tech community), who doesn’t fall under “we’s” embrace, and why.

Bibliography

Bhambra, Gurminder K. (2014) Connected Sociologies. London: Bloomsbury Press

Browne, Simone (2015) Dark Matters: On the Surveillance of Blackness. Durham, NC & London: Duke University Press

Gürses, Seda, Kundnani, Arun & Joris Van Hoboken (2016) “Crypto and Empire: The Contradictions of Counter-Surveillance Advocacy” Media, Culture & Society 38 (4), 576-590

Mignolo, Walter (2011) The Darker Side of Western Modernity: Global Futures, Decolonial Options. Durham, NC & London: Duke University Press

Postill, John (2008) “Localizing the Internet Beyond Communities and Networks” New Media & Society 10 (3), 413-431

[blog] Data by citizens for citizens

Author: Miren Gutierrez

In spite of what we know about how big data are employed to spy on us, manipulate us, lie to us and control us, there are still people who get excited by hype-generating narratives around social media influence, machine learning and business insights. At the other end of the spectrum, there is apocalyptic talk that preaches that we must become digital anchorites in small, secluded and secret cyber-cloisters.

Don’t get me wrong; I am a big fan of encryption and virtual private networks. And yes, the CEOs of the technology corporations have more resources than governments to understand social and individual realities. The consequence of this unevenness is evident because companies do not share their information unless forced or in exchange for something else. Thus, public representatives and citizens lose their capacity for action vis-à-vis private powers.

But precisely because of the severe imbalances in practices of dataveillance (van Dijck 2014), it is vital to consider alternative forms of data that enable the less powerful to act with agency (Poell, Kennedy, and van Dijck 2015) in the era of so-called “data power”. While the debate on big data is hijacked by techno-utopians and techno-pessimists, and the big data progress stories come from the private sector, little is being said about what ordinary people and non-governmental organisations do with data; namely, how data are created, amassed and used by alternative actors to come up with their own diagnoses and solutions.


My new book Data Activism and Social Change talks about how people and organised society are using the data infrastructure as a critical instrument in their quests. These people include fellow action-oriented researchers, number-churning practitioners and citizens generating new maps, platforms and alliances for a better world. And they are showing a high degree of ingenuity, against the odds.

The starting point of this book is an article in which Stefania Milan and I set the scene, link data activism to the tradition of citizens’ media and lay out the fundamental questions surrounding this new phenomenon (Milan and Gutierrez 2015).

Most of the thirty activists, practitioners and researchers I interviewed, and the forty-plus organisations I observed for the book, practice data activism in one way or another. In my analysis, I classify them into four not-so-neat boxes. The first are skills transferrers: organisations, such as DataKind, that transfer skills by deploying data scientists into non-governmental organisations so they can work together on projects. Other skills transferrers, for example Medialab-Prado and Civio, create platforms and tools or generate matchmaking opportunities for actors to meet and collaborate in data projects with social goals.

A second group, including catalysts such as the Open Knowledge Foundation, sponsors some of these endeavours. Journalism producers can include journalistic organisations, such as the International Consortium of Investigative Journalists, or civil society organisations, such as Civio, providing analysis that can support campaigns and advocacy efforts.


A moment in the “Western Africa’s Missing Fish” map in which irregular fish transshipments are being conducted in Senegalese waters. See the interactive map here.

Data activists proper take it further: securing vital information and evidence of human rights abuses in sheltered archives (e.g. The Syrian Archive); recreating stories of human suffering and abuse (e.g. Forensic Architecture’s “Liquid Traces”); tracking illegal fishing and linking it to development issues (e.g. “Western Africa’s Missing Fish”, co-led by me at the Overseas Development Institute); visualising evictions and mobilising crowds to stop them (e.g. in San Francisco and Spain); and mapping citizen data to produce verified and actionable information during humanitarian crises and emergencies (e.g. the “Ayuda Ecuador” application of the Ushahidi platform), to mention just a few.

This classification is offered as a heuristic tool to think more methodically about real cases of data activism, and also to guide efforts to generate more projects.

We know datasets and algorithms do not speak for themselves and are not neutral. Data cannot be raw (Gitelman 2013); data and metadata are “made” in processes that are “made” as well (Boellstorff 2013). That is, data are not to be treated as natural resources, inevitable and spontaneous, but as cultural resources that have to be curated and stored. And the fact that the data infrastructure is employed in good causes does not abolish the prejudices and asymmetries present in datasets, algorithms, hardware and data processes. But the exciting thing is that, even using flawed technology, these activists get results.

But where do these activists get data from? Because data can be difficult to find…

How do activists get their hands on data?

Corporations do not usually give their data away, and the level of government openness is not fantastic. “1) Data is hard (or even impossible) to find online, 2) data is often not readily usable, 3) open licensing is rare practice and jeopardised by a lack of standards” (Global Open Data Index 2017). This lack of open access to public data is shocking when one considers that this is mostly information about how governments administer everyone’s resources and taxes.

So when governments and corporations do not open their data vaults, people get organised and generate their own data. This is the case of “Rede InfoAmazonia”, a project that maps water quality and quantity based on a network of sensors deployed by communities in the Brazilian Amazon. The map issues alarms to the community when water levels or quality rise above or fall below a range of standard indicators.

In my book, I discuss five ways in which data activists and practitioners can get their hands on data, from the simplest to the most complex: 1) someone else (e.g. a whistle-blower) can offer them the data; 2) they can resort to public data that can be acquired (e.g. automatic identification system signals captured by satellites from vessels) or are simply open; 3) they can build communities to crowdsource citizen data; 4) they can appropriate data or resort to data scraping; and 5) they can deploy drones and sensors to gather images or obtain data via primary research (e.g. surveys). Again, this taxonomy is offered as a tool to examine real cases.

Of these, crowdsourcing data can be a powerful process. An independent evaluation of the deployment (Morrow, Mock, and Papendieck 2011) found that the crowdsourced map set up using the Ushahidi platform in Haiti in 2010 tackled “key information gaps” in the early period of the response, before large organisations were operative: it provided geolocalised data to small non-governmental organisations that did not have a field presence, offered situational awareness and rapid information with a high degree of accuracy, and enabled citizens’ decision-making. The Haiti map marked a transformation in the way emergencies and crises are tackled, giving rise to digital humanitarianism.


Forensic Architecture’s Liquid Traces.

Other forms of obtaining data are quite impressive too. Forensic Architecture’s “Liquid Traces” employed AIS signals, the heat signatures of ships, radar signals and other surveillance technologies to demonstrate that the failure to save a group of 72 people, who had been forced on board an inflatable craft by armed Libyan soldiers on March 27, 2011, was due to callousness, not the inability to locate them. Only nine would survive. Another organisation, WeRobotics, helps communities in Nepal to analyse and map vulnerability to landslides in a changing climate.

Alliances, maps and hybridisation

From the observation of how these organisations work, I have identified eleven traits that define data activists and organisations.

One interesting commonality is that data activists tend to work in alliances. This sounds quite commonsensical. Either the problems these activists are trying to analyse and solve are too big to tackle on their own (from a humanitarian crisis to climate change), or the datasets that they confront are too big (e.g. “Western Africa’s Missing Fish” and the ICIJ’s “Panama Papers” processed terabytes of data). I cannot think of any data project that does not include some form of collaboration.


The first Ushahidi map: Kenyan violence.

Another quality is that data activists often rely on maps as tools for analysis, coordination and mobilisation. Maps are objects bestowed with knowledge, power and influence (Denil 2011; Harley 1989; Hohenthal, Minoia, and Pellikka 2017). The rise of digital cartography, mobile media, data crowdsourcing platforms and geographic information systems reinforces the maps’ muscle. This trend overlaps with a growing interest in crisis and activist mapping, a practice that blends the capabilities of the geoweb with humanitarian assistance and campaigning. In the hands of people and organisations, maps have been a form of political counter-power (Gutiérrez 2018). One example is Ushahidi’s first map (see image), which was set up in 2008 to bypass an information shutdown during the bloodbath that arose after the presidential elections in Kenya a year earlier, and to give voice to the anonymous victims. The deployment allowed victims to disseminate alternative narratives about the post-electoral violence.

The employment of maps is so usual in data activism that I have called this variety of data activism geoactivism, defined precisely by the way activists use digital cartography and often crowdsourced data to provide alternative narratives and spaces for communication and action. InfoAmazonia, an organisation dedicated to environmental issues and human rights in the Amazon region, is an example of another organisation specialised in visualising geolocalised data, in this case for journalism and advocacy. I defend the idea that this use of maps, almost by default, has generated a change in paradigm, standardising maps for humanitarianism and activism.


Vagabundos de la chatarra, the book.

Besides, data activists usually have no qualms about mixing methods and tools from other trades. Not only are many data organisations hybrid, crossing the lines that separate journalism, advocacy, research and humanitarianism, but they also combine repertoires of action from different areas. An example is “Los vagabundos de la chatarra”, a year-long project that includes comics journalism, a book, interactive maps, videos and a website to tell the stories of the people who gathered and sold scrap metal for a living on the edges of Barcelona during the economic crisis that started in 2007 (Gutiérrez, Rodríguez, and Díaz de Guereñu 2018).

Civio, mentioned before, produces journalism, hosts data projects, and advocates around issues such as transparency, corruption, health and forest fires. “España en llamas” is a project hatched at Civio that, for the first time in Spain, paints a comprehensive picture of forest fires. Civio also opens the data behind these projects.

The values that motivate these data activists include sharing knowledge, collaborating in and inspiring processes of social change and justice, uncovering and providing indisputable evidence for them, and deploying collective action powered by indignation and also by hope. These data activists deserve more attention.

*A version of this blog has been published at Medium.

References

Boellstorff, Tom. 2013. ‘Making Big Data, in Theory’. First Monday 18 (10). http://firstmonday.org/article/view/4869/3750.

Denil, Mark. 2011. ‘The Search for a Radical Cartography’. Cartographic Perspectives 68. http://cartographicperspectives.org/index.php/journal/article/view/cp68-denil/14.

Gitelman, Lisa, ed. 2013. Raw Data Is an Oxymoron. Cambridge and London: The MIT Press.

Global Open Data Index. 2017. ‘The GODI 2016/17 Report: The State Of Open Government Data In 2017’. https://index.okfn.org/insights/.

Gutiérrez, Miren. 2018. ‘Maputopias: Cartographies of Knowledge, Communication and Action in the Big Data Society – The Cases of Ushahidi and InfoAmazonia’. GeoJournal 1–20. https://doi.org/10.1007/s10708-018-9853-8.

Gutiérrez, Miren, Pilar Rodríguez, and Juan Manuel Díaz de Guereñu. 2018. ‘Journalism in the Age of Hybridization: Barcelona. Los Vagabundos de La Chatarra – Comics Journalism, Data, Maps and Advocacy’. Catalan Journal of Communication and Cultural Studies 10 (1): 43-62. https://doi.org/10.1386/cjcs.10.1.43_1

Harley, John Brian. 1989. ‘Deconstructing the Map’. Cartographica: The International Journal for Geographic Information and Geovisualization 26 (2): 1–20.

Hohenthal, Johanna, Paola Minoia, and Petri Pellikka. 2017. ‘Mapping Meaning: Critical Cartographies for Participatory Water Management in Taita Hills, Kenya’. The Professional Geographer 69 (3): 383–95. https://doi.org/10.1080/00330124.2016.1237294.

Milan, Stefania, and Miren Gutiérrez. 2015. ‘Citizens’ Media Meets Big Data: The Emergence of Data Activism’. Mediaciones 14. http://biblioteca.uniminuto.edu/ojs/index.php/med/article/view/1086/1027.

Morrow, Nathan, Nancy Mock, and Adam Papendieck. 2011. ‘Independent Evaluation of the Ushahidi Haiti Project’. Port-au-Prince: ALNAP. http://www.alnap.org/resource/6000.

Poell, Thomas, Helen Kennedy, and Jose van Dijck. 2015. ‘Special Theme: Data & Agency’. Big Data & Society. http://bigdatasoc.blogspot.com.es/2015/12/special-theme-data-agency.html.

van Dijck, Jose. 2014. ‘Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology’. Surveillance & Society 12 (2): 197–208.

 

About Miren
Miren is a Research Associate at DATACTIVE. She is also a professor of Communication, director of the postgraduate programme “Data analysis, research and communication”, and member of the research team of the Communication Department at the University of Deusto, Spain. Miren’s main interest is proactive data activism, or how the data infrastructure can be utilized for social change in areas such as development, climate change and the environment. She is a Research Associate at the Overseas Development Institute of London, where she leads and participates in data-based projects exploring the intersection between biodiversity loss, environmental crime and development.

[blog] Can We Plan Slow – But Steady – Growth for Critical Studies?

Author: Charlotte Ryan (University of Massachusetts, Lowell / Movement-Media Research Action Project), member of the DATACTIVE ethics board.

This is a response post to the blog ‘Tech, data and social change: A plea for cross-disciplinary engagement, historical memory, and … Critical Community Studies‘ written by Kersti Wissenbach.

To maximize technologies’ value in social change efforts, Kersti Wissenbach urges researchers to join with communities facing power inequalities to draw lessons from practice. In short, the liberating potential of technologies for social change cannot be realized without holistically addressing broader inequalities. Her insights are many; in fact, communication activists and scholars could use her blog as a guide for ongoing conversations. Three points especially resonate with my experiences as a social movement scholar/activist working in collaboration with communities and other scholars:

  • Who is at the table?
    Wissenbach stresses the critical role of proactive communities in fostering technologies for social change as a corrective to the “dominant civic tech discourse [that] seems to keep departing from the ‘tech’ rather than the ‘civic’.” She stresses that an inclusive “we” emerges from intentional and sustained working relationships.
  • Power (and inequalities of power) matter!
    Acknowledging that technologies’ possibilities are often shaped long before many constituencies are invited to participate, Wissenbach asks those advancing social change technologies to notice the creation and recreation of power structures:
    “Only inclusive communities,” she cautions, “can really translate inclusive technology approaches, and consequently, inclusive governance.”
  • Tech for social change needs critical community studies
    Wissenbach calls for the emergence of critical community studies that, like critical development, communication, feminist, and subaltern studies, crosses disciplines, “taking the community as an entry point in the study of technology for social change.” Practitioners and scholars would reflect together to draw and disseminate shared lessons from experience. This would allow “communities, supposed to benefit from certain decisions, [to] have a seat on the table.”

Anyone interested in the potential of civic tech—activists, scholar-activists, engineers, designers, artists, or other social communication innovators—will warmly welcome Wissenbach’s vision of Critical Community Studies. She proposes not another sub-specialty with esoteric journals and self-referential jargon, but a research network of learning communities expanding conceptual dialogs across the usual divides. And she recognizes the urgent need to preserve and broadly disseminate learning about technologies for social change.

I agree but cautiously. It is just what’s needed. But the academy tends to resist engaged scholarship. We need to think about where to locate transformative theory-building; sadly, calls to break with traditional research approaches may be more warmly received outside academic institutions than within. The academy itself, at least in the United States, is under duress. How would Critical Community Studies explain itself to academic institutions fascinated by brand, market niche, and revenue streams? Critical Community Studies is not likely to be a cash cow generating more profits faster, and with less investment. The U.S. trend to turn education into a profit-making industry may be extreme, but it raises the need to look before we leap.

Like Wissenbach, I entered the academy with deep roots in social movements and community activism. Like her, I want the academy to produce knowledge and technology for the social good. Like her, I want communities directly affected to be fully vested in all phases of learning. Like her, I am eager to move beyond vague calls for participation and inclusion. My experiences to date, however, give me pause for thought.


Thirty years in buttons

In the mid-1980s, I was among a dozen established and emerging scholars who formed the university-based Media Research Action Project (MRAP). We were well positioned to bridge the theorist-practitioner divide: many of us had begun as movement activists, and we had ties to practitioners. This made it easier for MRAP to work with under-represented and misrepresented communities and constituencies to identify and challenge barriers to democratic communication and to build communication capacity.

U.S. based social movements face recurring challenges: our movements hemorrhage learning between generations; we still need to grapple with the legacies of slavery, colonialism and jingoism; our labor movement has withered. Living amidst relative plenty, U.S. residents may feel far removed from crises elsewhere. Competitive individualism, market pressures, and dismantled social welfare programs leave U.S. residents feeling precarious —even if we embrace liberatory ideals.

In light of these material conditions, MRAP wanted to broaden political dialogs about equality and justice. At first, we focused on transferring communication skills via one- and two-day workshops. We soon realized that we needed ongoing working relationships to test strategies and build infrastructure and shared conceptual frameworks. But it took years to find the funds to run a more sustained program. Foundations, even when they liked our work, wanted us to ‘scale up’ fast (one national foundation asked us to take on 14 cities). In contrast, we saw building viable working relations as labor-intensive and slow. One U.S. federal agency offered hefty funding for proposals to “bridge the digital divide.” MRAP filed a book-length application with ten community partner organizations, eight in communities of color. The agency responded positively to MRAP’s plan and urged us to resubmit, but asked that we dump our partners and replace them with mainstream charities, preferably statewide.

And so the constraints tightened. Governments’ and foundations’ preference for quick gains could marginalize (again) the very partners MRAP was formed to support. To support ourselves, we could take day jobs, but this limited our availability. Over and over, we found—at least in the U.S. context—that talk of addressing power inequalities far exceeded public will and deeds. Few mainstream institutions would commit the labor, skill, and time to reduce institutionalized power inequalities. Nor did they appreciate that developing shared lessons from practical experiences is labor-intensive. (Wissenbach notes a number of these obstacles.)

Despite all of the above, MRAP and our partners had victories. One neighborhood collaboration took over local political offices; another defeated an attempt to shut down an important community school; others passed legislation; still others made common cause with the Occupy Movement to challenge the demonization of poor people in America. We won…sometimes. More often, we lost but lived to fight another day. And we helped document the ups and downs of our social movements. It was enormous fun even when it was really hard. As the designated holders and tellers of these histories, MRAP participants deepened our understanding of the macro-mezzo-micro interplay of political, social, economic, and cultural power.

From hundreds of conversations, dozens of collaborations, and gigabytes of notes, case studies, and foundation proposals, came a handful of collaborations that advanced our understanding of how U.S. movement organizations synchronize communication, political strategizing, coalition building, and leader and organizational development, and how groups integrate learning into ongoing campaigns.

We have begun to upload MRAP’s work at www.mrap.info. But those pursuing a transformed critical research tradition should acknowledge that the academy has resisted grounded practice, and that the best critical reflections were often led by activists outside the academy, rooted in communities directly facing power inequalities. In light of this, Wissenbach’s insistence that directly affected communities “be at the table” becomes an absolute.

Let me turn to Critical Community Studies more specifically. To maximize publishing, U.S. scholars tend to communicate within, not across, disciplines. Anxious about slowing their productivity, they tend to avoid the unpredictability of practical work. For their part, the civic tech networks and communities facing inequalities find themselves competing for resources, a competition that can undermine the very collaborations they want to build. Even if resources are located, efforts may fade when a grant ends or a government changes hands.

So while I welcome the call for researchers to join practitioners in designing mutually beneficial projects, I want to do it right, and that may mean doing it slowly. First off, who is the “we/us” mentioned twenty times by Wissenbach (or an equal number of times by me)? We need a real “we”: transforming institutional practices and priorities, whether in academic or communication systems, is a collective process. An aggregate of individuals, even if they share common values, does not constitute an “us”; social movements are dialogic communities that consider, test, and unite around strategies. (As Wissenbach underscores, “we” need to shift power, and this requires shared strategies, efficient use of sustainable resources, and a capacity to learn from experience.)

In short, transforming scholarly research from individual to collective models will take movement building. A first step may be recognizing that “we” needs to be built. Calling “we” a social construction does not mean it’s unreal; it means it’s our job to make it real.

Conclusion

I share Wissenbach’s respect for past and present efforts to lessen social inequalities via communication empowerment. I agree that “only inclusive communities can really translate inclusive technology approaches and, consequently, inclusive governance.” And I know that this will be hard to achieve. Progress may lie ahead but precarity and heavy work lie ahead as well. A beloved friend says to me these days, “Getting old is not for the faint of heart.” Neither is movement building.

 

Bibliography:

Howley, K. (2005). Community media: People, places, and communication technologies. Cambridge, UK; New York: Cambridge University Press.

Kavada, A. (2010). Email lists and participatory democracy in the European Social Forum. Media, Culture & Society, 32(3), 355. doi: 10.1080/13691180802304854

Kavada, A. (2013). Internet cultures and protest movements: The cultural links between strategy, organizing and online communication. In B. Cammaerts, A. Mattoni & P. McCurdy (Eds.), Mediation and protest movements (pp. 75–94). Bristol, England: Intellect.

Kidd, D., Barker-Plummer, B., & Rodriguez, C. (2005). Media democracy from the ground up: Mapping communication practices in the counter public sphere. Report to the Social Science Research Council. New York.

Kidd, D., Rodriguez, C., & Stein, L. (2009). Making our media: Global initiatives toward a democratic public sphere. Cresskill: Hampton Press.

Lentz, R. G., & Oden, M. D. (2001). Digital divide or digital opportunity in the Mississippi Delta region of the US. Telecommunications Policy, 25(5), 291–313.

Lentz, R. G. (2011). Regulation as linguistic engineering. In R. Mansell & M. Raboy (Eds.), The Handbook of Global Media and Communication Policy (pp. 432–448). John Wiley & Sons.

Magallanes-Blanco, C., & Pérez-Bermúdez, J. A. (2009). Citizens’ publications that empower: Social change for the homeless. Development in Practice, 19(4-5), 654–664.

Mattoni, A. (2016). Media practices and protest politics: How precarious workers mobilise. Routledge.

Mattoni, A., & Treré, E. (2014). Media practices, mediation processes, and mediatization in the study of social movements. Communication Theory, 24(3), 252–271.

Milan, S. (2009). Four steps to community media as a development tool. Development in Practice, 19(4-5), 598-609.

Rubin, N. (2002). Highlander media justice gathering final report. New Market, TN: Highlander Research and Education Center.

Treré, E., & Magallanes-Blanco, C. (2015). Battlefields, experiences, debates: Latin American struggles and digital media resistance. International Journal of Communication, 9, 3652–366.

[blog] #Data4Good, Part II: A necessary debate

By Miren Gutiérrez*
In the context of the Cambridge Analytica scandal, fake news, the use of personal data for propagandistic purposes and mass surveillance, the Postgraduate Programme “Data analysis, research and communication” proposed a singular debate on how the (big) data infrastructure and other technologies can serve to improve people’s lives and the environment. The discussion was conceived as the second part of an ongoing conversation that started in Amsterdam with the Data for the Social Good conference in November 2017.

We understand that four communities converge in the realisation of data projects with social impact: the organisations that transfer skills, create platforms and tools, and generate opportunities; the catalysts, which provide the funds and the means; those who produce data journalism; and the data activists. However, we rarely see them debate together in public. Last April 12, at the headquarters of the Deusto Business School in Madrid, we met with representatives of these four communities, namely:

[Image: the panellists]

(From left to right, see picture.) Adolfo Antón Bravo, head of the DataLab at Medialab-Prado, where he has led the experimentation, production and dissemination of projects around data culture and the promotion of open data. Adolfo has also been a representative of the Open Knowledge Foundation Spain, a catalyst organisation dedicated, among other things, to financing and promoting data projects.

Mar Cabra, a well-known investigative journalist specialising in data analysis, who has been in charge of the Data and Research Unit of the International Consortium of Investigative Journalists (ICIJ), winner of the 2017 Pulitzer Prize for the investigation known as the “Panama Papers”.

Juan Carlos Alonso, designer at Vizzuality, an organisation that builds applications that help people better understand data through visualisation and comprehend global processes such as deforestation, disaster preparedness, the global flow of trade in agricultural products and action against climate change around the world.

Ignacio Jovtis, head of Research and Policies of Amnesty International Spain. AI uses testimonies, digital cartography, data and satellite photography to denounce and produce evidence of human rights abuses, for example in the war in Syria and the military appropriation of Rohingya land in Myanmar.

And Juanlu Sánchez, another well-known journalist, co-founder and deputy director of eldiario.es, who specialises in digital content, new media and independent journalism. Drawing on data analysis, he has led and collaborated on various investigative stories that have rocked Spain, such as the Bankia scandal.

The prestigious illustrator Jorge Martín facilitated the conversation with a 3.5×1 m mural summarising the main issues tackled by the panellists and the audience.

[Image: the mural]

The conference’s format was not conventional: the panellists were asked not to offer typical presentations but to engage in a dialogue with the audience, most of whom belonged to the four communities mentioned earlier, representing NGOs, foundations, research centres and news media organisations.

Together, we talked about:

• the secret of successful data projects, which combine a “nose for a good story”, legwork (including hanging out in bars) and data in sufficient quantity and quality;
• the need to merge wetware and algorithms;
• the skills gaps within organisations;
• the absolute necessity to collaborate to tackle datasets and issues that are too big to handle alone;
• the need to engage funders at all levels, from individuals to foundations, to make these projects possible;
• the advantages of a good visualisation for both analysis and communication of findings;
• where and how to obtain data, when public data is not truly public, much less open;
• the need for projects of any nature to have real social impact and shape policy;
• the combination of analogue methodologies (e.g., interviews, testimonies, documents) with data-based methodologies (e.g., satellite imagery, interactive cartography and statistics), and how this is disrupting humanitarianism, human rights and environmental campaigning, and newsrooms;
• the need to integrate paper archives (e.g., using optical character recognition) to incorporate the past into the present;
• the magic of combining seemingly unrelated datasets;
• the imperative to share not only datasets but also code, so others can contribute to the conversation, for example by exploring avenues that were not apparent to us;
• the importance of generating social communities around projects;
• the blurring of lines separating journalism, activism and research when it comes to data analysis;
• the experiences of using crowds, not only to gather data but also to analyse them.

Cases and issues discussed included Amnesty’s “troll patrol”, an initiative that assigns digital volunteers to analyse abusive tweets aimed at women, and its satellite-imagery-based investigation into the army’s appropriation of Rohingya land in Myanmar; Trase, a Vizzuality project that tracks agricultural trade flows (including commodities such as soy, beef and palm oil), based, remarkably, both on massive digitised datasets and on the paper trail left by commodities in ports; the “Panama Papers”, and the massive collaborative effort of analysing 2.6 terabytes of data across 109 media outlets in 76 countries; the successful eldiario.es business model, based on data and investigative journalism and supported by subscribers who believe in independent reporting; and the DataLab’s workshops, focused on data journalism and visualisation, which have been running for six years now and have given birth to projects still active today.

The main conclusions could be summarised as follows:

1) the human factor, wetware, is as essential for the success of data projects with social impact as software and hardware, since technology alone is not a magic bullet;
2) the collaboration of different actors from the four communities with different competencies and resources is essential for these projects to be successful and to have an impact; and
3) a social transformation is also needed within non-profit and media organisations, so that data culture spreads far and wide, and the data infrastructure is harnessed for the transformation of society as a whole and the conservation of nature.

* Dr Miren Gutiérrez is the director of the postgraduate programme “Data analysis, research and communication” at the University of Deusto and a lecturer in communication. She is also a Research Associate at DATACTIVE.

Para exercer plenamente a cidadania, é preciso conhecer os filtros virtuais [To exercise citizenship fully, we need to know the virtual filters] (Época Negócios)

Stefania was commissioned to write an article by the Brazilian business magazine Época Negócios. In short, she argues that “estar ciente dos elementos que moldam profundamente nossos universos de informação é um passo fundamental para deixarmos de ser prisioneiros da internet” (being aware of the elements that profoundly shape our information universes is a fundamental step towards ceasing to be prisoners of the internet). Continue reading the article in Portuguese online. Here you can read the original in English.

Why personalization algorithms are ultimately bad for you (and what to do about it)

Stefania Milan

I like bicycles. I often search online for bike accessories, clothing, and bike races. As a result, the webpages I visit, as well as my Facebook wall, often feature ads related to biking. The same goes for my political preferences, or my last search for the cheapest flight or the next holiday destination. This information is (usually) relevant to me. Sometimes I click on the banner; mostly, I ignore it. In most cases, I hardly notice it, but I process and “absorb” it as part of “my” online reality. This unsolicited yet relevant content contributes to making me feel “at home” in my wanderings around the web. I feel amongst my peers.

Behind the efforts to carefully target web content to our preferences are personalization algorithms. Personalization algorithms are at the core of social media platforms, dating apps, and generally of most of the websites we visit, including news sites. They make us see the world as we want to see it. By forging a specific reality for each individual, they silently and subtly shape customized “information diets”.

Our life, both online and offline, is increasingly dependent on algorithms. They shape our way of life, helping us find a ride on Uber or hip, fast food delivery on Foodora. They might help us find a job (or lose one), and locate a partner for the night or for life on Tinder. They mediate our news consumption and the delivery of state services. But what are they, and how do they do their magic? An algorithm can be seen as a recipe for baking an apple tart: in the same way that grandma’s recipe tells us, step by step, what to do to make it right, in computing an algorithm tells the machine what to do with data, namely how to calculate or process it, and how to make sense of it and act upon it. As forms of automated reasoning, algorithms are usually written by humans, yet they operate in the realm of artificial intelligence: with the ability to train themselves over time, they might eventually take on a life of their own, so to speak.
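To make the recipe metaphor concrete, here is a toy sketch in Python (invented purely for illustration, not any platform’s actual code; the function name, threshold and data are all made up). It shows the three moves the metaphor describes: process the data, make sense of it, act on the result.

    # A toy "recipe": an algorithm tells the machine, step by step,
    # what to do with data (invented example, not real platform code).
    def label_temperature(readings):
        """Process raw data, make sense of it, and act upon the result."""
        # Step 1: process the data -- compute the average reading.
        average = sum(readings) / len(readings)
        # Step 2: make sense of it -- compare the average to a threshold.
        verdict = "warm" if average > 25 else "cool"
        # Step 3: act upon it -- return a decision other code can use.
        return verdict

    print(label_temperature([22, 27, 31]))  # prints "warm" (average is about 26.7)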

The central role played by algorithms in our life should be of concern, especially if we conceive of the digital as complementary to our offline self. Today, our social dimension is simultaneously embedded in and (re)produced by technical settings. But algorithms, proprietary and opaque, are invisible to end users: their outcome is visible (e.g., the manipulated content that shows up on one’s customized interface), but it bears no indication of having been manipulated, because algorithms leave no trace and “exist” only when operational. Nevertheless, they do create rules for social interaction, and these rules indirectly shape the way we see, understand and interact with the world around us. And far from being neutral, they are deeply political in nature, designed by humans with certain priorities and agendas.

While there are many types of algorithms, the ones that affect us most today are probably personalization algorithms. They mediate our web experience, easing our choices by giving us information in tune with our clicking habits—and thus, supposedly, our preferences.

They make sure the information we are fed is relevant to us, selecting it on the basis of our prior search history, social graph, gender and location, and, generally speaking, all the information we directly or unwittingly make available online. But because they are invisible to the eyes of users, most of us are largely unaware that this personalization is even happening. We believe we see “the real world”, yet it is just one of many possible realities. This contributes to enveloping us in what US internet activist and entrepreneur Eli Pariser called the “filter bubble”, that is, the intellectual isolation caused by algorithms constantly guessing what we might or might not like, based on the “image” they have of us. In other words, personalization algorithms might eventually reduce our ability to make informed choices, as the options we are presented with and exposed to are limited and repetitive.
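To see how such filtering might work in principle, consider this deliberately simplified sketch (my own illustration; real personalization algorithms are proprietary and far more sophisticated, and every name below is invented). It builds an “image” of the user from past clicks and then surfaces only the best-matching stories:

    # A deliberately simplified personalization sketch; all names invented.
    from collections import Counter

    def personalize(click_history, candidates, top_n=2):
        # Build the user's 'image': how often each topic was clicked.
        taste = Counter(t for item in click_history for t in item["topics"])
        # Score candidates by how well they match that image...
        def score(item):
            return sum(taste[t] for t in item["topics"])
        # ...and surface only the best matches, silently hiding the rest.
        return sorted(candidates, key=score, reverse=True)[:top_n]

    history = [{"topics": ["cycling"]}, {"topics": ["cycling", "travel"]}]
    feed = [
        {"title": "New bike races announced", "topics": ["cycling"]},
        {"title": "Parliament debates the budget", "topics": ["politics"]},
        {"title": "Cheap flights this summer", "topics": ["travel"]},
    ]
    for story in personalize(history, feed):
        print(story["title"])

Note how the politics story never surfaces: nothing marks it as filtered out; the user simply never sees it.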

Why should we care, if all of this eventually is convenient and makes our busy life easier and more pleasant?

First of all, this is ultimately surveillance, be it corporate or institutional. Data is constantly collected about us and our preferences, and it ends up “standing in” for the individual, who is made to disappear in favour of a representation that can be effortlessly classified and manipulated. “When you stare into the Internet, the Internet stares back into you”, once tweeted digital rights advocate @Cattekwaad. The web “stares back” by tracking our behaviours and preferences, and by profiling each of us into categories ready for classification and targeted marketing. We might think of the Panopticon, a circular building designed in the late 18th century by the philosopher Jeremy Bentham as “a new mode of obtaining power of mind over mind” and intended to serve as a prison. In this penal institution, a single guard could effortlessly observe all inmates without them being aware of the permanent surveillance they were subjected to.

But there is a fundamental difference between the idea of the Panopticon and today’s surveillance ecosystem. The jailbirds of the internet age are not only aware of the constant scrutiny they are exposed to; they actively and enthusiastically participate in the generation of data, prompted by social media platforms’ imperative to participate. In this respect, as the UK sociologist Roy Boyne explained, the data collection machines behind personalization algorithms can be seen as post-Panopticon structures, in which a model rooted in coercion has been replaced by mechanisms of seduction in the age of big data. The first victim of personalization algorithms is our privacy, as we seem keen to sacrifice freedom (including the freedom to be exposed to various opinions and the freedom from the attention of others) on the altar of today’s aggressive personalized marketing, in exchange for convenience and functionality.

The second victim of personalization algorithms is diversity, of both opinions and preferences, and the third and ultimate casualty is democracy. While this might sound like an exaggerated claim, personalization algorithms dramatically—and especially, silently—reduce our exposure to different ideas and attitudes, helping us to reinforce our own and allowing us to disregard any other as “non-existent”. In other words, the “filter bubble” created by personalization algorithms isolates us in our own comfort zone, preventing us from accessing and evaluating the viewpoints of others.

The hypothesis of the existence of a filter bubble has been extensively tested. On the occasion of the recent elections in Argentina, last October, Italian hacker Claudio Agosti, in collaboration with the World Wide Web Foundation, conducted research using facebook.tracking.exposed, a piece of software intended to “increase transparency behind personalization algorithms, so that people can have more effective control of their online Facebook experience and more awareness of the information to which they are exposed.”

The team ran a controlled experiment with nine profiles created ad hoc, a sort of “lab experiment” in which the profiles were artificially polarized (keeping some variables constant, each profile “liked” different items). Not only did the data confirm the existence of a filter bubble; it showed a dangerous reinforcement effect, which Agosti termed “algorithm extremism”.
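The logic of such a controlled experiment can be sketched roughly as follows (a hypothetical illustration, not the actual facebook.tracking.exposed code): keep everything constant except what each profile “likes”, then measure how much the resulting feeds diverge.

    # Hypothetical sketch of the experiment's logic (not the actual
    # facebook.tracking.exposed code). Profiles that follow the same pages
    # but 'liked' different posts end up with barely overlapping feeds.
    def jaccard(feed_a, feed_b):
        """Share of items two feeds have in common (1.0 = identical feeds)."""
        a, b = set(feed_a), set(feed_b)
        return len(a & b) / len(a | b)

    # Invented observations for two artificially polarized test profiles.
    feed_profile_1 = {"post-01", "post-02", "post-03", "post-04"}
    feed_profile_2 = {"post-03", "post-05", "post-06", "post-07"}

    print(f"Feed overlap: {jaccard(feed_profile_1, feed_profile_2):.0%}")  # 14%

The lower the overlap between feeds that differ only in their “likes”, the stronger the evidence that the algorithm, not the users’ subscriptions, is deciding what each profile sees.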

What can we do about all this? This question has two answers. The first is easy but uncomfortable. The second is a strategy for the long run and calls for an active role.

Let’s start with the easy one. We ultimately retain a certain degree of human (and democratic) agency: at any given moment, we can choose to opt out. To be sure, erasing our Facebook account doesn’t do the trick of protecting our long-eroded privacy: the company retains the right to keep our data, as per the Terms of Service, the long, convoluted legal document—a contract, that is—we all sign but rarely read. With the “exit” strategy we lose contacts, friendships and joyful exchange, and we are no longer able to peek into the lives of others, but we gain in privacy and, perhaps, reclaim our ability to think autonomously. I bet not many of you will do this after reading this article—I haven’t myself found the courage to disengage entirely from my leisurely existence on social media platforms.

But there is good news. As the social becomes increasingly entrenched in its algorithmic fabric, there is a second option, a sort of survival strategy for the long run. We can learn to live with and deal with algorithms. We can familiarize ourselves with their presence, engaging in a self-reflexive exercise that questions what they show us in any given interface, and why. While, understandably, not all of us might be inclined to learn the ropes of programming, “knowing” the algorithms that so deeply affect us is a fundamental step towards fully exercising our citizenship in the age of big data. “Knowing” here means primarily becoming acquainted with their presence and function, and questioning the fact that being turned into a pile of data has become an almost accepted fact of life. Because being able to think with one’s own head today also means questioning the algorithms that so profoundly shape our information worlds.