Category: show on landing page

Annual DATACTIVE PhD Colloquium, May 4th

Date: Tomorrow, May 4th, 13:30

Location: Oudemanhuispoort 4-6, Amsterdam, room OMHP-E0.12

Tomorrow we will have our yearly PhD colloquium, a moment to showcase our work and receive feedback. You’re invited to join us.

This year’s guests, acting as respondents, are Marlies Glasius (Amsterdam School for Social Science Research) and Annalisa Pellizza (University of Twente). Our new postdoctoral fellow Fabien Cante will also be in attendance.

The program is as follows:

(13:30 – 14:15) Niels ten Oever: “The evolving notion of the public interest in the Internet architecture”

(14:20 – 15:05) Kersti Wissenbach: “Accounting for Power in a Datafied World: A Social Movement Approach to Civic Tech Activism”

(15:10 – 15:25) Coffee Break

(15:25 – 16:10) Becky Kazansky: “Infrastructures of Anticipation: civil society strategies in an age of ubiquitous surveillance”

(16:15 – 17:00) Guillen Torres: “Empowering information activists through institutional resistance”

Welcome to two new team members: Hoang & Fabien

DATACTIVE is happy to welcome two new team members!

Fabien Cante will join us as a postdoc, mostly to help with empirical research. Fabien is interested in media as contested infrastructures of city life. His PhD work (London School of Economics, 2018) was grounded in Abidjan, Côte d’Ivoire; he hopes to continue asking what datafication means in an African metropolis. In addition to academic work, Fabien is comms officer for the Migrants’ Rights Network and active in neighbourhood struggles in South London.


We are also happy to have Hoang joining us to help with our empirical research practices as part of her rMA studies.

Tu Quynh Hoang has a BA in Professional Communication from RMIT University. Concerned about human rights issues in Asia, she moved from working in media companies to doing research on Internet controls and citizens’ media. She is currently studying towards a Research MA in Media Studies at the University of Amsterdam.


Welcome both, we are very much looking forward to working with you!

[blog 2/3] Designing the city by numbers? Digital quantification regimes of cycling mobility 1

This is the second of three blog posts in the series ‘Designing the city by numbers? Bottom-up initiatives for data-driven urbanism in Santiago de Chile’, by Martín Tironi and Matías Valderrama Barragán. Find the first post here. Stay tuned: the next episode will appear next Friday, May 4th!

Over the past two years, we have been studying cases that involve the digital quantification of urban cycling in the city of Santiago. Because of its multiple benefits for the environment, urban congestion, and citizens’ health, urban cycling has been characterized as a “green” and “sustainable” form of mobility, highly attractive for smart city initiatives. Under this trend, various digital devices and self-tracking apps have been developed to quantify and expand urban cycling. The numbers and data generated by this array of technologies have more recently been reframed as valuable crowdsourced information that can inform and guide urban planning decisions and support citizen demands. In this sense, data-driven initiatives seem to promote a spirit in which citizens themselves appear as the central actors of urban planning, thanks to the development of these civic technologies. In contrast, we explore why we should remain sceptical of how such data-driven initiatives adopt what can appear to be bottom-up approaches, and critically vigilant of how such moves can instead be used to promote market-driven technological adoption and low-effort forms of citizenship.

RUBI: Let the bikes speak for themselves

Our first example is RUBI (“Urban Bike Tracker” in Spanish), a device we examined in more detail in a recently published paper in the journal Environment and Planning D. The device anonymously records the routes taken by cyclists in a georeferenced database that is later processed on a web platform (RubiApp) to obtain numbers, metrics and visualizations of the users. It was developed in 2014 by a young engineering student as his undergraduate thesis. At that time, he started a bottom-up project called Stgo2020 to invite cyclists of Santiago to voluntarily participate in collecting data about their everyday trips and, with that, to challenge the status quo in urban planning and allow cyclists to act as “co-designers” of their own city. The project collected data from a hundred volunteer cyclists, generating graphics, tables and heat maps about urban cycling. This information was later shared with the Transportation Office in the hope that it would help to make data-driven decisions about future cycling lanes, but the developer never knew whether the data was used in any way.
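For readers curious about the mechanics, the kind of processing a platform like RubiApp performs on raw traces can be sketched in a few lines: anonymised GPS fixes are binned into coarse grid cells, and the per-cell counts then drive metrics and heat maps. The snippet below is a purely illustrative toy, not the project’s actual code, and the coordinates are made up:

```python
# Hypothetical sketch (not RubiApp's actual implementation): bin
# anonymised GPS points into a coarse grid to obtain heat-map counts
# of where cyclists ride most often.
from collections import Counter

def heat_map(points, cell=0.01):
    """Count GPS fixes per grid cell of `cell` degrees (roughly 1 km)."""
    grid = Counter()
    for lat, lon in points:
        # Round each coordinate to its grid cell and tally the fix.
        grid[(round(lat / cell), round(lon / cell))] += 1
    return grid

# Three invented fixes around Santiago: two fall in the same cell.
trips = [(-33.447, -70.673), (-33.448, -70.672), (-33.521, -70.600)]
busiest_cell, count = heat_map(trips).most_common(1)[0]
print(busiest_cell, count)  # the hottest cell holds 2 of the 3 fixes
```

Real pipelines add map-matching, trip segmentation and cleaning, but the core aggregation step is this simple.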

Because of RUBI’s academic origin, the entire development of the device was based on a strongly scientific narrative about how to achieve a “representative” and “clean” sample of cyclists’ mobility. The developer thus decided to design hardware that could be differentiated from apps like STRAVA and from wearable technologies, which depend on expensive devices and data plans and, in his opinion, present strong biases. This scientific narrative marked the whole design and materiality of RUBI. The first prototypes were large, fragile and very much dependent on the human user in several respects; in fact, the engineer behind the device playfully drew a human face on the first prototype. But several problems emerged with these first versions. Users continually forgot to turn the device on or off when necessary, some users subverted and appropriated the technology in unexpected ways, and particular problems emerged from the device’s GPS. These breakdowns, emerging from the everyday entanglement of cyclists, devices, bicycles and urban spaces, produced data that was “erroneous”, “stupid” or “absurd” for the engineer, and which we call “idiotic data” in our paper, drawing on Isabelle Stengers’ conceptual character of the idiot: data that slowed down and called into question the “clean” collection of data intended for the project. To confront the emergence of idiotic data, new sensors, algorithms and automated functions were added to give the device greater “smartness”, letting it operate as an autonomous and independent entity outside of human control. In the process, the device shifted into a literal “black box”, ensuring as little interaction with the cyclists and the environment as possible; as a result, the practice of quantifying urban cycling became more unnoticed and effortless for cyclists.

During 2016, the RUBI device scaled up to other cities under new business models, losing its bottom-up nature. The company RubiCo was created and reached agreements with local governments and international agencies like the Inter-American Development Bank to map the use of public bicycle rental systems, in some cases even without users’ knowledge. Giving the device “true intelligence” was not only a precautionary “solution” to idiotic data; it was also mobilized to add value and solidity to the regime compared to the competition. In contrast to other self-tracking technologies (apps, wearables, etc.), RubiCo focuses on controlling the biases and noise of the sample of cyclists’ mobility, constituting RUBI’s interaction with the bike as an authentic “moving laboratory” that captures georeferenced data precisely and objectively, in the words of RUBI’s developer.

Stay tuned for the final post in this series for more on the development of the KAPPO pro-cycling smartphone game and its outcomes in Santiago.

1. This text is based on a presentation at the workshop “Designing people by numbers”, held at the Pontificia Universidad Católica in November 2017 with the participation of Celia Lury.

About the authors

Martín Tironi is Associate Professor at the School of Design, Pontifical Catholic University of Chile. He holds a PhD from the Centre de Sociologie de l’Innovation (CSI), École des Mines de Paris, where he also did postdoctoral studies. He received his Master’s degree in Sociology from the Université Paris Sorbonne V and his BA in Sociology from the Pontifical Catholic University of Chile. He is currently a Visiting Fellow (2018) at the Centre for Invention and Social Process, Goldsmiths, University of London. [email: martin.tironi@uc.cl]

Matías Valderrama Barragán is a sociologist with a Master’s in Sociology from the Pontifical Catholic University of Chile. He is currently working on research projects about the digital transformation of organizations and the datafication of individuals and environments in Chile. [email: mbvalder@uc.cl]

NOW OUT! Special issue on ‘data activism’ of Krisis: Journal for Contemporary Philosophy

DATACTIVE is proud to announce the publication of the special issue on ‘data activism’ of Krisis: Journal for Contemporary Philosophy. Edited by Stefania Milan and Lonneke van der Velden, the special issue features six articles by Jonathan Gray, Helen Kennedy, Lina Dencik, Stefan Baack, Miren Gutierrez, and Leah Horgan and Paul Dourish; an interview with Boris Groys; and three book reviews. The journal is open access; you can read and download the articles from http://krisis.eu.

Issue 1, 2018: Data Activism
Digital data increasingly plays a central role in contemporary politics and public life. Citizen voices are increasingly mediated by proprietary social media platforms and shaped by algorithmic ranking and re-ordering, but data also informs how states act. This special issue shifts the focus of the conversation: non-governmental organizations, hackers, and activists of all kinds provide a myriad of ‘alternative’ interventions, interpretations, and imaginaries of what data stands for and what can be done with it.

Jonathan Gray starts off this special issue by suggesting how data can be involved in providing horizons of intelligibility and organising social and political life. Helen Kennedy’s contribution advocates a focus on emotions and everyday lived experiences with data. Lina Dencik puts forward the notion of ‘surveillance realism’ to explore the pervasiveness of contemporary surveillance and the emergence of alternative imaginaries. Stefan Baack investigates how data are used to facilitate civic engagement. Miren Gutiérrez explores how activists can make use of data infrastructures such as databases, servers, and algorithms. Finally, Leah Horgan and Paul Dourish critically engage with the notion of data activism by looking at everyday data work in a local administration. Further, this issue features an interview by Thijs Lijster with Boris Groys, whose work Über das Neue celebrated its 25th anniversary last year. Lastly, three book reviews illuminate key aspects of datafication: Patricia de Vries reviews Metahaven’s Black Transparency; Niels van Doorn writes on Platform Capitalism by Nick Srnicek; and Jan Overwijk comments on The Entrepreneurial Self by Ulrich Bröckling.

[blog] #Data4Good, Part II: A necessary debate

By Miren Gutiérrez*
In the context of the Cambridge Analytica scandal, fake news, the use of personal data for propaganda purposes, and mass surveillance, the postgraduate programme “Data analysis, research and communication” proposed a singular debate on how the (big) data infrastructure and other technologies can serve to improve people’s lives and the environment. The discussion was conceived as the second part of an ongoing conversation that started in Amsterdam with the Data for the Social Good conference in November 2017.

We understand that four communities converge in the realisation of data projects with social impact: the organisations that transfer skills, create platforms and tools and generate opportunities; the catalysts, which provide the funds and the means; those who produce data journalism; and the data activists. However, we rarely see them debate together in public. On April 12, at the headquarters of the Deusto Business School in Madrid, we met with representatives of these four communities, namely:


From left to right (see picture): Adolfo Antón Bravo, head of the DataLab at Medialab-Prado, where he has led the experimentation, production and dissemination of projects around data culture and the promotion of open data. Adolfo has also been a representative of the Open Knowledge Foundation Spain, a catalyst organisation dedicated, among other things, to financing and promoting data projects.

Mar Cabra, a well-known investigative journalist specialising in data analysis, who has been in charge of the Data and Research Unit of the International Consortium of Investigative Journalists (ICIJ), winner of the 2017 Pulitzer Prize for the investigation known as the “Panama Papers”.

Juan Carlos Alonso, designer at Vizzuality, an organisation that builds applications that help people better understand data through visualisation and comprehend global processes such as deforestation, disaster preparedness, the global flow of trade in agricultural products, and action against climate change around the world.

Ignacio Jovtis, head of Research and Policies of Amnesty International Spain. AI uses testimonies, digital cartography, data and satellite photography to denounce and produce evidence of human rights abuses, for example in the war in Syria and the military appropriation of Rohingya land in Myanmar.

And Juanlu Sánchez, another well-known journalist, co-founder and deputy director of eldiario.es, who specialises in digital content, new media and independent journalism. Based on data analysis, he has led and collaborated on various investigative stories that have rocked Spain, such as the Bankia scandal.

The prestigious illustrator Jorge Martín facilitated the conversation with a 3.5×1 m mural summarising the main issues tackled by the panellists and the audience.


The conference’s formula was not conventional, as the panellists were asked not to offer a typical presentation, but to engage in a dialogue with the audience, most of whom belonged to the four communities mentioned earlier, representing NGOs, foundations, research centres and news media organisations.

Together, we talked about:

• the secret of successful data projects combining a “nose for a good story”, legwork (including hanging out in bars) and data in sufficient quantity and quality;
• the need to merge wetware and algorithms;
• the skills gaps within organisations;
• the absolute necessity to collaborate to tackle datasets and issues that are too big to handle alone;
• the need to engage funders at all levels, from individuals to foundations, to make these projects possible;
• the advantages of a good visualisation for both analysis and communication of findings;
• where and how to obtain data, when public data is often not truly public, much less open;
• the need for projects of any nature to have real social impact and shape policy;
• the combination of analogue methodologies (e.g. interviews, testimonies, documents) with data-based methodologies (e.g. satellite imagery, interactive cartography and statistics), and how this is disrupting humanitarian, human rights and environmental campaigning and newsrooms;
• the need to integrate paper archives (e.g. using optical character recognition) to incorporate the past into the present;
• the magic of combining seemingly unrelated datasets;
• the imperative to share not only datasets but also code, so others can contribute to the conversation, for example by exploring avenues that were not apparent to us;
• the importance of generating social communities around projects;
• the blurring of lines separating journalism, activism and research when it comes to data analysis;
• the experiences of using crowds, not only to gather data but also to analyse them.

Cases and issues discussed included Amnesty’s “troll patrol”, an initiative that assigns digital volunteers to analyse abusive tweets aimed at women, and its investigation, based on satellite imagery, into the army’s appropriation of Rohingyas’ land in Myanmar; Trase, a Vizzuality project that tracks agricultural trade flows (including commodities such as soy, beef and palm oil), remarkably based both on massive digitised datasets and on the paper trail left by commodities in ports; the “Panama Papers”, and the massive collaborative effort of analysing 2.6 terabytes of data across 109 media outlets in 76 countries; the successful eldiario.es business model, based on data and investigative journalism and supported by subscribers who believe in independent reporting; and the DataLab’s workshops, focused on data journalism and visualisation, which have been going on for six years now and have given birth to projects still active today.

The main conclusions could be summarised as follows:

1) the human factor (wetware) is as essential to the success of data projects with social impact as software and hardware, since technology alone is not a magic bullet;
2) the collaboration of different actors from the four communities with different competencies and resources is essential for these projects to be successful and to have an impact; and
3) a social transformation is also needed within non-profit and media organisations, so that data culture spreads far and wide and the data infrastructure is put to work for the transformation of society as a whole and the conservation of nature.

* Dr Miren Gutiérrez is the director of the postgraduate programme “Data analysis, research and communication” at the University of Deusto and a lecturer in Communication. She is also a Research Associate at DATACTIVE.

[blog 1/3] Designing the city by numbers? Introduction: Hope for the data-driven city

This is the first of three blog posts of the series ‘Designing the city by numbers? Bottom-up initiatives for data-driven urbanism in Santiago de Chile’, by Martín Tironi and Matías Valderrama Barragán. Stay tuned: the next episode will appear next Friday, April 27!

The digital has invaded contemporary cities in Latin America, transforming ways of knowing, planning and governing urban life. Parallel to the spread of sensors, networks and digital devices of all kinds in cities of the Global North, such technologies are increasingly becoming part of urban landscapes in cities of the Global South under Smart City initiatives. As a result, vast quantities of digital data are produced in increasingly ubiquitous and invisible ways. “Datafication”, the growing translation of multiple phenomena into the format of computable data, has been pronounced by various scholars in the North as propelling a revolution or large-scale epochal change in contemporary life (Mayer-Schönberger and Cukier, 2013; Kitchin, 2014), in which digital devices and data collection would allow better self-knowledge and “smarter” decision-making across varied domains.

To examine the impact of such hyped expectations and promises in Chile, we at the Smart Citizen Project have been studying different cases of Smart City and data-driven initiatives, focusing on how the idea of designing the city by digital numbers has permeated local governments in Santiago de Chile. Public officials and urban planners are increasingly convinced that planning and governance will improve by quantifying urban variables and promoting decision-making that is not only guided or informed but driven by digital data, algorithms and automated analytics, instead of prejudices, emotions or ideologies. In this “dataism” (van Dijck, 2014), it is believed that data simply “speak for themselves”, in a fantasy of immediacy and neutrality.

But perhaps the most innovative aspect of the data-driven Smart City initiatives we have observed is the means by which they also promise to open a new era of experimentation and testing for citizen participation, amplifying notions like ‘urban laboratory,’ ‘living lab,’ ‘pilot projects,’ ‘open innovation,’ and so on. Thanks to digital technologies, the assumption goes, a “democratization of policymaking” that might reduce the state’s monopoly on government decision-making (Esty, 2004; Esty & Rushing, 2007) might at last be realized, producing a greater “symmetry” or “horizontalization” between governors and the governed (Crawford & Goldsmith, 2014). This, however, depends on citizens’ willingness to function as sensors of their own cities, generating and “sharing” relevant, real-time geographic information about their behaviours and needs, to be used by urban planners and public officials in their decisions (Goldsmith & Crawford, 2014; Goodchild, 2007).

Our work in the Smart Citizen Project at the Pontifical Catholic University of Chile underscores the importance of not taking as given any homogeneous or universal “datafication” process, and of problematizing how data-driven and smart governance are enacted, not without problems and breakdowns, in each location. This series of three blog posts therefore stresses that we must start instead by considering how multiple quantification practices run at the same time, and how each one can carry multiple purposes and meanings that can only be addressed on the basis of their heterogeneous contexts of materialization. Moreover, we explore how we are witnessing an increasing diversity of what we call “digital quantification regimes” produced from the South, which aim to position themselves in the market above existing technologies of the North and to establish their data records as the most “participatory”, “representative”, or “accurate” bases for decision-making. We must therefore begin to explore the various suppositions, designs, political rationalities and scripts that these regimes establish in their diverse spheres of action under such growing “citizen”-driven data initiatives in the South. What kinds of practice-ontologies (Gabrys, 2016) might be produced through “citizen”-driven data initiatives? At the same time, we believe that the “experimental” and “citizen” grammar increasingly infused into Smart City and data-driven initiatives in the South must be critically examined, both in its actual development and in its forms of involvement. How is the experimental grammar of smart projects reconfiguring the idea of participation and government in urban space?

So stay tuned for the next posts in this series for more on the RUBI urban bike tracker project and the KAPPO pro-cycling smartphone game in Santiago.

Cited works

Espeland, W. N., & Stevens, M. L. (2008). A sociology of quantification. European Journal of Sociology/Archives Européennes de Sociologie, 49(3), 401-436.

Esty, D. C., & Rushing, R. (2007). Governing by the Numbers: The Promise of Data-Driven Policymaking in the Information Age. Center for American Progress, 5, 21.

Gabrys, J. (2016). Citizen Sensing: Recasting Digital Ontologies through Proliferating Practices. Theorizing the Contemporary, Cultural Anthropology website, March 24, 2016.

Goldsmith, S., & Crawford, S. (2014). The responsive city: Engaging communities through data-smart governance. San Francisco, CA: Jossey-Bass.

Goodchild, M. F. (2007). Citizens as sensors: The world of volunteered geography. GeoJournal, 69(4), 211-221.

Kitchin, R. (2014). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London: Sage.

Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. New York: Houghton Mifflin Harcourt.

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and secular belief. Surveillance & Society, 12(2), 197-208.

 

About the authors

Martín Tironi is Associate Professor at the School of Design, Pontifical Catholic University of Chile. He holds a PhD from the Centre de Sociologie de l’Innovation (CSI), École des Mines de Paris, where he also did postdoctoral studies. He received his Master’s degree in Sociology from the Université Paris Sorbonne V and his BA in Sociology from the Pontifical Catholic University of Chile. He is currently a Visiting Fellow (2018) at the Centre for Invention and Social Process, Goldsmiths, University of London. [email: martin.tironi@uc.cl]

Matías Valderrama Barragán is a sociologist with a Master’s in Sociology from the Pontifical Catholic University of Chile. He is currently working on research projects about the digital transformation of organizations and the datafication of individuals and environments in Chile. [email: mbvalder@uc.cl]

Stefania discusses data, citizenship and democracy in Lisbon, Bologna & Fribourg

On April 12, Stefania will give a talk on the politics of code and data at the ISCTE – Instituto Universitário de Lisboa, in Lisbon, Portugal.

On April 23, she will be in Bologna, Italy, at the School of Advanced International Studies of Johns Hopkins University. She will present her thoughts on ‘Citizenship Re-invented: The Evolution of Politics in the Datafied Society’.

Finally, on April 30 Stefania will lecture at the University of Fribourg, in Switzerland, at the invitation of Prof. Regula Haenggli. The lecture is entitled ‘Digitalization as a challenge to democracy: Possibilities of self-organization, emancipation, and autonomy’.

Para exercer plenamente a cidadania, é preciso conhecer os filtros virtuais [To fully exercise citizenship, one must know the virtual filters] (Época Negócios)

Stefania was commissioned to write an article by the Brazilian business magazine Época Negócios. In it, she argues that “being aware of the elements that profoundly shape our information universes is a fundamental step towards no longer being prisoners of the internet” (“estar ciente dos elementos que moldam profundamente nossos universos de informação é um passo fundamental para deixarmos de ser prisioneiros da internet”). Continue reading the article in Portuguese online. Here you can read the original in English.

Why personalization algorithms are ultimately bad for you (and what to do about it)

Stefania Milan

I like bicycles. I often search online for bike accessories, clothing, and bike races. As a result, the webpages I visit, as well as my Facebook wall, often feature ads related to biking. The same goes for my political preferences, my last search for the cheapest flight, or the next holiday destination. This information is (usually) relevant to me. Sometimes I click on the banner; mostly, I ignore it. In most cases, I hardly notice it, but I process and “absorb” it as part of “my” online reality. This unsolicited yet relevant content contributes to making me feel “at home” in my wanderings around the web. I feel amongst my peers.

Behind the efforts to carefully target web content to our preferences are personalization algorithms. Personalization algorithms are at the core of social media platforms, dating apps, and generally of most of the websites we visit, including news sites. They make us see the world as we want to see it. By forging a specific reality for each individual, they silently and subtly shape customized “information diets”.

Our life, both online and offline, is increasingly dependent on algorithms. They shape our way of life, helping us find a ride on Uber or hip, fast food delivery on Foodora. They might help us find a job (or lose it), and locate a partner for the night or for life on Tinder. They mediate our news consumption and the delivery of state services. But what are they, and how do they do their magic? An algorithm can be seen as a recipe for baking an apple tart: in the same way that grandma’s recipe tells us, step by step, what to do to make it right, in computing an algorithm tells the machine what to do with data, namely how to calculate or process it, and how to make sense of it and act upon it. As forms of automated reasoning, algorithms are usually written by humans, yet they operate in the realm of artificial intelligence: with the ability to train themselves over time, they might eventually “take on” a life of their own, so to speak.
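To make the recipe analogy concrete, here is a toy algorithm in Python (entirely illustrative; the function name and the data are made up). Step by step, it tells the machine how to turn raw click data into a ranked list of topics, just as a recipe turns ingredients into a tart:

```python
# A toy "recipe": turn a list of raw click events into a ranked
# list of topics, following the written steps exactly.
from collections import Counter

def rank_topics(clicks):
    """Each click is a (topic, seconds_spent) pair."""
    scores = Counter()
    for topic, seconds in clicks:   # step 1: read each ingredient
        scores[topic] += seconds    # step 2: accumulate a score per topic
    # step 3: sort topics by score, highest first
    return [topic for topic, _ in scores.most_common()]

clicks = [("cycling", 40), ("politics", 10), ("cycling", 25), ("travel", 5)]
print(rank_topics(clicks))  # ['cycling', 'politics', 'travel']
```

Nothing here is intelligent: the machine merely executes the steps it was given, which is exactly why the choices baked into those steps matter.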

The central role played by algorithms in our life should be of concern, especially if we conceive of the digital as complementary to our offline self. Today, our social dimension is simultaneously embedded and (re)produced by technical settings. But algorithms, proprietary and opaque, are invisible to end users: their outcome is visible (e.g., the manipulated content that shows up on one’s customized interface), but it bears no indication of having been manipulated, because algorithms leave no trace and “exist” only when operational. Nevertheless, they do create rules for social interaction and these rules indirectly shape the way we see, understand and interact with the world around us. And far from being neutral, they are deeply political in nature, designed by humans with certain priorities and agendas.

While there are many types of algorithms, what affects us most today are probably personalization algorithms. They mediate our web experience, easing our choices by giving us information which is in tune with our clicking habits—and thus, supposedly, preferences.

They make sure the information we are fed is relevant to us, selecting it on the basis of our prior search history, social graph, gender and location, and more generally on the basis of all the information we directly or unwittingly make available online. But because they are invisible to the eyes of users, most of us are largely unaware that this personalization is even happening. We believe we see “the real world”, yet it is just one of many possible realities. This contributes to enveloping us in what US internet activist and entrepreneur Eli Pariser called the “filter bubble”, that is to say, the intellectual isolation caused by algorithms constantly guessing what we might or might not like, based on the ‘image’ they have of us. In other words, personalization algorithms might eventually reduce our ability to make informed choices, as the options we are presented with and exposed to are limited and repetitive.
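For readers who like to see the mechanism, the feedback loop behind a filter bubble can be sketched in a few lines of Python. This is a deliberately naive toy model, not any platform’s actual algorithm: the “feed” always shows the categories clicked most in the past, so once a user starts inside part of the catalog, the rest never surfaces again.

```python
# Minimal sketch of a personalization feedback loop (a made-up toy
# model, not any platform's real ranking): the feed shows only the
# top-clicked categories, so exposure narrows over time.
from collections import Counter
import random

CATALOG = ["sport", "politics", "travel", "music", "science"]

def build_feed(history, k=3):
    """Rank categories by past clicks; ties keep catalog order."""
    counts = Counter(history)
    return sorted(CATALOG, key=lambda c: -counts[c])[:k]

random.seed(1)
history = ["sport"]                      # a single initial click
for _ in range(20):                      # the user keeps clicking inside the feed
    feed = build_feed(history)
    history.append(random.choice(feed))  # ...which only offers what was shown

print(sorted(set(build_feed(history))))  # ['politics', 'sport', 'travel']
```

After twenty rounds, “music” and “science” have never appeared: the bubble is stable, and the user has no way of knowing what was filtered out.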

Why should we care, if all of this eventually is convenient and makes our busy life easier and more pleasant?

First of all, this is ultimately surveillance, be it corporate or institutional. Data is constantly collected about us and our preferences, and it ends up “standing in” for the individual, who is made to disappear in favour of a representation that can be effortlessly classified and manipulated. “When you stare into the Internet, the Internet stares back into you”, once tweeted digital rights advocate @Cattekwaad. The web “stares back” by tracking our behaviours and preferences, and by profiling each of us into categories ready for classification and targeted marketing. We might think of the Panopticon, a circular building designed in the late 18th century by the philosopher Jeremy Bentham as “a new mode of obtaining power of mind over mind” and intended to serve as a prison. In this special penal institution, a single guard would effortlessly be able to observe all inmates without them being aware of the condition of permanent surveillance they are subjected to.

But there is a fundamental difference between the idea of the Panopticon and today’s surveillance ecosystem. The jailbirds of the internet age are not only aware of the constant scrutiny they are exposed to; they actively and enthusiastically participate in the generation of data, prompted by the imperative to participate that social media platforms impose. In this respect, as the UK sociologist Roy Boyne explained, the data collection machines of personalization algorithms can be seen as post-Panopticon structures, whereby the model rooted in coercion has been replaced by mechanisms of seduction in the age of big data. The first victim of personalization algorithms is our privacy, as we seem keen to sacrifice freedom (including the freedom to be exposed to various opinions and freedom from the attention of others) on the altar of today’s aggressive personalized marketing, in exchange for convenience and functionality.

The second victim of personalization algorithms is diversity, of both opinions and preferences, and the third and ultimate casualty is democracy. While this might sound like an exaggerated claim, personalization algorithms dramatically—and, crucially, silently—reduce our exposure to different ideas and attitudes, helping us to reinforce our own and allowing us to disregard all others as “non-existent”. In other words, the “filter bubble” created by personalization algorithms isolates us in our own comfort zone, preventing us from accessing and evaluating the viewpoints of others.

The hypothesis of the filter bubble has been extensively tested. On the occasion of the recent elections in Argentina, last October, the Italian hacker Claudio Agosti, in collaboration with the World Wide Web Foundation, conducted research using facebook.tracking.exposed, a software tool intended to “increase transparency behind personalization algorithms, so that people can have more effective control of their online Facebook experience and more awareness of the information to which they are exposed.”

The team ran a controlled experiment with nine profiles created ad hoc, a sort of “lab experiment” in which profiles were artificially polarized (e.g., keeping some variables constant, each profile “liked” different items). Not only did the data confirm the existence of a filter bubble; it also showed a dangerous reinforcement effect, which Agosti termed “algorithm extremism”.
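The reinforcement dynamic at stake can be illustrated with a toy simulation. This sketch is entirely hypothetical (it is not the facebook.tracking.exposed methodology, and all names and numbers are invented): a naive recommender that keeps showing each profile more of whatever it already liked will amplify an initially small polarization over repeated sessions.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def run_sessions(initial_lean, n_sessions=50):
    """Simulate one profile's exposure over repeated sessions.

    initial_lean: probability the profile initially likes a camp-'A' item.
    The recommender samples from the profile's own like history, so every
    like shifts future recommendations further toward the same camp
    (a Polya-urn-style self-reinforcing process).
    Returns the fraction of camp-'A' items in the like history over time.
    """
    likes = ["A" if random.random() < initial_lean else "B" for _ in range(10)]
    exposure = []
    for _ in range(n_sessions):
        shown = random.choice(likes)  # recommend what was liked before
        likes.append(shown)           # the user "likes" what is shown
        exposure.append(likes.count("A") / len(likes))
    return exposure

# Two profiles that start only slightly apart...
slightly_a = run_sessions(0.6)
slightly_b = run_sessions(0.4)
print(f"profile 1 final exposure to camp A: {slightly_a[-1]:.2f}")
print(f"profile 2 final exposure to camp A: {slightly_b[-1]:.2f}")
```

Running this repeatedly shows the essential point: the feedback loop does not average the two profiles back together; each one tends to lock into its own early leaning.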

What can we do about all this? This question has two answers. The first is easy but uncomfortable. The second is a strategy for the long run and calls for an active role.

Let’s start with the easy one. We ultimately retain a certain degree of human (and democratic) agency: at any given moment, we can choose to opt out. To be sure, erasing our Facebook account doesn’t do the trick of protecting our long-eroded privacy: the company has the right to retain our data, as per the Terms of Service, the long, convoluted legal document—a contract, that is—we all sign but rarely read. With the “exit” strategy we lose contacts, friendships, and joyful exchange, and we are no longer able to sneak into the lives of others, but we gain in privacy and, perhaps, reclaim our ability to think autonomously. I bet not many of you will do this after reading this article—I haven’t myself found the courage to disengage entirely from my leisurely existence on social media platforms.

But there is good news. As the social becomes increasingly entrenched in its algorithmic fabric, there is a second option, a sort of survival strategy for the long run. We can learn to live with, and deal with, algorithms. We can familiarize ourselves with their presence, engaging in a self-reflexive exercise that questions what they show us in any given interface and why. While understandably not all of us will be inclined to learn the ropes of programming, “knowing” the algorithms that so much affect us is a fundamental step towards fully exercising our citizenship in the age of big data. “Knowing” here means primarily becoming acquainted with where they occur and what they do, and questioning the fact that being turned into a pile of data is almost an accepted fact of life these days. Because being able to think with one’s own head today also means questioning the algorithms that so profoundly shape our information worlds.

[blog] Critical reflections on FAT* 2018: a historical idealist perspective

Author: Sebastian Benthall, Research Scientist at NYU Steinhardt and PhD Candidate at the UC Berkeley School of Information.

In February 2018, the inaugural FAT* conference was held in New York City:

The FAT* Conference 2018 is a two-day event that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems. This inaugural conference builds on success of prior workshops like FAT/ML, FAT/Rec, DAT, Ethics in NLP, and others.

FAT stands for “Fairness, Accountability, Transparency”, and the asterisk, pronounced “star”, is a wildcard character, indicating that the conference ranges more widely than the earlier workshops it succeeds, such as FAT/ML (ML meaning “machine learning”) and FAT/Rec (Rec meaning “recommender systems”). You might conclude from the amount of geekery in the title and history of the conference that FAT* is a computer science conference.

You would be half right. Other details reveal that the conference has a different, broader agenda. It was held at New York University’s Law School, and many of the committee chairs are law professors, not computer science professors. The first keynote speaker, Latanya Sweeney, argued that technology is the new policy as more and more decisions are delegated to automated systems. The responsibility of governance, it seems, is falling to the creators of artificial intelligence. The keynote speaker on the second day was Prof. Deborah Hellman, who provided a philosophical argument for why discrimination is morally wrong. This opened into a conversation about the relationship between random fate and justice with computer scientist Cynthia Dwork. The other speakers in the program in one way or another grappled with the problem of how to responsibly wield technological power over society.

It was a successful conference, and it holds great promise as a venue for future work. It has this promise because it has been set up to expand intellectually beyond the confines of the current discourse around accountability and automation. This post is about the tensions within FAT* that make it intellectually dynamic. FAT* reflects the conditions of a particular historical, cultural, and economic moment. The contention of this post is that the community involved in the conference has the opportunity to transcend that moment if it confronts its own contradictions head-on through praxis.

One significant tendency among the research at FAT* was the mathematization of ethics. Exemplified by Menon and Williamson’s “The cost of fairness in binary classification” (2018) (winner of a best paper award at the conference), many researchers come to FAT* to translate ethical injunctions, and the tradeoffs between them, into mathematical expressions. This striking intellectual endeavor sits at the center of a number of controversies between the humanities and sciences that have been going on for decades and continue today.

As has long been recognized in the foundational theory of computer science, computational algorithms are powerful because they are logically equivalent to the processes of mathematical proof. Algorithms, in the technical sense of the term, can be no more and no less powerful than mathematics itself. It has long been a concern that a world controlled by algorithms would be an amoral one; in his 1947 book Eclipse of Reason, Max Horkheimer argued that the increasing use of formal reason (which includes mathematics and computation) for pragmatic purposes would lead to a world dominated by industrial power that was indifferent to human moral considerations of what is right or good. Hannah Arendt, in The Human Condition (1959), wrote about the power of scientists who spoke in obscure mathematical language and were therefore beyond the scrutiny of democratic politics. Because mathematics is universal, it is unable to express political interests, which arise from people’s real, particular situations.

We live in a strikingly different time from the mid-20th century. Ethical concerns with the role of algorithms in society have been brought to trained computer scientists, and their natural and correct inclination has been to determine the mathematical form of the concern. Many of these scholars would sincerely like to design a better system.

Perhaps disappointingly, all the great discoveries in the foundations of computing are impossibility results: the Halting Problem, the No Free Lunch theorem, etc. And it is no different in the field of fairness in machine learning. What computer scientists have discovered is that life isn’t, and can’t be, fair, because “fairness” has several different definitions (twenty-one at last count) that are incompatible with each other (Hardt et al., 2016; Kleinberg et al., 2016). Because there are inherent tradeoffs between different conceptions of fairness, and any one definition will allocate outcomes differently for different kinds of people, the question of what fairness is has now been exposed as an inherently political question with no compelling scientific answer.
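The incompatibility is easy to see in a toy example (all numbers invented for illustration): when the base rate of positive outcomes differs between two groups, even a perfectly accurate classifier, which trivially equalizes true positive rates across groups (“equal opportunity”), must hand out positive predictions at different rates (violating “demographic parity”).

```python
def rates(labels, preds):
    """Return (positive prediction rate, true positive rate) for one group."""
    ppr = sum(preds) / len(preds)
    preds_on_positives = [p for y, p in zip(labels, preds) if y == 1]
    tpr = sum(preds_on_positives) / len(preds_on_positives)
    return ppr, tpr

# Invented base rates: 8 of 10 in group 1 are truly positive, 2 of 10 in group 2.
g1_labels = [1] * 8 + [0] * 2
g2_labels = [1] * 2 + [0] * 8

# A perfect classifier predicts exactly the true labels...
g1_ppr, g1_tpr = rates(g1_labels, g1_labels)
g2_ppr, g2_tpr = rates(g2_labels, g2_labels)

print(f"group 1: positive prediction rate {g1_ppr:.1f}, TPR {g1_tpr:.1f}")
print(f"group 2: positive prediction rate {g2_ppr:.1f}, TPR {g2_tpr:.1f}")
# ...so TPRs are equal (1.0 and 1.0) but positive prediction rates are
# not (0.8 vs 0.2): equal opportunity holds, demographic parity fails.
```

Forcing demographic parity here would mean deliberately misclassifying some people in one group, which is the kind of tradeoff the impossibility results formalize.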

Naturally, computer scientists are not the first to discover this. What’s happened is that it is their turn to discover this eternal truth, because in this historical moment computer science is the scientific discipline most emblematic of power. This is because the richest and most powerful companies, the ones almost everybody depends on daily, are technology companies, and these companies project the image that their success is due mainly to the scientific genius of their early employees and the quality of the technology at their operational core.

The problem is that computer science as scientific discipline has very little to do with why large technology companies have so much power and sometimes abuse that power. These companies are much more than their engineers; they also include designers, product managers, salespeople, public relations people, and of course executives and shareholders. As sociotechnical organizations, they are most responsive to the profit motive, government regulations, and consumer behavior. Even if being fair was technically possible, they would still be businesses with very non-technical reasons for being unfair or unaccountable.

Perhaps because these large companies are so powerful, few of the papers at the conference critiqued them directly. Instead, the focus was often on the software systems used by municipal governments. These were insightful and important papers. Barabas et al.’s paper questioned the assumptions motivating much of the inquiry around “fairness in machine learning” by delving into the history and ideology of actuarial risk assessment in criminal sentencing. Chouldechova et al.’s case study in the workings of a child mistreatment hotline (winner of a best paper award) was a realistic and balanced study of the challenges of operating an algorithmic risk assessment system in municipal social services. At its best, FAT* didn’t look much like a computer science conference at all, even when the speakers and authors had computer science training. At its best, FAT* was grappling towards something new.

Some of this grappling is awkward. Buolamwini and Gebru presented a technically and politically interesting study of how commercially available facial recognition technologies underperform on women, on darker-skinned people, and intersectionally on darker-skinned women. In addition to presenting their results, the speakers proudly described how some of the facial recognition companies responded to their article by improving the accuracy of their technology. For some at the conference, this was a victory for fairer representation and accountability of a facial recognition technology that was otherwise built to favor lighter-skinned men. But others found it difficult to celebrate the improved effectiveness of a technology for automated surveillance. Out of context, it’s impossible to know whether this technology does good or ill to those wearing the faces it recognizes. What was presented as a form of activism against repressive or marginalizing political forces may just as well have been playing into their hands.

This political ambiguity was glossed over, not resolved. And therein lay the crux of the political problem at the heart of FAT*: it’s full of well-intentioned people trying to devise technical band-aids for what are actually systemic social and economic problems. Their intentions and their technical contributions are both laudable. But there was something ideologically fishy going on, a fishiness reflective of a broader historical moment. Nancy Fraser (2016) has written about the phenomenon of progressive neoliberalism, an ideology that sounds like an oxymoron but in fact reflects the alliance between the innovation sector and identity-based activist movements. Fraser argues that progressive neoliberalism was a hegemonic force until very recently. This year’s FAT*, with its mainly progressive sense of Fairness and Accountability and arguably neoliberal emphasis on computational solutions, was a throwback to what for many at the conference was a happier political time. I hope that next year’s conference takes a cue from Fraser and is more critical of the zeitgeist.

For now, as a form of activism that changes things for the better, this year’s conference largely fell short because it would not address the systemic elephants in the room. A dialectical sublation is necessary and imminent. For the conference to achieve this effectively, it may need to add another letter to its name, representing another value. Michael Veale has suggested adding an “R”, for reflexivity, perhaps a nod to the cherished value of critical qualitative scholars, who are clearly welcome in the room. However, if the conference is to realize its highest potential, it should add a “J”, for justice, and see what the bright minds of computer science think of that.

References

Arendt, Hannah. The Human Condition. Doubleday, 1959.

Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).

Buolamwini, Joy, and Timnit Gebru. “Gender shades: Intersectional accuracy disparities in commercial gender classification.” Conference on Fairness, Accountability and Transparency. 2018.

Chouldechova, Alexandra, et al. “A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions.” Conference on Fairness, Accountability and Transparency. 2018.

Fraser, Nancy. “Progressive neoliberalism versus reactionary populism: A choice that feminists should refuse.” NORA-Nordic Journal of Feminist and Gender Research 24.4 (2016): 281-284.

Hardt, Moritz, Eric Price, and Nati Srebro. “Equality of opportunity in supervised learning.” Advances in Neural Information Processing Systems. 2016.

Hellman, Deborah. “Indirect Discrimination and the Duty to Avoid Compounding Injustice.” (2017).

Horkheimer, Max. Eclipse of Reason. 1947. Reprint, New York: Continuum, 1974.

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

Miren Gutierrez presents “Datos para la transformación social” (Madrid, April 12)

DATACTIVE Research Associate Miren Gutierrez organised a follow-up of the ‘Data for the Social Good’ event (Amsterdam, November 2017). The debate will take place in Madrid on Thursday the 12th of April. You can check out the impressive line-up in the description of the event (in Spanish).

When: Thursday, April 12, from 16:00 to 18:00

Where: Deusto Business School, calle Castelló 76, Madrid

Four communities frequently converge in data projects with social impact: the organizations that transfer skills, build platforms and tools, and create opportunities to meet; the catalysts, which provide the funds and the means; those producing data journalism; and the activists. Yet we rarely see them debate together in public.

We propose a conference, organized by the “Análisis, investigación y comunicación de Datos” programme of the Universidad de Deusto, that brings representatives of these four groups together on a panel to discuss how data can help drive social transformation in favour of people and the environment, what opportunities for collaboration exist, and which others remain to be created.

Joining us is Mar Cabra, a well-known investigative journalist and data analysis specialist who headed the Data and Research Unit of the International Consortium of Investigative Journalists, winner of the 2017 Pulitzer Prize for the investigation known as the “Panama Papers”.

Ignacio Jovtis is Head of Research and Policy at Amnesty International Spain. AI uses testimonies, digital cartography, data, and satellite photography to denounce and produce evidence of human rights abuses in the war in Syria, of the military appropriation of land in Rohingya villages, and of the refugee crisis in the Mediterranean.

Also joining us is Juan Carlos Alonso, designer at Vizzuality, an organization created to make data design a driver of change. Vizzuality offers applications that help people better understand data through visualization, illuminating processes such as deforestation, disaster preparedness, the global flow of agricultural commodity trade, and climate action around the world.

Juanlu Sánchez is another well-known journalist. Co-founder and deputy editor of eldiario.es, he specializes in digital content, new media, and sustainability models for independent journalism, such as eldiario.es’s membership model. He has led and collaborated on several data-driven investigations, including the Bankia “black cards” case.

Adolfo Antón heads the DataLab at Medialab-Prado, where he has led the experimentation, production, and dissemination of projects around data culture and the promotion of open data. Adolfo has also been a representative of Open Knowledge Foundation España, an organization dedicated, among other things, to funding and fostering data projects.

The debate will be moderated by Miren Gutiérrez, director of the postgraduate programme “Análisis, investigación y comunicación de datos” and researcher at the Universidad de Deusto. Miren is about to publish a book entitled Data Activism and Social Change, precisely on data and social transformation. The conference will be captured in images and shared by the renowned graphic facilitator Jorge Martin, who will record the proposals and ideas raised by panelists and participants.

Whether you want to know what is being done with data to improve the world or to imagine what you could do yourself, we invite you to take part in this debate, which is not meant to be a conventional conference but an interactive, open, dynamic, and participatory dialogue among all those present.

Admission is free until the venue is full.