Category: show on landing page

[blog 1/3] Designing the city by numbers? Introduction: Hope for the data-driven city

This is the first of three blog posts of the series ‘Designing the city by numbers? Bottom-up initiatives for data-driven urbanism in Santiago de Chile’, by Martín Tironi and Matías Valderrama Barragán. Stay tuned: the next episode will appear next Friday, April 27!

The digital has invaded contemporary cities in Latin America, transforming ways of knowing, planning and governing urban life. Parallel to the spread of sensors, networks and digital devices of all kinds in cities of the Global North, such technologies are increasingly becoming part of urban landscapes in the Global South under Smart City initiatives. As a result, vast quantities of digital data are produced in increasingly ubiquitous and invisible ways. This “datafication”, the growing translation of multiple phenomena into the format of computable data, has been proclaimed by various scholars in the North as propelling a revolution or large-scale epochal change in contemporary life (Mayer-Schönberger and Cukier, 2013; Kitchin, 2014), in which digital devices and data collection would allow better self-knowledge and “smarter” decision-making across varied domains.

To examine the impact of such hyped expectations and promises in Chile, we at the Smart Citizen Project have been studying different Smart City and data-driven initiatives, focusing on how the idea of designing the city by digital numbers has permeated local governments in Santiago de Chile. Public officials and urban planners are increasingly convinced that planning and governance will improve by quantifying urban variables and promoting decision-making that is not only guided or informed but driven by digital data, algorithms and automated analytics, instead of prejudices, emotions or ideologies. In this “dataism” (van Dijck, 2014), it is believed that the data simply “speak for themselves”, in a fantasy of immediacy and neutrality.

But perhaps the most innovative aspect of the data-driven Smart City initiatives we have observed is the way they also promise to open a new era of experimentation and testing for citizen participation, amplifying notions like ‘urban laboratory,’ ‘living lab,’ ‘pilot projects,’ ‘open innovation,’ and so on. Thanks to digital technologies, the assumption goes, a “democratization of policymaking” that might reduce the state’s monopoly on government decision-making (Esty, 2004; Esty & Rushing, 2007) might at last be realized, producing a greater “symmetry” or “horizontalization” between governors and the governed (Goldsmith & Crawford, 2014). This, however, depends on citizens’ willingness to function as sensors of their own cities, generating and “sharing” relevant, real-time geographic information about their behaviours and needs, to be used by urban planners and public officials in their decisions (Goldsmith & Crawford, 2014; Goodchild, 2007).

Our work at the Smart Citizen Project at the Pontifical Catholic University of Chile underscores the importance of not taking for granted any homogeneous or universal “datafication” process, and of problematizing how data-driven and smart governance are enacted, not without problems and breakdowns, in each location. This series of three blog posts thus stresses that we must start instead by considering how multiple quantification practices run at the same time, and how each can carry multiple purposes and meanings that can only be addressed on the basis of their heterogeneous contexts of materialization. Moreover, we explore how we are witnessing an increasing diversity of what we call “digital quantification regimes” produced from the South, which aim to position themselves in the market above existing technologies of the North, and to win agreement that their data records are the most “participatory”, “representative” or “accurate” bases for decision-making. We must therefore begin to explore the various suppositions, designs, political rationalities and scripts that these regimes establish in their diverse spheres of action under such growing “citizen”-driven data initiatives in the South. What kind of practice-ontologies (Gabrys, 2016) might be produced through “citizen”-driven data initiatives? At the same time, we believe that the “experimental” and “citizen” grammar increasingly infused into Smart City and data-driven initiatives in the South must be critically examined, both in its actual development and in its forms of involvement. How is the experimental grammar of smart projects reconfiguring the idea of participation and government in urban space?

So stay tuned for the next posts in this series for more on the RUBI urban bike tracker project and the KAPPO pro-cycling smartphone game in Santiago.

 

Cited works

Espeland, W. N., & Stevens, M. L. (2008). A sociology of quantification. European Journal of Sociology/Archives Européennes de Sociologie, 49(3), 401-436.

Esty, D. C. & Rushing, R. (2007). Governing by the Numbers: The Promise of Data-Driven Policymaking in the Information Age. Center for American Progress, 5, 21.

Gabrys, J. (2016). “Citizen Sensing: Recasting Digital Ontologies through Proliferating Practices.” Theorizing the Contemporary, Cultural Anthropology website, March 24, 2016.

Goldsmith, S. & Crawford, S. (2014). The Responsive City: Engaging Communities through Data-Smart Governance. San Francisco, CA: Jossey-Bass.

Goodchild, M. F. (2007). Citizens as sensors: The world of volunteered geography. GeoJournal, 69(4), 211-221.

Kitchin, R. (2014). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London: Sage.

Mayer-Schönberger, V. and Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. New York: Houghton Mifflin Harcourt.

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and secular belief. Surveillance & Society, 12(2), 197-208.

 

About the authors

Martín Tironi is Associate Professor at the School of Design of the Pontifical Catholic University of Chile. He holds a PhD from the Centre de Sociologie de l’Innovation (CSI), École des Mines de Paris, where he also completed postdoctoral studies. He received his Master’s degree in Sociology from the Université Paris Sorbonne V and his BA in Sociology from the Pontifical Catholic University of Chile. He is currently (2018) a Visiting Fellow at the Centre for Invention and Social Process, Goldsmiths, University of London. [email: martin.tironi@uc.cl]

Matías Valderrama Barragán is a sociologist with a Master’s in Sociology from the Pontifical Catholic University of Chile. He is currently working on research projects about the digital transformation of organizations and the datafication of individuals and environments in Chile. [email: mbvalder@uc.cl]

 

Stefania discusses data, citizenship and democracy in Lisbon, Bologna & Fribourg

On April 12, Stefania will give a talk on the politics of code and data at the ISCTE – Instituto Universitário de Lisboa, in Lisbon, Portugal.

On April 23, she will be in Bologna, Italy, at the School of Advanced International Studies of Johns Hopkins University. She will present her thoughts on ‘Citizenship Re-invented: The Evolution of Politics in the Datafied Society’.

Finally, on April 30 Stefania will lecture at the University of Fribourg, in Switzerland, at the invitation of Prof. Regula Haenggli. The lecture is entitled ‘Digitalization as a challenge to democracy: Possibilities of self-organization, emancipation, and autonomy’.

Para exercer plenamente a cidadania, é preciso conhecer os filtros virtuais (Época Negócios)

Stefania was commissioned an article by the Brazilian business magazine Época Negócios. In short, she argues that “being aware of the elements that profoundly shape our information universes is a fundamental step toward no longer being prisoners of the internet”. Continue reading the article in Portuguese online. Here you can read the original in English.

Why personalization algorithms are ultimately bad for you (and what to do about it)

Stefania Milan

I like bicycles. I often search online for bike accessories, clothing, and bike races. As a result, the webpages I visit as well as my Facebook wall often feature ads related to biking. The same goes for my political preferences, my latest search for the cheapest flight, or the next holiday destination. This information is (usually) relevant to me. Sometimes I click on the banner; mostly, I ignore it. In most cases, I hardly notice it, but I process and “absorb” it as part of “my” online reality. This unsolicited yet relevant content contributes to making me feel “at home” in my wanderings around the web. I feel amongst my peers.

Behind the efforts to carefully target web content to our preferences are personalization algorithms. Personalization algorithms are at the core of social media platforms, dating apps, and generally of most of the websites we visit, including news sites. They make us see the world as we want to see it. By forging a specific reality for each individual, they silently and subtly shape customized “information diets”.

Our life, both online and offline, is increasingly dependent on algorithms. They shape our way of life, helping us find a ride on Uber or hip fast-food delivery on Foodora. They might help us find a job (or lose one), or locate a partner for the night or for life on Tinder. They mediate our news consumption and the delivery of state services. But what are they, and how do they do their magic? An algorithm can be seen as a recipe for baking an apple tart: just as grandma’s recipe tells us, step by step, what to do to get it right, in computing an algorithm tells the machine what to do with data, namely how to calculate or process it, and how to make sense of it and act upon it. As forms of automated reasoning, algorithms are usually written by humans, yet they operate in the realm of artificial intelligence: with the ability to train themselves over time, they might eventually take on a life of their own, so to speak.
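
The recipe analogy can be made concrete with a toy example (purely illustrative, not any real system’s code): like a recipe, the algorithm below lists explicit steps the machine follows to turn its ingredients, a list of numbers, into a result.

```python
# A minimal 'recipe': step-by-step instructions for processing data.

def average(numbers):
    # Step 1: start with a running total of zero.
    total = 0
    # Step 2: add each number in turn.
    for n in numbers:
        total += n
    # Step 3: divide by how many numbers there were.
    return total / len(numbers)

print(average([2, 4, 6]))  # 4.0
```

Personalization algorithms are vastly more elaborate, but the principle is the same: a fixed procedure applied, at scale, to data about us.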

The central role played by algorithms in our life should be of concern, especially if we conceive of the digital as complementary to our offline self. Today, our social dimension is simultaneously embedded in and (re)produced by technical settings. But algorithms, proprietary and opaque, are invisible to end users: their outcome is visible (e.g., the manipulated content that shows up on one’s customized interface), but it bears no indication of having been manipulated, because algorithms leave no trace and “exist” only when operational. Nevertheless, they do create rules for social interaction, and these rules indirectly shape the way we see, understand and interact with the world around us. And far from being neutral, they are deeply political in nature, designed by humans with certain priorities and agendas.

While there are many types of algorithms, what affects us most today are probably personalization algorithms. They mediate our web experience, easing our choices by giving us information which is in tune with our clicking habits—and thus, supposedly, preferences.

They make sure the information we are fed is relevant to us, selecting it on the basis of our prior search history, social graph, gender and location, and generally speaking all the information we directly or unwittingly make available online. But because they are invisible to the eyes of users, most of us are largely unaware this personalization is even happening. We believe we see “the real world”, yet it is just one of many possible realities. This contributes to enveloping us in what US internet activist and entrepreneur Eli Pariser called the “filter bubble”, that is to say, the intellectual isolation caused by algorithms constantly guessing what we might or might not like, based on the ‘image’ they have of us. In other words, personalization algorithms might eventually reduce our ability to make informed choices, as the options we are presented with and exposed to are limited and repetitive.
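
The feedback loop behind such a bubble can be sketched in a few lines. The sketch below is a hedged illustration under my own assumptions (invented catalogue, invented scoring rule, not any platform’s actual code): each click reinforces the profile that selects the next round of content, so an early preference snowballs.

```python
# Toy personalization loop: clicks update the profile; the profile ranks
# the next feed; the next feed generates more clicks of the same kind.

from collections import Counter

CATALOG = [
    {"title": "Bike race recap", "topic": "cycling"},
    {"title": "Election analysis", "topic": "politics"},
    {"title": "Cheap flights to Lisbon", "topic": "travel"},
    {"title": "New bike accessories", "topic": "cycling"},
]

def recommend(profile, k=2):
    # Rank items by how often the user engaged with that topic before.
    return sorted(CATALOG, key=lambda item: profile[item["topic"]],
                  reverse=True)[:k]

profile = Counter()
profile["cycling"] += 1            # a single early click on a cycling story

for _ in range(3):                 # the loop then feeds back on itself:
    for item in recommend(profile):
        profile[item["topic"]] += 1   # every impression deepens the bias

print([item["title"] for item in recommend(profile)])
# Both recommendations are now cycling stories; politics and travel
# have silently dropped out of view.
```

Nothing here is malicious: each step is locally “relevant”. The narrowing is an emergent property of the loop, which is precisely what makes it hard to notice.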

Why should we care, if all of this eventually is convenient and makes our busy life easier and more pleasant?

First of all, this is ultimately surveillance, be it corporate or institutional. Data is constantly collected about us and our preferences, and it ends up “standing in” for the individual, who is made to disappear in favour of a representation that can be effortlessly classified and manipulated. “When you stare into the Internet, the Internet stares back into you”, once tweeted digital rights advocate @Cattekwaad. The web “stares back” by tracking our behaviours and preferences, and profiling each of us into categories ready for classification and targeted marketing. We might think of the Panopticon, a circular building designed in the late 18th century by the philosopher Jeremy Bentham as “a new mode of obtaining power of mind over mind” and intended to serve as a prison. In this special penal institute, a single guard would effortlessly be able to observe all inmates without them being aware of the condition of permanent surveillance they are subjected to.

But there is a fundamental difference between the idea of the Panopticon and today’s surveillance ecosystem. The jailbirds of the internet age are not only aware of the constant scrutiny they are exposed to; they actively and enthusiastically participate in generating data, prompted by the imperative to participate of social media platforms. In this respect, as the UK sociologist Roy Boyne explained, the data collection machines of personalization algorithms can be seen as post-Panopticon structures, whereby the model rooted in coercion has been replaced by mechanisms of seduction in the age of big data. The first victim of personalization algorithms is our privacy, as we seem keen to sacrifice freedom (including the freedom to be exposed to various opinions and freedom from the attention of others) on the altar of today’s aggressive personalized marketing, in exchange for convenience and functionality.

The second victim of personalization algorithms is diversity, of both opinions and preferences, and the third and ultimate casualty is democracy. While this might sound like an exaggerated claim, personalization algorithms dramatically—and especially, silently—reduce our exposure to different ideas and attitudes, helping us to reinforce our own and allowing us to disregard any other as “non-existent”. In other words, the “filter bubble” created by personalization algorithms isolates us in our own comfort zone, preventing us from accessing and evaluating the viewpoints of others.

The hypothesis of the existence of a filter bubble has been extensively tested. On the occasion of the recent elections in Argentina, last October, the Italian hacker Claudio Agosti, in collaboration with the World Wide Web Foundation, conducted research using facebook.tracking.exposed, a software tool intended to “increase transparency behind personalization algorithms, so that people can have more effective control of their online Facebook experience and more awareness of the information to which they are exposed.”

The team ran a controlled experiment with nine profiles created ad hoc, a sort of “lab experiment” in which profiles were artificially polarized (e.g., keeping some variables constant, each profile “liked” different items). Not only did the data confirm the existence of a filter bubble; it showed a dangerous reinforcement effect which Agosti termed “algorithm extremism”.
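
The comparison step of such an experiment can be sketched simply. The snippet below is my own illustration, not the project’s actual methodology or data: given which items each polarized test profile was shown, a Jaccard similarity score measures how little the feeds overlap.

```python
# Jaccard similarity: shared items divided by all distinct items shown.
# 1.0 means identical feeds; values near 0 indicate strong bubbles.

def jaccard(feed_a, feed_b):
    a, b = set(feed_a), set(feed_b)
    return len(a & b) / len(a | b)

# Hypothetical feeds observed for two artificially polarized profiles.
feed_left  = ["story_a", "story_b", "story_c"]
feed_right = ["story_c", "story_d", "story_e"]

print(jaccard(feed_left, feed_right))  # 0.2: only 1 of 5 stories shared
```

Repeating such a measurement over time, across the nine profiles, is what lets one see not just a bubble but its reinforcement.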

What can we do about all this? This question has two answers. The first is easy but uncomfortable. The second is a strategy for the long run and calls for an active role.

Let’s start with the easy one. We ultimately retain a certain degree of human (and democratic) agency: at any given moment, we can choose to opt out. To be sure, erasing our Facebook account doesn’t do the trick of protecting our long-eroded privacy: the company retains the right to keep our data, as per the Terms of Service, the long, convoluted legal document—a contract, that is—we all sign but rarely read. With the “exit” strategy we lose contacts, friendships and joyful exchange, and we are no longer able to sneak into the lives of others, but we gain in privacy and, perhaps, reclaim our ability to think autonomously. I bet not many of you will do this after reading this article—I haven’t myself found the courage to disengage entirely from my leisurely existence on social media platforms.

But there is good news. As the social becomes increasingly entrenched in its algorithmic fabric, there is a second option, a sort of survival strategy for the long run. We can learn to live with and deal with algorithms. We can familiarize ourselves with their presence, engaging in a self-reflexive exercise that questions what they show us in any given interface, and why. While understandably not all of us might be inclined to learn the ropes of programming, “knowing” the algorithms that so much affect us is a fundamental step toward fully exercising our citizenship in the age of big data. “Knowing” here means primarily becoming acquainted with their occurrence and function, and questioning the fact that being turned into a pile of data is almost an accepted fact of life these days. Because being able to think with one’s own head today also means questioning the algorithms that so much shape our information worlds.


[blog] Critical reflections on FAT* 2018: a historical idealist perspective

Author: Sebastian Benthall, Research Scientist at NYU Steinhardt and PhD Candidate UC Berkeley School of Information.

In February, 2018, the inaugural 2018 FAT* conference was held in New York City:

The FAT* Conference 2018 is a two-day event that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems. This inaugural conference builds on success of prior workshops like FAT/ML, FAT/Rec, DAT, Ethics in NLP, and others.

FAT stands for “Fairness, Accountability, Transparency”, and the asterisk, pronounced “star”, is a wildcard character, indicating that the conference ranges more widely than the earlier workshops it succeeds, such as FAT/ML (ML meaning “machine learning”) and FAT/Rec (Rec meaning “recommender systems”). You might conclude from the amount of geekery in the title and history of the conference that FAT* is a computer science conference.

You would be half right. Other details reveal that the conference has a different, broader agenda. It was held at New York University’s Law School, and many of the committee chairs are law professors, not computer science professors. The first keynote speaker, Latanya Sweeney, argued that technology is the new policy as more and more decisions are delegated to automated systems. The responsibility of governance, it seems, is falling to the creators of artificial intelligence. The keynote speaker on the second day was Prof. Deborah Hellman, who provided a philosophical argument for why discrimination is morally wrong. This opened into a conversation about the relationship between random fate and justice with computer scientist Cynthia Dwork. The other speakers in the program in one way or another grappled with the problem of how to responsibly wield technological power over society.

It was a successful conference, and it holds great promise as a venue for future work. It has this promise because it has been set up to expand intellectually beyond the confines of the current state of discourse around accountability and automation. This post is about the tensions within FAT* that make it intellectually dynamic. FAT* reflects the conditions of a particular historical, cultural, and economic moment. The contention of this post is that the community involved in the conference has the opportunity to transcend that moment if it encounters its own contradictions head-on through praxis.

One significant tendency among the research at FAT* was the mathematization of ethics. Exemplified by Menon and Williamson’s “The cost of fairness in binary classification” (2018) (winner of a best paper award at the conference), many researchers come to FAT* to translate ethical injunctions, and the tradeoffs between them, into mathematical expressions. This striking intellectual endeavor sits at the center of a number of controversies between the humanities and sciences that have been going on for decades and continue today.

As has long been recognized in the foundational theory of computer science, computational algorithms are powerful because they are logically equivalent to the processes of mathematical proof. Algorithms, in the technical sense of the term, can be no more and no less powerful than mathematics itself. It has long been a concern that a world controlled by algorithms would be an amoral one; in his 1947 book Eclipse of Reason, Max Horkheimer argued that the increasing use of formal reason (which includes mathematics and computation) for pragmatic purposes would lead to a world dominated by industrial power that was indifferent to human moral considerations of what is right or good. Hannah Arendt, in The Human Condition (1959), wrote about the power of scientists who spoke in obscure mathematical language and were therefore beyond the scrutiny of democratic politics. Because mathematics is universal, it is unable to express political interests, which arise from people’s real, particular situations.

We live in a strikingly different time from the mid-20th century. Ethical concerns with the role of algorithms in society have been brought to trained computer scientists, and their natural and correct inclination has been to determine the mathematical form of the concern. Many of these scholars would sincerely like to design a better system.

Perhaps disappointingly, all the great discoveries in the foundations of computing are impossibility results: the Halting Problem, the No Free Lunch theorem, etc. And it is no different in the field of Fairness in Machine Learning. What computer scientists have discovered is that life isn’t, and can’t be, fair, because “fairness” has several different definitions (twenty-one at last count) that are incompatible with each other (Hardt et al., 2016; Kleinberg et al., 2016). Because there are inherent tradeoffs between different conceptions of fairness, and any one definition will allocate outcomes differently for different kinds of people, the question of what fairness is has now been exposed as an inherently political question with no compelling scientific answer.
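
The flavour of these incompatibility results can be shown numerically. The toy example below is my own construction, inspired by but not taken from the cited papers: when two groups have different base rates, a classifier that satisfies one fairness definition (equal selection rates, i.e. demographic parity) is forced to violate another (equal false positive rates, one half of equalized odds).

```python
# Two fairness metrics on the same predictions, computed per group.

def selection_rate_and_fpr(y_true, y_pred):
    """Return (share of people selected, false positive rate)."""
    sel = sum(y_pred) / len(y_pred)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    neg = sum(1 for t in y_true if t == 0)
    return sel, fp / neg

# Group A: half truly qualify; Group B: only a quarter do.
yA_true, yA_pred = [1, 1, 0, 0], [1, 1, 0, 0]  # perfect classifier for A
yB_true, yB_pred = [1, 0, 0, 0], [1, 1, 0, 0]  # selection forced to match A's

selA, fprA = selection_rate_and_fpr(yA_true, yA_pred)
selB, fprB = selection_rate_and_fpr(yB_true, yB_pred)

print(selA == selB)   # True: demographic parity holds (both select 50%)
print(fprA, fprB)     # 0.0 vs 0.333...: false positive rates diverge
```

Equalizing selection rates across groups with unequal base rates necessarily produces unequal error rates, and vice versa; which inequality to accept is a political choice, not a technical one.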

Naturally, computer scientists are not the first to discover this. What has happened is that it is their turn to discover this eternal truth, because in this historical moment computer science is the scientific discipline most emblematic of power. This is because the richest and most powerful companies, the ones almost everybody depends on daily, are technology companies, and these companies project the image that their success is due mainly to the scientific genius of their early employees and the quality of the technology at their operational core.

The problem is that computer science as a scientific discipline has very little to do with why large technology companies have so much power and sometimes abuse that power. These companies are much more than their engineers; they also include designers, product managers, salespeople, public relations people, and of course executives and shareholders. As sociotechnical organizations, they are most responsive to the profit motive, government regulations, and consumer behavior. Even if being fair were technically possible, they would still be businesses with very non-technical reasons for being unfair or unaccountable.

Perhaps because these large companies are so powerful, few of the papers at the conference critiqued them directly. Instead, the focus was often on the software systems used by municipal governments. These were insightful and important papers. Barabas et al.’s paper questioned the assumptions motivating much of the inquiry around “fairness in machine learning” by delving into the history and ideology of actuarial risk assessment in criminal sentencing. Chouldechova et al.’s case study of the workings of a child maltreatment hotline (winner of a best paper award) was a realistic and balanced study of the challenges of operating an algorithmic risk assessment system in municipal social services. At its best, FAT* didn’t look much like a computer science conference at all, even when the speakers and authors had computer science training. At its best, FAT* was grappling towards something new.

Some of this grappling is awkward. Buolamwini and Gebru presented a technically and politically interesting study of how commercially available facial recognition technologies underperform on women, on darker-skinned people, and intersectionally on darker-skinned women. In addition to presenting their results, the speakers proudly described how some of the facial recognition companies responded to their article by improving the accuracy of their technology. For some at the conference, this was a victory for fairer representation and accountability of facial recognition technology that was otherwise built to favor lighter-skinned men. But others found it difficult to celebrate the improved effectiveness of a technology for automated surveillance. Out of context, it’s impossible to know whether this technology does good or ill to those wearing the faces it recognizes. What was presented as a form of activism against repressive or marginalizing political forces may just as well have been playing into their hands.

This political ambiguity was glossed over, not resolved. And therein lay the crux of the political problem at the heart of FAT*: it’s full of well-intentioned people trying to discover technical band-aids for what are actually systemic social and economic problems. Their intentions and their technical contributions are both laudable. But there was something ideologically fishy going on, a fishiness reflective of a broader historical moment. Nancy Fraser (2016) has written about the phenomenon of progressive neoliberalism, an ideology that sounds like an oxymoron but in fact reflects the alliance between the innovation sector and identity-based activist movements. Fraser argues that progressive neoliberalism has been a hegemonic force until very recently. This year’s FAT*, with its mainly progressive sense of Fairness and Accountability and arguably neoliberal emphasis on computational solutions, was a throwback to what for many at the conference was a happier political time. I hope that next year’s conference takes a cue from Fraser and is more critical of the zeitgeist.

For now, as a form of activism that changes things for the better, this year’s conference largely fell short because it would not address the systemic elephants in the room. A dialectical sublation is necessary and imminent. For it to happen effectively, the conference may need to add another letter to its name, representing another value. Michael Veale has suggested that the conference add an “R”, for reflexivity, perhaps a nod to the cherished value of critical qualitative scholars, who are clearly welcome in the room. However, if the conference is to realize its highest potential, it should add a “J”, for justice, and see what the bright minds of computer science think of that.

References

Arendt, Hannah. The Human Condition: A Study of the Central Dilemmas Facing Modern Man. Doubleday, 1959.

Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).

Buolamwini, Joy, and Timnit Gebru. “Gender shades: Intersectional accuracy disparities in commercial gender classification.” Conference on Fairness, Accountability and Transparency. 2018.

Chouldechova, Alexandra, et al. “A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions.” Conference on Fairness, Accountability and Transparency. 2018.

Fraser, Nancy. “Progressive neoliberalism versus reactionary populism: A choice that feminists should refuse.” NORA-Nordic Journal of Feminist and Gender Research 24.4 (2016): 281-284.

Hardt, Moritz, Eric Price, and Nati Srebro. “Equality of opportunity in supervised learning.” Advances in Neural Information Processing Systems. 2016.

Hellman, Deborah. “Indirect Discrimination and the Duty to Avoid Compounding Injustice.” (2017).

Horkheimer, Max. “Eclipse of Reason. 1947.” New York: Continuum (1974).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

Miren Gutierrez presents “Datos para la transformación social” (Madrid, April 12)

DATACTIVE Research Associate Miren Gutierrez organised a follow-up of the ‘Data for the Social Good’ event (Amsterdam, November 2017). The debate will take place in Madrid on Thursday the 12th of April. You can check out the impressive line-up in the description of the event (in Spanish).

When: Thursday, April 12, 16:00–18:00

Where: Deusto Business School, calle Castelló 76, Madrid

Four communities frequently converge in data projects with social impact: organizations that transfer skills, build platforms and tools, and create opportunities for encounter; catalysts, which provide funding and means; those producing data journalism; and activists. Yet we rarely see them debate together in public.

We propose a conference, organized by the “Data Analysis, Research and Communication” programme of the University of Deusto, that brings representatives of these four groups together on one panel to discuss how data can help drive a social transformation in favour of people and the environment, what opportunities for collaboration exist, and which others are yet to be created.

Joining us is Mar Cabra, a well-known investigative journalist and data analysis specialist who headed the Data and Research Unit of the International Consortium of Investigative Journalists, winner of the 2017 Pulitzer Prize for the investigation known as the “Panama Papers”.

Ignacio Jovtis is Head of Research and Policy at Amnesty International Spain. AI uses testimony, digital cartography, data and satellite photography to denounce and document evidence of human rights abuses in the war in Syria, the military appropriation of land in Rohingya villages, and the refugee crisis in the Mediterranean.

Also joining us is Juan Carlos Alonso, designer at Vizzuality, an organization created to make data design a driver of change. Vizzuality builds applications that help people better understand data through visualization, illuminating processes such as deforestation, disaster preparedness, the global flow of agricultural commodity trade, and climate action around the world.

Juanlu Sánchez is another well-known journalist. Co-founder and deputy editor of eldiario.es, he specializes in digital content, new media and sustainability models for independent journalism, such as eldiario.es’s membership model. He has led and collaborated on several data-driven investigations, such as the one into Bankia’s “black” credit cards.

Adolfo Antón is head of the DataLab at Medialab-Prado, where he has led the experimentation, production and dissemination of projects around data culture and the promotion of open data. Adolfo has been a representative of Open Knowledge Foundation Spain, an organization dedicated to funding and fostering data projects, among others.

The panel will be moderated by Miren Gutiérrez, director of the postgraduate programme “Data Analysis, Research and Communication” and researcher at the University of Deusto. Miren is about to publish a book entitled Data Activism and Social Change, precisely on data and social transformation. The conference will be captured in images and shared by the renowned graphic facilitator Jorge Martin, who will take note of the proposals and ideas raised by panelists and participants.

Whether you want to know what is being done with data to improve the world or to imagine what you yourself could do, we invite you to take part in this debate, which aims to be not a conventional conference but an interactive, open, dynamic and participatory dialogue among all present.

Free admission until the venue is full.

[blog] Cloud communities and the materiality of the digital (GLOBALCIT project, EUI)


This invited blog post originally appeared in the forum ‘Cloud Communities: The Dawn of Global Citizenship?’ of the GLOBALCIT project (European University Institute). It is part of an interesting multidisciplinary conversation accessible from the GLOBALCIT website. I wish to thank Rainer Bauböck and Liav Orgad for the invitation to contribute to the debate.

Cloud communities and the materiality of the digital

By Stefania Milan (University of Amsterdam)

As a digital sociologist, I have always found ‘classical’ political scientists and lawyers a tad too reluctant to embrace the idea that digital technology is a game changer in so many respects. In the debate spurred by Liav Orgad’s provocative thoughts on blockchain-enabled cloud communities, I am particularly fascinated by the tension between techno-utopianism on the one hand (above all, Orgad and Primavera De Filippi), and socio-legal realism on the other (e.g., Rainer Bauböck, Michael Blake, Lea Ypi, Jelena Dzankic, Dimitry Kochenov). I find myself somewhere in the middle. In what follows, I take a sociological perspective to explain why there is something profoundly interesting in the notion of cloud communities, why, however, little of it is really new, and why the obstacles ahead are bigger than we might like to think. The point of departure for my considerations is a number of experiences in the realm of transnational social movements and governance: what can we learn from existing experiments that might help us contextualize and rethink cloud communities?

Three problems with Orgad’s argument

To start with, while I sympathise with Orgad’s provocative claims, I cannot but notice that what he deems new in cloud communities—namely the global dimension of political membership and its networked nature—is in fact rather old. Since the 1990s, transnational social movements for global justice have offered non-territorial forms of political membership—not unlike those described as cloud communities. Similar to cloud communities, these movements were the manifestation of political communities based on consent, gathered around shared interests and only minimally rooted in physical territories corresponding to nation states (see, e.g., Tarrow, 2005). In the fall of 2011 I observed with earnest interest the emergence of yet another global wave of contention: the so-called Occupy mobilisation. As a sociologist of the web, I set off in search of a good metaphor to capture the evolution of organised collective action in the age of social media, and the obvious candidate was… the cloud. In a series of articles (see, for example, here and here) and book chapters (e.g., here and here), I developed my theory of ‘cloud protesting’, intended to capture how the algorithmic environment of social media alters the dynamics of organised collective action. In light of my empirical work, I agree with Bauböck, who acknowledges that cloud communities might have something to do with the “expansion of civil society, of international organizations, or of traditional territorial polities into cyberspace”. He also points out how, sadly, people can express their political views – and, I would add, engage in disruptive actions, as happens at some fringes of the movement for global justice – only because “a secure territorial citizenship” protects their exercise of fundamental rights, such as freedom of expression and association. Hence the questions a sociologist might ask: do we really need the blockchain to enable the emergence of cloud communities?
If, as I argue, the existence of “international legal personas” is not a pre-requisite for the establishment of cloud communities, what would the creation of “international legal personas” add to the picture?[1]

Secondly, while I understand why a blockchain-enabled citizenship system would make life easier for the many who do not have access to a regular passport, I am wary of its “institutionalisation”, on account of the probable discrepancies between the ideas (and the mechanisms) associated with a Westphalian state and those of political activists and radical technologists alike. On the one hand, citizens interested in “advanced” forms of political participation (e.g., governance and the making of law) might not necessarily be inclined to form a state-like entity. For example, many accounts of the so-called “movement for global justice” (McDonald, 2006; della Porta & Tarrow, 2005) show how “official” membership and affiliation are often not required, not expected and especially not considered desirable. Activism today is characterised by a dislike and distrust of the state, and a tendency to privilege flexible, multiple identities (e.g., Bennett & Segerberg, 2013; Juris, 2012; Milan, 2013). On the other hand, the “radical technologists” behind the blockchain project are animated by values—an imaginaire (Flichy, 2007)—deeply distinct from those of the state (see, e.g., Reijers & Coeckelbergh, 2018). While blockchain technology is enabled by a complex constellation of diverse actors, it is legitimate to ask whether a technology built with an “underlying philosophy of distributed consensus, open source, transparency and community” and meant to “be highly disruptive” (Walport, 2015) can be bent to serve purposes similar to those of states.

Thirdly, Orgad’s argument falls short of a clear description of what the ‘cloud’ stands for in his notion of cloud communities. When thinking about ‘clouds’, as a metaphor and a technical term, we cannot but think of cloud computing, a “key force in the changing international political economy” (Mosco, 2014, p. 1) of our times, which entails a process of centralisation of software and hardware allowing users to reduce costs by sharing resources. The cloud metaphor, I argued elsewhere (Milan, 2015), is an apt one as it exposes a fundamental ambivalence of contemporary processes of “socio-legal decentralisation”. While claiming distance from the values and dynamics of the neoliberal state, a project of building blockchain-enabled communities still relies on commercially-owned infrastructure to function.

Precisely to reflect on this ambiguity, my most recent text on cloud protesting interrogates the materiality of the cloud. We have long lived in the illusion that the internet was a space free of geography. Yet, as IR scholar Ron Deibert argued, “physical geography is an essential component of cyberspace: Where technology is located is as important as what it is” (original italics). The Snowden revelations, to name just one example, have brought to the forefront the role of the national state in—openly or covertly—setting the rules of user interactions online. What’s more, we can no longer blame the state alone, but rather the “surveillant assemblage” of state and corporations (Murakami Wood, 2013). To me, the big absentee in this debate is the private sector and corporate capital. De Filippi briefly mentioned how the “new communities of kinship” are anchored in “a variety of online platforms”. However, what Orgad’s and partially also Bauböck’s contributions underscore is the extent to which intermediation by private actors stands in the way of creating a real alternative to the state—or at least the fulfilment of certain dreams of autonomy, best represented today by the fascination with blockchain technology. Bauböck rightly notes that “state and corporations… will find ways to instrumentalise or hijack cloud communities for their own purposes”. But there is more to it: the infrastructure we use to enable our interpersonal exchanges and, why not, the blockchain are owned and controlled by private interests subject to national laws. They are not merely neutral pipes, as Dumbrava reminds us.

Self-governance in practice: A cautionary tale

To be sure, many experiments allow “individuals the option to raise their voice … in territorial communities to which they do not physically belong”, as beautifully put by Francesca Strumia. Internet governance is a case in point. Since the early days of the internet, cyberlibertarian ideals, enshrined for instance in the ‘Declaration of Independence of Cyberspace’ by the late JP Barlow, have attributed little to no role to governments—both in deciding the rules for the ‘new’ space and the citizenship of its users (read: the right to participate in the space and in the decision-making about the rules governing it). In those early flamboyant narratives, cyberspace was to be a space where users—but really engineers above all—would translate into practice their wildest dreams in matters of self-governance, self-determination and, to some extent, fairness. While cyberlibertarian views have been appropriated by conservative (anti-state) and progressive forces alike, some of their founding principles have spilled over into real governance mechanisms—above all the governance of standards and protocols by the Internet Engineering Task Force (IETF), and the management of the Domain Name System (DNS) by the Internet Corporation for Assigned Names and Numbers (ICANN).[2] Here I focus on the latter, where I was active for about four years (2014-2017).

ICANN is organized in constituencies of stakeholders, including contracted parties (the ‘middlemen’, that is to say the registries and registrars that on a regional basis allocate and manage the names and numbers on behalf of ICANN, and whose relationship with ICANN is regulated by contract), non-contracted parties (corporations doing business on the DNS, e.g. content or infrastructure providers) and non-commercial internet users (read: us). ICANN’s proceedings are fully recorded and accessible from its website; its public meetings, held three times a year and rotating around the globe, are open to anyone who wants to walk in. Governments are represented in a sort of United Nations-style entity called the Governmental Advisory Committee. While corporate interests are well represented by an array of professional lobbyists, the Non-Commercial Stakeholder Group (NCSG), which stands in for civil society,[3] is a mixed bag of advocates of varied backgrounds, expertise and nationalities: internet governance academics, nongovernmental organisations promoting freedom of expression, and independent individuals who take an interest in the functioning of the logical layer of the internet.

The 2016 transition of the stewardship of the DNS from the US Department of Commerce to the “global multistakeholder community” achieved a one-of-a-kind dream, straight out of the cyberlibertarian vision of the early days: the technical oversight of the internet[4] is in the hands of the people who make and use it, and the (advisory) role of the state is marginal. Accountability now rests solely within the community behind ICANN, which envisioned (and is still implementing) a complex system of checks and balances to allow the various stakeholder voices to be fairly represented. No other critical infrastructure is regulated by its own users. To build on Orgad’s reasoning, the community around ICANN is a cloud community, which operates by voluntary association and consensus,[5] and is entitled to produce “governance and the creation of law”.[6]

But the system is far from perfect. Let’s look at how so-called civil society is represented, focusing on one such entity, the NCSG. Firstly, given that everyone can participate, the variety of views represented is enormous, which often hinders the constituency’s ability to be effective in policy negotiations. Meanwhile, the size of the group is relatively small: at the time of writing, the Non-Commercial Users Constituency (the larger of the two that form the NCSG) comprises “538 members from 161 countries, including 118 noncommercial organizations and 420 individuals”, making it the largest constituency within ICANN. This is nothing when compared to the global internet population it serves, confirming, as Dzankic argues, that “direct democracy is not necessarily conducive to broad participation in decision-making”. Secondly, ICANN policy-making is highly technical and specialised; the learning curve is dramatically steep. Thirdly, to be effective, the amount of time a civil society representative should spend on ICANN is largely incompatible with a regular day job; civil society cannot compete with corporate lobbyists. Fourthly, with ICANN meetings rotating across the globe, one needs to be on the road for at least a month per year, at considerable personal and financial cost.[7] In sum, while participation is in principle open to everyone, informed participation has much higher access barriers, which have to do with expertise, time, and financial resources (see, e.g., Milan & Hintz, 2013).

As a result, we observe a number of dangerous distortions of political representation. For example, when only the highly motivated participate, the views and “imaginaries” represented are often at the opposite ends of the spectrum (cf. Milan, 2014). Only the most involved really partake in decision-making, in a mechanism well known in sociology: the “tyranny of structurelessness” (Freeman, 1972), typical of participatory, consensus-based organising. The extreme personalisation of politics that we observe within civil society at ICANN—a small group of long-term advocates with high personal stakes—also yields another, similar mechanism, known as “the tyranny of emotions” (Polletta, 2002), by which the most invested, independently of the suitability of their curricula vitae, end up assuming informal leadership roles—and, as the case of ICANN shows, even in the presence of formal and carefully weighted governance structures. Decision-making is thus based on a sort of “microconsensus” within small decision-making cliques (Gastil, 1993).[8] To make things worse, ICANN is increasingly making exceptions to its own, community-established rules, largely under pressure from corporations as well as law enforcement: for example, the corporation has recently been accused of bypassing consensus policy-making through voluntary agreements and private contracting.

Why not (yet?): On new divides and bad players

In conclusion, while I value the possibilities blockchain technology opens for experimentation as much as Primavera De Filippi does, I do not believe it will really solve our problems in the short to medium term. Rather, as is always the case with technology, given its inherently political nature (cf. Bijker, Hughes, & Pinch, 2012), new conflicts will emerge—and they will concern both its technical features and its governance.

Earlier contributors to this debate have raised important concerns which are worth listening to. Besides Bauböck’s concerns over the perils for democracy represented by a consensus-based, self-governed model, endorsed also by Blake, I want to echo Lea Ypi’s reminder of the enormous potential for exclusion embedded in technologies, as digital skills (but also income) are not equally distributed across the globe. For the time being, a citizenship model based on blockchain technology would be for the elites only, and would contribute to creating new divides and amplifying existing ones. The first fundamental step towards the cloud communities envisioned by Orgad would thus see the state stepping in (once again) and taking charge of creating appropriate data and algorithmic literacy programmes, whose scope is out of reach for corporations and organised civil society alike.

There is more to it, however. The costs of blockchain technology to our already fragile ecosystem are rising along with its popularity. These infrastructures are energy-intensive: writing about the cryptocurrency Bitcoin, the tech magazine Motherboard estimated that each transaction consumes 215 kilowatt-hours of electricity—the equivalent of the weekly consumption of an American household. A world built on the blockchain would have a vast environmental footprint (see also Mosco, 2014). Once again, the state might play a role in imposing adequate regulation mindful of the environmental costs of such programs.
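The comparison can be checked with a quick back-of-envelope calculation. The sketch below is not from the original post: the assumed figure of roughly 900 kWh per month for an average US household is my own ballpark, used only to test whether "215 kWh per transaction ≈ one household-week" holds up.

```python
# Rough sanity check of the cited estimate. The household figure is an
# assumption (about 900 kWh per month for an average US household),
# not a number from the post or from Motherboard.
KWH_PER_BITCOIN_TRANSACTION = 215    # Motherboard's per-transaction estimate
US_HOUSEHOLD_KWH_PER_MONTH = 900     # assumed average consumption

# Convert monthly household use to a weekly figure.
weekly_household_kwh = US_HOUSEHOLD_KWH_PER_MONTH * 12 / 52
ratio = KWH_PER_BITCOIN_TRANSACTION / weekly_household_kwh

print(f"one week of household use: {weekly_household_kwh:.1f} kWh")
print(f"one transaction / one household-week: {ratio:.2f}")
```

Under these assumptions the ratio comes out very close to one, so the equivalence claimed in the text is at least internally plausible.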

But I do not intend to glorify the role of the state. On the contrary, I believe we should also watch out for any attempts by the state to curb innovation. The relatively brief history of digital technology, and even more that of the internet, is awash with examples of late but extremely damaging state interventions. As soon as a given technology performs roles or produces information that are of interest to the state (e.g., interpersonal communications), the state wants to jump in, and often does so in pretty clumsy ways. The recent surveillance scandals have abundantly shown how state powers firmly inhabit the internet (cf., Deibert, 2009; Deibert, Palfrey, Rohozinski, & Zittrain, 2010; Lyon, 2015)—and, as the Cambridge Analytica case reminds us, so do corporate interests. Moreover, the two are, more often than not, dangerously aligned.

I do not intend, with my cautionary tales, to hinder any imaginative effort to explore the possibilities offered by the blockchain to rethink how we understand and practice citizenship today. The case of Estonia shows that different models based on alternative infrastructure are possible, at least on a small scale and in the presence of a committed state. As scholars we ought to explore those possibilities. Much work is needed, however, before we can proclaim the blockchain revolution.

References

Bennett, L. W., & Segerberg, A. (2013). The Logic of Connective Action: Digital Media and the Personalization of Contentious Politics. Cambridge, UK: Cambridge University Press.

Bijker, W. E., Hughes, T. P., & Pinch, T. (Eds.). (2012). The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, MA and London, England: MIT Press.

Deibert, R. J. (2009). The geopolitics of internet control: censorship, sovereignty, and cyberspace. In A. Chadwick & P. N. Howard (Eds.), The Routledge Handbook of Internet Politics (pp. 323–336). London: Routledge.

Deibert, R. J., Palfrey, J. G., Rohozinski, R., & Zittrain, J. (Eds.). (2010). Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace. Cambridge, MA: MIT Press.

della Porta, D., & Tarrow, S. (Eds.). (2005). Transnational Protest and Global Activism. Lanham, MD: Rowman & Littlefield.

Flichy, P. (2007). The internet imaginaire. Cambridge, Mass.: MIT Press.

Freeman, J. (1972). The Tyranny of Structurelessness.

Gastil, J. (1993). Democracy in Small Groups. Participation, Decision Making & Communication. Philadelphia, PA and Gabriola Island, BC: New Society Publishers.

Juris, J. S. (2012). Reflections on #Occupy Everywhere: Social Media, Public Space, and Emerging Logics of Aggregation. American Ethnologist, 39(2), 259–279.

Lyon, D. (2015). Surveillance After Snowden. Cambridge and Malden, MA: Polity Press.

McDonald, K. (2006). Global Movements: Action and Culture. Malden, MA and Oxford: Blackwell.

Milan, S. (2013). WikiLeaks, Anonymous, and the exercise of individuality: Protesting in the cloud. In B. Brevini, A. Hintz, & P. McCurdy (Eds.), Beyond WikiLeaks: Implications for the Future of Communications, Journalism and Society (pp. 191–208). Basingstoke, UK: Palgrave Macmillan.

Milan, S. (2015). When Algorithms Shape Collective Action: Social Media and the Dynamics of Cloud Protesting. Social Media + Society, 1(1).

Milan, S., & Hintz, A. (2013). Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena? Policy & Internet, 5, 7–26.

Milan, S., & ten Oever, N. (2017). Coding and encoding rights in internet infrastructure. Internet Policy Review, 6(1).

Mosco, V. (2014). To the Cloud: Big Data in a Turbulent World. New York: Paradigm Publishers.

Murakami Wood, D. (2013). What Is Global Surveillance?: Towards a Relational Political Economy of the Global Surveillant Assemblage. Geoforum, 49, 317–326.

Polletta, F. (2002). Freedom Is an Endless Meeting: Democracy in American Social Movements. Chicago: University of Chicago Press.

Reijers, W., & Coeckelbergh, M. (2018). The Blockchain as a Narrative Technology: Investigating the Social Ontology and Normative Configurations of Cryptocurrencies. Philosophy & Technology, 31(1), 103–130.

Tarrow, S. (2005). The New Transnational Activism. New York: Cambridge University Press.

Walport, M. (2015). Distributed Ledger Technology: Beyond blockchain. London: UK Government Office for Science.

Notes:

[1] I am aware that there is a fundamental drawback in social movements when compared to cloud communities: unlike the latter, the former are not rights providers. However, these are the questions one could ask taking a sociological perspective.

[2] The system of unique identifiers of the DNS comprises the so-called “names”, standing in for domain names (e.g., www.eui.eu), and “numbers”, or Internet Protocol (IP) addresses (e.g., the “machine version” of the domain name that a router for example can understand). The DNS can be seen as a sort of “phone book” of the internet.
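The phone-book analogy can be sketched in a few lines of Python. The dictionary below is purely hypothetical: real DNS resolution is hierarchical and distributed across registries and resolvers, and the addresses shown come from the reserved documentation range (RFC 5737), not from any live record.

```python
# A toy "phone book": a flat mapping from human-readable domain names
# ("names") to Internet Protocol addresses ("numbers"). Entries are
# illustrative only (192.0.2.0/24 is reserved for documentation).
dns_phone_book = {
    "www.eui.eu": "192.0.2.10",
    "example.org": "192.0.2.20",
}

def resolve(name: str) -> str:
    """Return the 'machine version' of a domain name, or fail like NXDOMAIN."""
    try:
        return dns_phone_book[name]
    except KeyError:
        raise LookupError(f"NXDOMAIN: {name}")

print(resolve("www.eui.eu"))  # looks up the name, prints the number
```

A router works with the "number" on the right; humans remember the "name" on the left. The DNS that ICANN coordinates is, in essence, the globally agreed version of this mapping.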

[3] Civil society representation in ICANN is more complex than what is described here. The NCSG is composed of two (litigious) constituencies, namely the Non-Commercial Users Constituency (NCUC) and the Non-Profit Operational Concerns (NPOC). In addition, “non-organised” internet users can elect their representatives to the At-Large Advisory Committee (ALAC), organised on a regional basis. The NCSG, however, is the only one that directly contributes to policy-making.

[4] Technically, of the DNS, which is only a portion of what we call “the internet”, although the most widely used one.

[5] ICANN is both a nonprofit corporation registered under Californian law, and a community of volunteers who set the rules for the management of the logical layer of the internet by consensus. See also the ICANN Bylaws (last updated in August 2017).

[6] This should at least in part address Post’s doubts about the ability of a political community to govern those outside of its jurisdiction. One might argue that internet users are, perhaps unwillingly or simply unconsciously, within the “jurisdiction” of ICANN. I do believe, however, that the case of ICANN is an interesting one for its being in between the two “definitions” of political communities.

[7] ICANN allocates considerable but not sufficient resources to support civil society participation in its policymaking. These include travel bursaries, accommodation costs, and fellowship programs for the induction of newcomers.

[8] Although a quantitative analysis of the stickiness of participation in relation to discursive change reveals a more nuanced picture (see, for example, Milan & ten Oever, 2017).

 

[blog] Tech, data and social change: A plea for cross-disciplinary engagement, historical memory, and … Critical Community Studies

Kersti R. Wissenbach | March 2018

It has been a while since I first set foot in the universe of technology and socio-political change. Back then, coming from a background in critical development studies and communication science, I was fascinated by the role community radio could play in fostering dialogue among communities in remote areas, and between those communities and their government representatives.

My journey started in the early 2000s, in the most remote parts of Ghana’s Upper West region, with Radio Progress, a small community radio station doing a great job of embracing diversity. Single-feature mobile phones were about to become a thing in the country, and the radio started to experiment with call-in programs to engage listeners in live discussions with local politicians. Before, radio volunteers would drive to the different villages to collect people’s concerns, and only then bring those recorded voices back into a studio-based discussion with invited politicians. The community could merely listen in as their concerns were discussed. With the advent of mobile phones, people suddenly could do more than just passively listen to the responses: finally they could engage in real-time dialogue with their representatives, hearing their own voices on air. Typically, people would gather with family and other community members during the call-in hours to voice their concerns collectively. Communities would not only raise concerns, but also share positive experiences of local representatives following up on their requests. These stories encouraged neighbouring communities to get involved in the call-in programs as well, raising their own concerns and needs.

Fast forward to today, and much has changed on the ‘tech for social change’ horizon, at least if we listen to donor agendas and the dominant discourses in the field and in academia. But what has really changed is largely one thing: the state of technology.[1] In the space of two decades, our enthusiasm, and donor attention, fixated first on the ubiquity of mobile technologies, followed by online (crowdsourcing) platforms, social media, everything data (oh, wait … BIG data), and blockchain technology.

Whilst much of what has changed in these regards over the last few decades can be bundled under the Information and Communication for Development (ICT4D) label, one aspect seems to remain constant: change, if it is meant to happen and last, has to be rooted in the contexts and needs of those it intends to address. This is the ultimate ingredient for direct and inclusive engagement of the so-called civil society. Like a cake that needs yeast to rise, no matter whether we add chocolate or lemon, socio-political change in the interest of the people requires the buy-in of the people, no matter what tech is on the menu at a certain moment in time, and in a certain place of the world.

We have learnt many lessons along the way, and sometimes we had to learn them the hard way. Some are condensed in initiatives such as the Principles for Digital Development, a living set of principles that helps practitioners engaging with the role of technologies in social or political change programs to learn from past experiences and avoid falling into the same traps, be they of a technological, political, and/or ethical nature.

We have observed an upsurge in ‘civic’ uses of technologies to facilitate people’s direct engagement in governance, coupled with an emphasis on ‘open government’ models. Much of this work emerged in parallel to, or from, earlier ICT4D experiences, and largely taps into the same funding structures. The lessons learned should be a shared heritage in the field. With various early programs coming to an end, this transnational community of well-intentioned practitioners, many of whom have been involved in what we earlier called ICT4D work, is now reflecting on the effectiveness of technology in promoting civil society participation in governance dynamics. What puzzles me year after year, however, is how practitioners of civic tech and open government, currently producing ‘first lessons learned’ on the effectiveness of technology in civil society participation in governance, are largely reproducing what we already know, and thus lessons we should already have learnt. As critical as I am of project work driven by traditional development cooperation, all this leaves me wondering what is novel, if anything, in these newest networks, which largely draw from the same funding pots.

New developments in the tech field do not liberate us from the responsibility to learn from what has already been learned, and to build on it. The lessons learnt in decades of development communication and ICT4D work evidently cut across technological innovations, and apply to mobile technology as much as to the blockchain. Most importantly: different socio-political contexts call for tailored solutions, since the challenges remain distinct and grow in complexity, as we can see in the growing literature on critical data studies (see e.g. Dalton et al., 2016; Kitchin and Lauriault, 2014).

The critical role of proactive communities, their contexts and their needs in fostering social or political change has been discussed for decades. Moreover, as the Radio Progress anecdote shows, it applies across technologies. Sadly, once again, the dominant civic tech discourse seems to keep starting from the ‘tech’ rather than the ‘civic’. Analyses set off from the technology-in-governance side, rather than from the much-needed critical discussion of the fundamental role of power in governance: how it is constructed, reproduced, and distributed.

Starting from the aseptic end of the spectrum confines us to a tech-centric perspective, with all the limitations highlighted since the early days of Communication for Social Change and the ICT4D critique. Instead, we should reflect on how power structures are seeded and nourished from within the very same communities. This relates to issues such as geographical as well as skill-related biases, which give rise to patterns of exclusion that no technology alone can solve. Those biases are then reproduced, not solved, by technological solutions whose aim was, instead, to enable inclusive forms of governance.

For the civic tech field to move forward, we should move beyond an emphasis on feedback allocation and end-users that ultimately centres on the technological component; we should instead adopt a broader perspective that recognises the user not merely as a tech consumer/adopter, but as a complex being embedded in civil society networks and power structures. We should therefore ask critical questions beyond technology, and about communities instead; we should ask ourselves, for example, how best to integrate people’s needs and backgrounds across all stages of civic tech programs. Such a perspective should include a critical examination of who the driving forces of the civic tech community are, and how they subsequently affect decision-making on the development of infrastructures. What is crucial to understand, I argue, is that only inclusive communities can really translate into inclusive technology approaches and, consequently, inclusive governance.

From the perspective of an academic observer, a disciplinary evolution is in order too, if we are to capture, understand, and critically contribute to these dynamics. The proposed shift of focus from the ‘tech’ to the ‘civic’ should be mirrored in the literature by a new sub-field, which we may call Critical Community Studies. Emerging at the crossroads of disciplines such as Social Movement Studies, Communication for Social Change, and Critical Data Studies, Critical Community Studies would encourage taking the community as the entry point in the study of technology for social change. This means, in a case such as the civic tech community, addressing issues such as internal diversity, the inclusiveness of decision-making processes, and the different ways of engaging people. It also relates to the roots of the decisions made in civic tech projects, and to how far the communities supposed to benefit from certain decisions have a seat at the table. More generally, Critical Community Studies should invite us to critically reflect on the concept of inclusion, both in practitioner agendas and in academic frameworks. It would also encourage us to contextualise, take a step back and ask difficult questions, departing from critical development and communication studies (see e.g. Enghel, 2014; Freire, 1968; Rodriguez, 2016), while taking a feminist perspective (see e.g. Haraway, 1988; Mol, 1999).

Since such a disciplinary evolution cannot but happen in dialogue with existing approaches and thinkers, I would wish to see this post evolve into a vibrant, cross-disciplinary conversation on what Critical Community Studies could look like.

 

I would like to thank Stefania Milan for very valuable and in-depth feedback and insights whilst writing this post.


Cited work

Dalton CM, Taylor L and Thatcher J (2016) Critical Data Studies: A dialog on data and space. Big Data & Society 3(1). DOI: 10.1177/2053951716648346.

Enghel F (2014) Communication, Development, and Social Change: Future Alternatives. In: Global communication: new agendas in communication. Routledge, pp. 129–141.

Freire P (1968) Pedagogy of the Oppressed. New York: Herder and Herder.

Haraway D (1988) Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies 14(3): 575–599. DOI: 10.2307/3178066.

Kitchin R and Lauriault T (2014) Towards Critical Data Studies: Charting and Unpacking Data Assemblages and Their Work. ID 2474112, SSRN Scholarly Paper. Rochester, NY: Social Science Research Network. Available at: https://papers.ssrn.com/abstract=2474112 (accessed 19 March 2018).

Mol A (1999) Ontological politics. A word and some questions. The Sociological Review 47(S1): 74–89. DOI: 10.1111/j.1467-954X.1999.tb03483.x.

Rodriguez C (2016) Human agency and media praxis: Re-centring alternative and community media research. Journal of Alternative and Community Media 1(0): 36–38.

 

I am consciously not using the term innovation here, since I truly believe that innovation can only be what truly fits people’s contexts and needs. Innovation, then, is not to be confused with the latest tech advancement or hype.

[blog] Facebook newsfeed changes: Three hypotheses to look into the future

Image: Vincenzo Cosenza

In this blog post, DATACTIVE research associate Antonio Martella looks ahead to the consequences of Facebook’s news feed modifications, introduced as part of larger corporate policy changes. He investigates and discusses the implications through three hypotheses: 1) the divide between the attention-rich and the attention-poor will grow; 2) increasing engagement with peer-created content will tighten the filter bubble aspect of networking; and 3) the “new” news feed will have a negative impact on users’ mood.

Guest Author: Antonio Martella

On November 11th, 2017, Facebook announced that the user timeline would change in January 2018. In their words:

“With this update, we will also prioritize posts that spark conversations and meaningful interactions between people. To do this, we will predict which posts you might want to interact with your friends about and show these posts higher in the feed. These are posts that inspire back-and-forth discussion in the comments and posts that you might want to share and react to – whether that’s a post from a friend seeking advice, a friend asking for recommendations for a trip, or a news article or video prompting lots of discussions. […] We will also prioritize posts from friends and family over public content, consistent with our News Feed values.” (Newsroom Facebook 2018)

Any modification to the feed algorithm will have many consequences, and these are not equally predictable. Facebook is a very complicated environment, semi-public in nature and not only related to friendship management. In fact, as the Pew Research Center reported last September, 67% of Americans consume news over social media. This pattern seems to apply to European news consumption too, where youngsters are exposed to news mostly in a social media context rather than on television or in newspapers. Indeed, as the Reuters Institute’s Digital News Report 2017 shows, many users follow others because of the news they share.

According to the Pew Research report, Facebook surpasses other social media as a source of news consumption. This is partially due to Facebook’s large user base, and partially because news is actually interwoven with people’s timelines. The Digital News Report also shows that exposure to news on Facebook is often incidental: a direct result of news shared by other users, of the wide range of news companies that are followed, and so on. Nevertheless, we need to keep in mind that exposure to any content on social media or search engines is algorithm-driven.

Following these considerations, there are several possible consequences of the Facebook news feed changes. This blog post looks into three probable implications:

  1. the divide between the attention-rich and the attention-poor will grow;
  2. continuous personalisation;
  3. a negative impact on users’ mood.

1. The divide between the attention-rich and the attention-poor will grow

All pages and groups that share content on Facebook will lose visibility, along with the revenues that come from users reading their posts, clicking their links, and visiting their websites1. It is easy to guess that those who want to remain visible have two choices: either pay more for Facebook ads in order to make their posts visible, or create more engaging content. But the engagement generated on Facebook is deeply connected with the number of followers. This will probably increase the gap between the attention-rich and the attention-poor, in line with the observed Matthew effect (Merton, 1968) that rules many patterns and practices online (Barabasi, 2013) and in social media.

In fact, many aspects of society, both online and offline, are governed by the preferential attachment process that lies behind the so-called “Matthew effect” or the “80/20 rule”. Hence, the more connections you have, the more visible you are, and the more new connections you will get as a consequence. This principle can easily be illustrated by the fact that famous websites and people tend to have more followers on social media. But the other way around is equally true: the fewer connections you have, the less attention you will get. In conclusion, content produced by people or organisations with less power, fewer resources and lower budgets will decrease in visibility.

2. Continuous personalisation

The second consequence of the news feed change concerns the kind of content that will be dominant in users’ feeds. According to Mark Zuckerberg, content produced and shared by “friends and family” will be more visible in all Facebook timelines. But a news feed dominated by friends’ posts could arguably exacerbate two negative social media aspects, previously expressed through the notions of the filter bubble and the echo chamber. Online social networks developed on social media platforms are strongly based on homophily (Barberà, 2014; Aiello et al., 2012), meaning that users connect with others who share similar interests, values, political views, etc. This typical behaviour is also found in offline social networks (McPherson, Smith-Lovin, Cook, 2001), and shows its most problematic characteristics when it comes to information diffusion.

On the one hand, this change will strengthen the filter bubble in which we are all involved. In fact, filter bubbles (Pariser, 2011) are the result of users’ activities on the web: social media algorithms continuously learn from every user’s clicks and likes2. On the other hand, more homophily in social media, due to the prevalence of “friends and family” content, could easily sustain the echo chamber effect. This phenomenon preceded social media platforms, for like-minded people love to talk to each other, reinforcing their opinions and biases. In social media, however, it is easier to avoid contrasting points of view, values, or interests as a consequence of the self-selection of “friends”, pages, and groups. Indeed, as research has highlighted, users tend to promote their favourite narratives and to form polarised groups on Facebook (Quattrociocchi, Scala, Sunstein, 2016; Bakshy, Messing, Adamic, 2015), even though this is not a clear and deterministic process (Barberà et al., 2015).

Based on these last considerations, another outcome of the news feed changes will be a growth in the visibility of friends’ opinions and points of view. This will most probably result in a more polarised information flow in users’ news feeds, and in a limited number of different points of view and of professional (or semi-professional) content. In practice, this means that for a contested news topic, such as whether glyphosate causes cancer, we have to take into account that information sources will be more socially driven; the chance of reading different points of view and professional news coverage will be smaller than before.
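The echo chamber dynamic can also be sketched with a toy bounded-confidence model, a common illustration in opinion-dynamics research rather than a model of Facebook itself: agents adjust their opinion only after meeting someone whose view is already close to their own, and the population settles into separate like-minded clusters. All function names and parameters below are illustrative assumptions:

```python
import random

# Toy bounded-confidence (Deffuant-style) opinion model: agents influence
# each other only when their opinions are already close, mimicking the
# homophily-driven echo chamber described above.
# (Illustrative sketch only; not a model of Facebook's feed.)

def simulate_opinions(n_agents=100, tolerance=0.2, step=0.5,
                      interactions=20_000, seed=1):
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]  # opinions in [0, 1]
    for _ in range(interactions):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if i != j and abs(opinions[i] - opinions[j]) < tolerance:
            # like-minded agents move towards each other; others are ignored
            shift = step * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

if __name__ == "__main__":
    final = simulate_opinions()
    # rough cluster count: group sorted opinions separated by >= tolerance
    centres = []
    for o in sorted(final):
        if not centres or o - centres[-1] >= 0.2:
            centres.append(o)
    print(len(centres), "opinion cluster(s)")
```

The key design choice is the `tolerance` threshold: with a narrow tolerance, agents never hear views far from their own, so initially diverse opinions collapse into a few mutually isolated clusters rather than a shared consensus.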

3. Negative impact on users’ mood

The news feed changes will probably influence the mood of billions of people in an inscrutable way. One could say that a news feed more populated by friends’ content would have a negative impact on happiness. According to Mark Zuckerberg, “the research shows that when we use social media to connect with people we care about, it can be good for our well-being”. In fact, an experiment conducted on users’ timelines (Kramer, Guillory, Hancock, 2013) showed that content in the timeline does indeed influence users’ mood. As many researchers have shown, personal feelings (happiness, depression, etc.) flow through offline social networks (Fowler, Christakis, 2008), and their representation in online environments seems to follow similar diffusion patterns. In other words: moods spread contagiously online. By extension, recent scholarly and non-scholarly work shows that scrolling through your Facebook feed can have a negative impact on well-being (Shakya, Christakis, 2017)3. Lastly, it has been demonstrated that the constant bombardment of everyone’s news biases the attempt to present the best representation of the self, and this seems to have a negative impact on happiness.

Questions to ask

Through these hypotheses, I have tried to show some real-life aspects that might be affected by the important changes to Facebook’s algorithms. As Facebook has stated, there are around 2 billion monthly active users on its platform.

These considerations evoke two questions:

  1. Can these changes be made by a private company without any form of public discussion?
  2. Is it our democratic right to scrutinise algorithms as organisers of public space?

Further information on how Facebook’s algorithms work can be found here: an interesting article by Share Lab that tries to shed some light on what is behind the platform.

 

References

Aiello, Luca Maria, Barrat, Alain, Schifanella, Rossano, Cattuto, Ciro, Markines, Benjamin, Menczer, Filippo. 2012. Friendship prediction and homophily in social media. ACM Trans. Web 6, 2, Article 9, 33 pages.

Bakshy, Eytan, Messing, Solomon, Adamic, Lada A. 2015. Exposure to ideologically diverse news and opinion on Facebook, in Science, 5 June 2015: Vol. 348, Issue 6239, pp. 1130–1132.

Pariser, Eli, 2012, The Filter Bubble: What The Internet Is Hiding From You, Penguin: London.

Quattrociocchi, Walter, Scala, Antonio, Sunstein, Cass R. 2016. Echo Chambers on Facebook. Available at SSRN: https://ssrn.com/abstract=2795110.

Shakya, Holly B., Christakis, Nicholas A. 2017. Association of Facebook Use With Compromised Well-Being: A Longitudinal Study in American Journal of Epidemiology, 185:3, pp. 203–211.

Rogers, Richard, 2015. Digital Methods for Web Research, in Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource (ed. Scott, Roberts; Buchmann, Marlis C.; Kosslyn Stephan), Wiley & Sons: New York

 

  1. For example, this is exactly what happened to the blog LittleThings, which had to shut down a month after the news feed change due to the drop in web traffic.
  2. This is already happening, as an Italian experiment on Facebook has partially shown during the last Italian election (link, unfortunately only in Italian). According to this experiment, the Facebook news feed shows different kinds of content and media (photos, videos, web links) based on the likes, comments and shares of each user. Indeed, according to Facebook’s statements, proposed content will be based more on each user’s (algorithmically predicted) intention to interact, fostering the visibility of tailored content.
  3. For example «Liking others’ content and clicking links posted by friends were consistently related to compromised well-being, whereas the number of status updates was related to reports of diminished mental health» (Shakya, Christakis, 2017, p. 210).

 

On the author: Antonio is a PhD candidate in Political Science at the University of Pisa. His research focuses on political leaders’ populism in social media. His approach follows the Digital Methods for Web Research recommendations (Rogers, 2015), and he is particularly interested in social media algorithms and their effects.

Luncheon seminar with Angela Daly (March 21, 1 pm)

On Tuesday, March 21st, DATACTIVE will host an informal luncheon seminar with socio-legal scholar and activist Angela Daly (Queensland University of Technology & Tilburg Institute for Law, Technology and Society). You are welcome to join! Angela will give a presentation titled ‘Reflections on socio-legal studies (and activism) of data’. For more info, you can find Angela’s paper ‘Data and Fundamental Rights’ (2017) here, and visit Angela’s website: https://angeladaly.com/

Bio

I am a socio-legal scholar of technology with interest in the Internet, 3D Printing and renewable energy. I am Vice Chancellor’s Research Fellow in Queensland University of Technology’s Faculty of Law, and a research associate at the Tilburg Institute for Law, Technology and Society (TILT) in the Netherlands. My books, Socio-Legal Aspects of the 3D Printing Revolution, published by Palgrave Macmillan, and Private Power, Online Information Flows and EU Law: Mind the Gap, published by Hart, are out now!