Author: Stefania

Stefania discusses data, citizenship and democracy in Lisbon, Bologna & Fribourg

On April 12, Stefania will give a talk on the politics of code and data at the ISCTE – Instituto Universitário de Lisboa, in Lisbon, Portugal.

On April 23, she will be in Bologna, Italy, at the School of Advanced International Studies of Johns Hopkins University. She will present her thoughts on ‘Citizenship Re-invented: The Evolution of Politics in the Datafied Society’.

Finally, on April 30 Stefania will lecture at the University of Fribourg, in Switzerland, at the invitation of Prof. Regula Haenggli. The lecture is entitled ‘Digitalization as a challenge to democracy: Possibilities of self-organization, emancipation, and autonomy’.

Para exercer plenamente a cidadania, é preciso conhecer os filtros virtuais (Época Negócios)

Stefania was commissioned to write an article by the Brazilian business magazine Época Negócios. In short, she argues that “estar ciente dos elementos que moldam profundamente nossos universos de informação é um passo fundamental para deixarmos de ser prisioneiros da internet” (“being aware of the elements that profoundly shape our information universes is a fundamental step towards ceasing to be prisoners of the internet”). Continue reading the article in Portuguese online. Here you can read the original in English.

Why personalization algorithms are ultimately bad for you (and what to do about it)

Stefania Milan

I like bicycles. I often search online for bike accessories, clothing, and bike races. As a result, the webpages I visit as well as my Facebook wall often feature ads related to biking. The same goes for my political preferences, my last search for the cheapest flight, or the next holiday destination. This information is (usually) relevant to me. Sometimes I click on the banner; mostly, I ignore it. In most cases I hardly notice it, but I process and “absorb” it as part of “my” online reality. This unsolicited yet relevant content contributes to making me feel “at home” in my wanderings around the web. I feel amongst my peers.

Behind the efforts to carefully target web content to our preferences are personalization algorithms. Personalization algorithms are at the core of social media platforms, dating apps, and generally of most of the websites we visit, including news sites. They make us see the world as we want to see it. By forging a specific reality for each individual, they silently and subtly shape customized “information diets”.

Our life, both online and offline, is increasingly dependent on algorithms. They shape our way of life, helping us find a ride on Uber or hip, fast food delivery on Foodora. They might help us find a job (or lose one), and locate a partner for the night or for life on Tinder. They mediate our news consumption and the delivery of state services. But what are they, and how do they do their magic? An algorithm can be seen as a recipe for baking an apple tart: just as grandma’s recipe tells us, step by step, what to do to make it right, in computing an algorithm tells the machine what to do with data, namely how to calculate or process it, and how to make sense of it and act upon it. As forms of automated reasoning, algorithms are usually written by humans; however, they operate in the realm of artificial intelligence: with the ability to train themselves over time, they might eventually take on a life of their own, so to speak.
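
To make the recipe analogy concrete, here is a deliberately trivial sketch in Python; the function name and the click data are invented for illustration and do not describe any real platform’s code:

```python
# A deliberately trivial "recipe": step-by-step instructions telling the
# machine what to do with data. All names and data here are invented.
from collections import Counter

def recommend_topic(click_history):
    """Pick the topic a user clicked most often."""
    counts = Counter(click_history)       # step 1: tally the clicks
    topic, _ = counts.most_common(1)[0]   # step 2: take the most frequent
    return topic                          # step 3: act on the result

clicks = ["cycling", "politics", "cycling", "travel", "cycling"]
print(recommend_topic(clicks))  # cycling
```

Real personalization systems chain many such steps together and, as noted above, may retrain themselves on fresh data over time.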

The central role played by algorithms in our life should be of concern, especially if we conceive of the digital as complementary to our offline self. Today, our social dimension is simultaneously embedded in and (re)produced by technical settings. But algorithms, proprietary and opaque, are invisible to end users: their outcome is visible (e.g., the manipulated content that shows up on one’s customized interface), but it bears no indication of having been manipulated, because algorithms leave no trace and “exist” only when operational. Nevertheless, they do create rules for social interaction, and these rules indirectly shape the way we see, understand and interact with the world around us. And far from being neutral, they are deeply political in nature, designed by humans with certain priorities and agendas.

While there are many types of algorithms, what affects us most today are probably personalization algorithms. They mediate our web experience, easing our choices by giving us information which is in tune with our clicking habits—and thus, supposedly, preferences.

They make sure the information we are fed is relevant to us, selecting it on the basis of our prior search history, social graph, gender and location, and generally speaking all the information we directly or unwittingly make available online. But because they are invisible to the eyes of users, most of us are largely unaware this personalization is even happening. We believe we see “the real world”, yet it is just one of many possible realities. This contributes to enveloping us in what US internet activist and entrepreneur Eli Pariser called the “filter bubble”—that is to say, the intellectual isolation caused by algorithms constantly guessing what we might or might not like, based on the ‘image’ they have of us. In other words, personalization algorithms might eventually reduce our ability to make informed choices, as the options we are presented with and exposed to are limited and repetitive.
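
A toy model can illustrate how such filtering narrows what we see; the items, tags and interest profile below are all invented, and real platforms use far more elaborate signals:

```python
# Minimal sketch of a filter bubble: items are ranked by overlap with the
# user's inferred interests, and only the top results are shown.
def personalize(items, interests, top_n=2):
    """Rank items by how many inferred interests they match; keep top_n."""
    ranked = sorted(items, key=lambda item: -len(item["tags"] & interests))
    return [item["title"] for item in ranked[:top_n]]

feed = [
    {"title": "Bike race recap", "tags": {"cycling", "sport"}},
    {"title": "Budget flights",  "tags": {"travel"}},
    {"title": "Opposing op-ed",  "tags": {"politics", "debate"}},
]

# A cycling-and-travel profile never sees the opposing op-ed.
print(personalize(feed, {"cycling", "travel"}))  # ['Bike race recap', 'Budget flights']
```

The op-ed is not deleted; it simply never ranks high enough to be shown, which is exactly why the filtering goes unnoticed.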

Why should we care, if all of this eventually is convenient and makes our busy life easier and more pleasant?

First of all, this is ultimately surveillance, be it corporate or institutional. Data is constantly collected about us and our preferences, and it ends up “standing in” for the individual, who is made to disappear in favour of a representation that can be effortlessly classified and manipulated. “When you stare into the Internet, the Internet stares back into you”, once tweeted digital rights advocate @Cattekwaad. The web “stares back” by tracking our behaviours and preferences, and profiling each of us into categories ready for classification and targeted marketing. We might think of the Panopticon, a circular building designed in the late 18th century by the philosopher Jeremy Bentham as “a new mode of obtaining power of mind over mind” and intended to serve as a prison. In this special penal institution, a single guard would effortlessly be able to observe all inmates without them being aware of the condition of permanent surveillance they were subjected to.

But there is a fundamental difference between the idea of the Panopticon and today’s surveillance ecosystem. The jailbirds of the internet age are not only aware of the constant scrutiny they are exposed to; they actively and enthusiastically participate in the generation of data, prompted by the participation imperative of social media platforms. In this respect, as the UK sociologist Roy Boyne explained, the data collection machines of personalization algorithms can be seen as post-Panopticon structures, whereby the model rooted in coercion has been replaced by mechanisms of seduction in the age of big data. The first victim of personalization algorithms is our privacy, as we seem keen to sacrifice freedom (including the freedom to be exposed to various opinions and the freedom from the attention of others) on the altar of today’s aggressive personalized marketing, in exchange for convenience and functionality.

The second victim of personalization algorithms is diversity, of both opinions and preferences, and the third and ultimate casualty is democracy. While this might sound like an exaggerated claim, personalization algorithms dramatically—and especially, silently—reduce our exposure to different ideas and attitudes, helping us reinforce our own and allowing us to disregard any others as “non-existent”. In other words, the “filter bubble” created by personalization algorithms isolates us in our own comfort zone, preventing us from accessing and evaluating the viewpoints of others.

The hypothesis of the existence of a filter bubble has been extensively tested. On the occasion of the recent elections in Argentina, last October, Italian hacker Claudio Agosti, in collaboration with the World Wide Web Foundation, conducted research using facebook.tracking.exposed, a software tool intended to “increase transparency behind personalization algorithms, so that people can have more effective control of their online Facebook experience and more awareness of the information to which they are exposed.”

The team ran a controlled experiment with nine profiles created ad hoc, a sort of “lab experiment” in which profiles were artificially polarized (e.g., keeping some variables constant, each profile “liked” different items). Not only did the data confirm the existence of a filter bubble; it showed a dangerous reinforcement effect which Agosti termed “algorithm extremism”.

What can we do about all this? This question has two answers. The first is easy but uncomfortable. The second is a strategy for the long run and calls for an active role.

Let’s start with the easy one. We ultimately retain a certain degree of human (and democratic) agency: at any given moment, we can choose to opt out. To be sure, erasing our Facebook account doesn’t do the trick of protecting our long-eroded privacy: the company has the right to retain our data, as per the Terms of Service, the long, convoluted legal document—a contract, that is—we all agree to but rarely read. With the “exit” strategy we lose contacts, friendships and joyful exchanges, and we are no longer able to sneak into the lives of others, but we gain in privacy and, perhaps, reclaim our ability to think autonomously. I bet not many of you will do this after reading this article—I haven’t myself found the courage to disengage entirely from my leisurely existence on social media platforms.

But there is good news. As the social becomes increasingly entrenched in its algorithmic fabric, there is a second option, a sort of survival strategy for the long run. We can learn to live with and deal with algorithms. We can familiarize ourselves with their presence, engaging in a self-reflexive exercise that questions what they show us in any given interface and why. While understandably not all of us might be inclined to learn the ropes of programming, “knowing” the algorithms that so much affect us is a fundamental step towards being able to fully exercise our citizenship in the age of big data. “Knowing” here means primarily becoming acquainted with their occurrence and function, and questioning the fact that being turned into a pile of data is almost an accepted fact of life these days. Because being able to think with one’s own head today means also questioning the algorithms that so much shape our information worlds.


[blog] Cloud communities and the materiality of the digital (GLOBALCIT project, EUI)


This invited blog post originally appeared in the forum ‘Cloud Communities: The Dawn of Global Citizenship?’ of the GLOBALCIT project (European University Institute). It is part of an interesting multidisciplinary conversation accessible from the GLOBALCIT website. I wish to thank Rainer Bauböck and Liav Orgad for the invitation to contribute to the debate.

Cloud communities and the materiality of the digital

By Stefania Milan (University of Amsterdam)

As a digital sociologist, I have always found ‘classical’ political scientists and lawyers a tad too reluctant to embrace the idea that digital technology is a game changer in so many respects. In the debate spurred by Liav Orgad’s provocative thoughts on blockchain-enabled cloud communities, I am particularly fascinated by the tension between techno-utopianism on the one hand (above all, Orgad and Primavera De Filippi), and socio-legal realism on the other (e.g., Rainer Bauböck, Michael Blake, Lea Ypi, Jelena Dzankic, Dimitry Kochenov). I find myself somewhere in the middle. In what follows, I take a sociological perspective to explain why there is something profoundly interesting in the notion of cloud communities, why, however, little of it is really new, and why the obstacles ahead are bigger than we might like to think. The point of departure for my considerations is a number of experiences in the realm of transnational social movements and governance: what can we learn from existing experiments that might help us contextualize and rethink cloud communities?

Three problems with Orgad’s argument

To start with, while I sympathise with Orgad’s provocative claims, I cannot but notice that what he deems new in cloud communities—namely the global dimension of political membership and its networked nature—is indeed rather old. Since the 1990s, transnational social movements for global justice have offered non-territorial forms of political membership—not unlike those described as cloud communities. Similar to cloud communities, these movements were the manifestation of political communities based on consent, gathered around shared interests and only minimally rooted in physical territories corresponding to nation states (see, e.g., Tarrow, 2005). In the fall of 2011 I observed with earnest interest the emergence of yet another global wave of contention: the so-called Occupy mobilisation. As a sociologist of the web, I set off in search of a good metaphor to capture the evolution of organised collective action in the age of social media, and the obvious candidate was… the cloud. In a series of articles (see, for example, here and here) and book chapters (e.g., here and here), I developed my theory of ‘cloud protesting’, intended to capture how the algorithmic environment of social media alters the dynamics of organized collective action. In light of my empirical work, I agree with Bauböck, who acknowledges that cloud communities might have something to do with the “expansion of civil society, of international organizations, or of traditional territorial polities into cyberspace”. He also points out how, sadly, people can express their political views – and, I would add, engage in disruptive actions, as happens at some fringes of the movement for global justice – only because “a secure territorial citizenship” protects their exercise of fundamental rights, such as freedom of expression and association. Hence the questions a sociologist might ask: do we really need the blockchain to enable the emergence of cloud communities?
If, as I argue, the existence of “international legal personas” is not a pre-requisite for the establishment of cloud communities, what would the creation of “international legal personas” add to the picture?[1]

Secondly, while I understand why a blockchain-enabled citizenship system would make life easier for the many who do not have access to a regular passport, I am wary of its “institutionalisation”, on account of the probable discrepancies between the ideas (and the mechanisms) associated with a Westphalian state and those of activists and radical technologists alike. On the one hand, citizens interested in “advanced” forms of political participation (e.g., governance and the making of law) might not necessarily be inclined to form a state-like entity. For example, many accounts of the so-called “movement for global justice” (McDonald, 2006; della Porta & Tarrow, 2005) show how “official” membership and affiliation is often not required, not expected and especially not considered desirable. Activism today is characterised by a dislike and distrust of the state, and a tendency to privilege flexible, multiple identities (e.g., Bennett & Segerberg, 2013; Juris, 2012; Milan, 2013). On the other hand, the “radical technologists” behind the blockchain project are animated by values—an imaginaire (Flichy, 2007)—deeply distinct from those of the state (see, e.g., Reijers & Coeckelbergh, 2018). While blockchain technology is enabled by a complex constellation of diverse actors, it is legitimate to ask whether it is possible to bend a technology built with an “underlying philosophy of distributed consensus, open source, transparency and community” and intended to “be highly disruptive” (Walport, 2015) to serve purposes similar to those of states.

Thirdly, Orgad’s argument falls short of a clear description of what the ‘cloud’ stands for in his notion of cloud communities. When thinking about ‘clouds’, as a metaphor and a technical term, we cannot but think of cloud computing, a “key force in the changing international political economy” (Mosco, 2014, p. 1) of our times, which entails a process of centralisation of software and hardware allowing users to reduce costs by sharing resources. The cloud metaphor, I argued elsewhere (Milan, 2015), is an apt one as it exposes a fundamental ambivalence of contemporary processes of “socio-legal decentralisation”. While claiming distance from the values and dynamics of the neoliberal state, a project of building blockchain-enabled communities still relies on commercially-owned infrastructure to function.

Precisely to reflect on this ambiguity, my most recent text on cloud protesting interrogates the materiality of the cloud. We have long lived in the illusion that the internet was a space free of geography. Yet, as IR scholar Ron Deibert argued, “physical geography is an essential component of cyberspace: Where technology is located is as important as what it is” (original italics). The Snowden revelations, to name just one example, have brought to the forefront the role of the national state in—openly or covertly—setting the rules of user interactions online. What’s more, we can no longer blame the state alone, but rather the “surveillant assemblage” of state and corporations (Murakami Wood, 2013). To me, the big absentee in this debate is the private sector and corporate capital. De Filippi briefly mentioned how the “new communities of kinship” are anchored in “a variety of online platforms”. However, what Orgad’s and partially also Bauböck’s contributions underscore is the extent to which intermediation by private actors stands in the way of creating a real alternative to the state—or at least the fulfilment of certain dreams of autonomy, best represented today by the fascination for blockchain technology. Bauböck rightly notes that “state and corporations… will find ways to instrumentalise or hijack cloud communities for their own purposes”. But there is more to it: the infrastructure we use to enable our interpersonal exchanges and, why not, the blockchain, are owned and controlled by private interests subject to national laws. They are not merely neutral pipes, as Dumbrava reminds us.

Self-governance in practice: A cautionary tale

To be sure, many experiments allow “individuals the option to raise their voice … in territorial communities to which they do not physically belong”, as beautifully put by Francesca Strumia. Internet governance is a case in point. Since the early days of the internet, cyberlibertarian ideals, enshrined for instance in the ‘Declaration of Independence of Cyberspace’ by the late John Perry Barlow, have attributed little to no role to governments—both in deciding the rules for the ‘new’ space and the citizenship of its users (read: the right to participate in the space and in the decision-making about the rules governing it). In those early flamboyant narratives, cyberspace was to be a space where users—but really engineers above all—would translate into practice their wildest dreams in matters of self-governance, self-determination and, to some extent, fairness. While cyberlibertarian views have been appropriated by conservative (anti-state) and progressive forces alike, some of their founding principles have spilled over into real governance mechanisms—above all the governance of standards and protocols by the Internet Engineering Task Force (IETF), and the management of the Domain Name System (DNS) by the Internet Corporation for Assigned Names and Numbers (ICANN).[2] Here I focus on the latter, where I have been active for about four years (2014-2017).

ICANN is organized in constituencies of stakeholders, including contracted parties (the ‘middlemen’, that is to say the registries and registrars that, on a regional basis, allocate and manage the names and numbers on behalf of ICANN, and whose relationship with ICANN is regulated by contract), non-contracted parties (corporations doing business on the DNS, e.g. content or infrastructure providers) and non-commercial internet users (read: us). ICANN’s proceedings are fully recorded and accessible from its website; its public meetings, held thrice a year and rotating around the globe, are open to everyone who wants to walk in. Governments are represented in a sort of United Nations-style entity called the Governmental Advisory Committee. While corporate interests are well represented by an array of professional lobbyists, the Non-Commercial Stakeholder Group (NCSG), which stands in for civil society,[3] is a mix of advocates of various extractions, expertise and nationalities: internet governance academics, nongovernmental organisations promoting freedom of expression, and independent individuals who take an interest in the functioning of the logical layer of the internet.

The 2016 transition of the stewardship over the DNS from the US government (the Department of Commerce’s National Telecommunications and Information Administration) to the “global multistakeholder community” realized a dream unique of its kind, straight out of the cyberlibertarian vision of the early days: the technical oversight of the internet[4] is in the hands of the people who make and use it, and the (advisory) role of the state is marginal. Accountability now rests solely within the community behind ICANN, which envisioned (and is still implementing) a complex system of checks and balances to allow the various stakeholder voices to be fairly represented. No other critical infrastructure is regulated by its own users. To build on Orgad’s reasoning, the community around ICANN is a cloud community, which operates by voluntary association and consensus,[5] and is entitled to produce “governance and the creation of law”.[6]

But the system is far from perfect. Let’s look at how so-called civil society is represented, focusing on one such entity, the NCSG. Firstly, given that everyone can participate, the variety of views represented is enormous, and often hinders the ability of the constituency to be effective in policy negotiations. Yet the size of the group is relatively small: at the time of writing, the Non-Commercial User Constituency (the larger of the two that form the NCSG) comprises “538 members from 161 countries, including 118 noncommercial organizations and 420 individuals”, making it the largest constituency within ICANN: this is nothing when compared to the global internet population it serves, confirming, as Dzankic argues, that “direct democracy is not necessarily conducive to broad participation in decision-making”. Secondly, ICANN policy-making is highly technical and specialised; the learning curve is dramatically steep. Thirdly, to be effective, the amount of time a civil society representative should spend on ICANN is largely incompatible with a regular daily job; civil society cannot compete with corporate lobbyists. Fourthly, with ICANN meetings rotating across the globe, one needs to be on the road for at least a month per year, at considerable personal and financial cost.[7] In sum, while participation is in principle open to everyone, informed participation has much higher access barriers, which have to do with expertise, time, and financial resources (see, e.g., Milan & Hintz, 2013).

As a result, we observe a number of dangerous distortions of political representation. For example, when only the highly motivated participate, the views and “imaginaries” represented are often at the opposite ends of the spectrum (cf. Milan, 2014). Only the most involved really partake in decision-making, in a mechanism well known in sociology: the “tyranny of structurelessness” (Freeman, 1972), which is typical of participatory, consensus-based organising. The extreme personalisation of politics that we observe within civil society at ICANN—a small group of long-term advocates with high personal stakes—also yields another, similar mechanism, known as “the tyranny of emotions” (Polletta, 2002), by which the most invested, independently of the suitability of their curricula vitae, end up assuming informal leadership roles—and, as the case of ICANN shows, even in the presence of formal and carefully weighted governance structures. Decision-making is thus based on a sort of “microconsensus” within small decision-making cliques (Gastil, 1993).[8] To make things worse, ICANN is increasingly making exceptions to its own, community-established rules, largely under the pressure of corporations as well as law enforcement: for example, the corporation has recently been accused of bypassing consensus policy-making through voluntary agreements and private contracting.

Why not (yet?): On new divides and bad players

In conclusion, while I value the possibilities that blockchain technology opens up for experimentation as much as Primavera De Filippi does, I do not believe it will really solve our problems in the short to medium term. Rather, as is always the case with technology because of its inherent political nature (cf. Bijker, Hughes, & Pinch, 2012), new conflicts will emerge—and they will concern both its technical features and its governance.

Earlier contributors to this debate have raised important concerns which are worth listening to. Besides Bauböck’s concerns over the perils for democracy represented by a consensus-based, self-governed model, endorsed also by Blake, I want to echo Lea Ypi’s reminder of the enormous potential for exclusion embedded in technologies, as digital skills (but also income) are not equally distributed across the globe. For the time being, a citizenship model based on blockchain technology would be for the elites only, and would contribute to creating new divides and amplifying existing ones. The first fundamental step towards the cloud communities envisioned by Orgad would thus see the state stepping in (once again) and taking charge of creating appropriate data and algorithmic literacy programmes whose scope is out of reach for corporations and organised civil society alike.

There is more to it, however. The costs to our already fragile ecosystem of blockchain technology are rising along with its popularity. These infrastructures are energy-intensive: writing about the cryptocurrency Bitcoin, tech magazine Motherboard estimated that each transaction consumes 215 kilowatt-hours of electricity—the equivalent of the weekly consumption of an American household. A world built on blockchain would have a vast environmental footprint (see also Mosco, 2014). Once again, the state might play a role in imposing adequate regulation mindful of the environmental costs of such programs.
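
The comparison can be sanity-checked with back-of-the-envelope arithmetic; the household figure below (roughly 10,800 kWh per year for an average US home) is an assumed round number for illustration, not taken from the article:

```python
# Rough check: is 215 kWh per transaction comparable to a week of household use?
kwh_per_tx = 215
household_kwh_per_year = 10_800              # assumed average US household
household_kwh_per_week = household_kwh_per_year / 52
print(kwh_per_tx / household_kwh_per_week)   # ≈ 1.04, i.e. about one week's use
```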

But I do not intend to glorify the role of the state. On the contrary, I believe we should also watch out for any attempts by the state to curb innovation. The relatively brief history of digital technology, and even more that of the internet, is awash with examples of late but extremely damaging state interventions. As soon as a given technology performs roles or produces information that are of interest to the state (e.g., interpersonal communications), the state wants to jump in, and often does so in pretty clumsy ways. The recent surveillance scandals have abundantly shown how state powers firmly inhabit the internet (cf., Deibert, 2009; Deibert, Palfrey, Rohozinski, & Zittrain, 2010; Lyon, 2015)—and, as the Cambridge Analytica case reminds us, so do corporate interests. Moreover, the two are, more often than not, dangerously aligned.

I do not intend, with my cautionary tales, to hinder any imaginative effort to explore the possibilities offered by blockchain to rethink how we understand and practice citizenship today. The case of Estonia shows that different models based on alternative infrastructure are possible, at least on a small scale and in the presence of a committed state. As scholars we ought to explore those possibilities. Much work is needed, however, before we can proclaim the blockchain revolution.

References

Bennett, L. W., & Segerberg, A. (2013). The Logic of Connective Action: Digital Media and the Personalization of Contentious Politics. Cambridge, UK: Cambridge University Press.

Bijker, W. E., Hughes, T. P., & Pinch, T. (Eds.). (2012). The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, MA and London, England: MIT Press.

Deibert, R. J. (2009). The geopolitics of internet control: censorship, sovereignty, and cyberspace. In A. Chadwick & P. N. Howard (Eds.), The Routledge Handbook of Internet Politics (pp. 323–336). London: Routledge.

Deibert, R. J., Palfrey, J. G., Rohozinski, R., & Zittrain, J. (Eds.). (2010). Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace. Cambridge, MA: MIT Press.

della Porta, D., & Tarrow, S. (Eds.). (2005). Transnational Protest and Global Activism. Lanham, MD: Rowman & Littlefield.

Flichy, P. (2007). The internet imaginaire. Cambridge, Mass.: MIT Press.

Freeman, J. (1972). The Tyranny of Structurelessness.

Gastil, J. (1993). Democracy in Small Groups. Participation, Decision Making & Communication. Philadelphia, PA and Gabriola Island, BC: New Society Publishers.

Juris, J. S. (2012). Reflections on #Occupy Everywhere: Social Media, Public Space, and Emerging Logics of Aggregation. American Ethnologist, 39(2), 259–279.

Lyon, D. (2015). Surveillance After Snowden. Cambridge and Malden, MA: Polity Press.

McDonald, K. (2006). Global Movements: Action and Culture. Malden, MA and Oxford: Blackwell.

Milan, S. (2013). WikiLeaks, Anonymous, and the exercise of individuality: Protesting in the cloud. In B. Brevini, A. Hintz, & P. McCurdy (Eds.), Beyond WikiLeaks: Implications for the Future of Communications, Journalism and Society (pp. 191–208). Basingstoke, UK: Palgrave Macmillan.

Milan, S. (2015). When Algorithms Shape Collective Action: Social Media and the Dynamics of Cloud Protesting. Social Media + Society, 1(1).

Milan, S., & Hintz, A. (2013). Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena? Policy & Internet, 5(1), 7–26.

Milan, S., & ten Oever, N. (2017). Coding and encoding rights in internet infrastructure. Internet Policy Review, 6(1).

Mosco, V. (2014). To the Cloud: Big Data in a Turbulent World. New York: Paradigm Publishers.

Murakami Wood, D. (2013). What Is Global Surveillance?: Towards a Relational Political Economy of the Global Surveillant Assemblage. Geoforum, 49, 317–326.

Polletta, F. (2002). Freedom Is an Endless Meeting: Democracy in American Social Movements. Chicago: University of Chicago Press.

Reijers, W., & Coeckelbergh, M. (2018). The Blockchain as a Narrative Technology: Investigating the Social Ontology and Normative Configurations of Cryptocurrencies. Philosophy & Technology, 31(1), 103–130.

Tarrow, S. (2005). The New Transnational Activism. New York: Cambridge University Press.

Walport, M. (2015). Distributed Ledger Technology: Beyond blockchain. London: UK Government Office for Science.

Notes:

[1] I am aware that there is a fundamental drawback in social movements when compared to cloud communities: unlike the latter, the former are not rights providers. However, these are the questions one could ask taking a sociological perspective.

[2] The system of unique identifiers of the DNS comprises the so-called “names”, standing in for domain names (e.g., www.eui.eu), and “numbers”, or Internet Protocol (IP) addresses (e.g., the “machine version” of the domain name that a router for example can understand). The DNS can be seen as a sort of “phone book” of the internet.

[3] Technically, of the DNS, which is only a portion of what we call “the internet”, although the most widely used one.

[4] Civil society representation in ICANN is more complex than described here. The NCSG is composed of two (litigious) constituencies, namely the Non-Commercial Users Constituency (NCUC) and the Not-for-Profit Operational Concerns (NPOC). In addition, “non-organised” internet users can elect their representatives to the At-Large Advisory Committee (ALAC), organised on a regional basis. The NCSG, however, is the only one that directly contributes to policy-making.

[5] ICANN is both a nonprofit corporation registered under Californian law, and a community of volunteers who set the rules for the management of the logical layer of the internet by consensus. See also the ICANN Bylaws (last updated in August 2017).

[6] This should at least in part address Post’s doubts about the ability of a political community to govern those outside of its jurisdiction. One might argue that internet users are, perhaps unwillingly or simply unconsciously, within the “jurisdiction” of ICANN. I do believe, however, that the case of ICANN is an interesting one because it sits between the two “definitions” of political communities.

[7] ICANN allocates considerable but not sufficient resources to support civil society participation in its policy-making. These include travel bursaries, accommodation costs, and fellowship programs for the induction of newcomers.

[8] Although a quantitative analysis of the stickiness of participation in relation to discursive change reveals a more nuanced picture (see, for example, Milan & ten Oever, 2017).

BigBang hackathon in London, March 17-18

This weekend the DATACTIVE team will be joining the IETF101 hackathon to work on quantitative mailing-list analysis software. The Internet Engineering Task Force (IETF) is the oldest and most important Internet standard-setting body. The discussions and decisions of the IETF have fundamentally shaped the Internet. All IETF mailing lists and output documents are publicly available. They represent a true treasure for digital sociologists seeking to understand how the Internet infrastructure and architecture developed over time. To facilitate this analysis, DATACTIVE has been contributing to the development of BigBang, a Python-based tool for automated quantitative mailing-list analysis. Armed with almost 40 gigabytes’ worth of data in the form of plain text files, we are eager to boldly discover what no one has discovered before. By the way, we still have some (open) issues, feel free to contribute on Github 🙂
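To give a flavour of what quantitative mailing-list analysis looks like, here is a minimal sketch in plain Python. This is not BigBang’s own API: it simply uses the standard library’s mailbox module to count messages per sender in an mbox-format archive, the kind of plain-text file IETF lists are distributed as (the file name is a hypothetical example):

```python
import mailbox
from collections import Counter
from email.utils import parseaddr

def senders_by_volume(mbox_path):
    """Count messages per sender address in a plain-text mbox archive."""
    counts = Counter()
    for message in mailbox.mbox(mbox_path):
        # Extract the bare address from headers like "Alice <alice@example.org>"
        _, address = parseaddr(message.get("From", ""))
        if address:
            counts[address.lower()] += 1
    return counts

# Hypothetical usage: the ten most active participants on a list
# for address, n in senders_by_volume("ietf-list.mbox").most_common(10):
#     print(n, address)
```

From such per-sender counts one can already study participation patterns over time, which is one of the questions tools like BigBang are built to answer at scale.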

fresh out of the press: Political Agency, Digital Traces, and Bottom-Up Data Practices

The article ‘Political Agency, Digital Traces, and Bottom-Up Data Practices’ by Stefania Milan has been published in the International Journal of Communication, in a special section on ‘Digital Traces in Context’, edited by Andreas Hepp, Andreas Breiter, and Thomas N. Friemel. It is open access 🙂

Abstract. This theoretical article explores the bottom-up data practices enacted by individuals and groups in the context of organized collective action. Conversing with critical media theory, the sociology of social movements, and platform studies, it asks how activists largely reliant on social media for their activities can leverage datafication and mobilize social media data in their tactics and narratives. Using the notion of digital traces as a heuristic tool to understand the dynamics between platforms and their users, the article reflects on the concurrent materiality and discursiveness of digital traces and analyzes the evolution of political agency vis-à-vis the datafied self. It contributes to our understanding of “digital traces in context” by foregrounding human agency and the meaning-making activities of individuals and groups. Focusing on the possibilities opened up by digital traces, it considers how activists make sense of the ways in which social media structure their interactions. It shows how digital traces trigger a quest for visibility that is unprecedented in the social movement realm, and how they can function as particular “agency machines.”

Niels and Stefania at the Internet Governance Forum 2017 in Geneva

Niels and Stefania will be in Geneva in mid-December for the 2017 edition of the Internet Governance Forum (IGF), taking place at the United Nations Office at Geneva on December 18-21. The IGF is a global multistakeholder forum that promotes discussions and dialogue about public policy issues related to the Internet. It was convened in 2006 by the United Nations Secretary-General. This year’s will be the IGF’s 12th edition.

Among other things, DATACTIVE will be featured in one of the main sessions, entitled ‘Local interventions, global impacts: How can international, multistakeholder cooperation address internet disruptions, encryption and data flows’, discussing the impacts that national policy initiatives may have on the global Internet environment and the jurisdictional issues still to be solved (December 18). Stefania will speak in the sub-theme of “data flows”. In addition, DATACTIVE is co-hosting, in collaboration with the Data Justice Lab at Cardiff University, a workshop entitled “Datafication and Social Justice: What challenges for Internet Governance?” (December 21).

Stefania keynotes at workshop on slow computing, Maynooth, Ireland, 14 December

Stefania will take part in ‘Slow computing: A workshop on resistance in the algorithmic age’, organised by Rob Kitchin and Alistair Fraser and hosted by the Programmable City project, the Social Sciences Institute and the Department of Geography of Maynooth University, Ireland. “In line with the parallel concepts of slow food (e.g. Miele & Murdoch 2002) or slow scholarship (Mountz et al 2015), ‘slow computing’ (Fraser 2017) is a provocation to resist”, reads the call for papers. Check out the program and the line-up. Stefania’s presentation is titled “Resist, subvert, accelerate. Towards an ethics of engagement in the age of the computational theocracy”.

Stefania keynotes at LAVITS in Santiago de Chile, November 29

On November 20-30 Stefania will be in Santiago de Chile for a number of talks. She will keynote at the 5th International Symposium of the Red Latinoamericana de Estudios en Vigilancia, Tecnología y Sociedad, organised by the Universidad de Chile with the non-governmental organization Datos Protegidos, whose theme this year is Vigilancia, Democracia y Privacidad en América Latina: vulnerabilidades y resistencias.

Stefania will also meet digital rights activists and participate in the following events: the workshop ‘Designing people by numbers’ with Celia Lury (Warwick University) at the Pontificia Universidad Catolica de Chile on November 21st; a ‘conversatorio’ with the Red Chilena Estudios Ciencia, Tecnología y Sociedad at the Universidad Diego Portales on November 23rd; and a ‘conversatorio’ with members of the Humanities Faculty, Campus Juan Gomez Millas, University of Chile, on November 27th.

Tropicalizing Surveillance: Implementing big data policing in São Paulo, Brazil

By Claudio Altenhain

In 2013, Geraldo Alckmin, governor of São Paulo, announced a major advancement in policing Brazil’s most populous and economically potent state: his administration had acquired a license to use the Domain Awareness System (DAS), a “smart” big data tool developed by Microsoft and deployed by the New York (NY) Police Department in order to fight crime and, most crucially, prevent further terrorist attacks in the United States’ financial capital. When first introducing the DAS, then NY Mayor Michael Bloomberg asserted that “[w]e’re not your mom and pop’s police department any more. We are in the next century, we are leading the pack” (26:50), thus associating state-of-the-art surveillance technology with an aura of modernity and a sense of local pride. Unsurprisingly, this kind of discourse was echoed when the program was presented to the Brazilian public during Alckmin’s re-election campaign. The corresponding advertisements were not short of grandiose promises: not only would the software enable the interconnection and integration of hitherto separate police databases; it would also incorporate São Paulo’s vast network of private CCTV systems and, most prominently, automatically detect cases of “suspicious” behavior, such as somebody trying to enter private property while wearing a motorcycle helmet. As the system “migrated” from one setting to another, the typology of threats it is supposed to fend off shifted accordingly: while in New York the DAS’s implementation was mostly justified by the persistent yet somewhat anonymous risk of terrorists attacking out of the blue, in São Paulo Detecta, as the system was rebranded, plugs into a well-established common knowledge of potentially “dangerous” subjects, places, and situations, typically involving the (mostly dark-skinned) marginal as the ideal perpetrator.


Significantly, while the DAS’s deployment soon provoked critique from civil rights organizations such as the ACLU, Detecta did not stir any similar controversy in São Paulo, in spite of the fact that, to the present day, Brazil has not adopted any comprehensive legislation on the protection of personal data. Instead, public debate soon came to revolve around the question of whether the system was fulfilling the promises with which it had been announced. Alexandre Padilha, the Workers’ Party’s candidate challenging Geraldo Alckmin, maintained that Detecta was a failure and did not substantially improve public security; symptomatically enough, his campaign video otherwise drew upon exactly the same imagery as his adversary’s, depicting US-American police technology as a mainstay of his anti-crime policy. Meanwhile, media reports indicated shady business practices in the software’s original acquisition. Comparing the respective controversies about the DAS and Detecta, it thus becomes apparent that they entail distinct notions of the “rogue state”: whereas in New York the DAS was criticized for its potential infringement of civil liberties, in São Paulo Detecta came to epitomize both incompetence and corruption, while privacy rights and abuses of police power were hardly broached.

Meanwhile, PRODESP, a state-owned agency developing IT solutions for governmental purposes, was charged with “translating” the software so that it would serve the specific needs of São Paulo’s police forces, a process oddly referred to as the system’s “tropicalization” by several of my interview partners, both within and beyond São Paulo’s police forces. Given that the government had acquired an “as is” version of the DAS, there was indeed a lot of ground to cover; and despite the accompanying support of Microsoft staff, initial results were mostly unsatisfactory, so that Detecta remained dramatically underused, as an inquiry conducted by the state’s board of audits soon found out. As I could verify during one of my field trips in late 2016, the version in use back then was indeed running at such a slow pace that it complicated rather than facilitated police investigations; besides, the few “intelligent” CCTV cameras that were up and running would constantly give false alarms, since they were unable to distinguish cars half-covered by traffic signs from individuals roaming the streets (the latter constituting a “suspicious” situation demanding review). However, apart from the apparent technical challenges of setting up a running network of heterogeneous databases and thousands of surveillance cameras, Detecta suffers from another major obstacle of a more institutional kind: in Brazil, police forces are divided into two different organizations, the military police (Polícia Militar, PM) and the civil police (Polícia Civil, PC). While the major task of the PC consists in criminal investigation and prosecution, the PM is charged with maintaining “public order”: crime prevention, street policing and accompanying public events, most notably.
Since the two units thus carry out emphatically distinct policing tasks, each is equipped with its own databases; and because their mutual relationship is, to say the least, an uneasy one, there is not much of an incentive to share information. As a consequence, and although facilitating data exchange was a crucial motivation for acquiring Detecta in the first place, the promise of an IT-driven policing revolution is still being thwarted by the prosaic reality of institutional quarrels and the mutual distrust that characterizes the coexistence of PC and PM.

While these drawbacks imply that the “real” Detecta is indeed a far cry from the omniscient, indeed even prescient, policing tool originally promised to the electorate, it would be hasty to squarely (dis)qualify the whole project as a failure. In fact, PRODESP has recently come up with a new, cloud-based version of Detecta (ironically enough, it scarcely resembles the product delivered by Microsoft any longer) which, as I could verify, seems to work a lot faster than its predecessor. The system no longer draws upon “intelligent” cameras supposed to automatically identify “suspicious” behavior, but it does incorporate an increasing number of license plate readers run by DETRAN, the traffic agency. Vehicles can therefore be automatically traced as well as linked to information stored in further databases, such as the criminal record of the owner or prior occurrences in which the car was involved. Although this may appear rather modest progress when measured against the flashy “pre-crime” campaigns disseminated by software giants such as IBM, various police officers I spoke to told me that the latest version did represent a palpable improvement in their daily routine. In this sense, Detecta’s “success” would have to be measured in terms of the incremental changes it triggers in both IT and institutional culture, a process taking place behind the screen of governmental as well as corporate propaganda put up to convince the general public. This is, unsurprisingly, the sober point of view shared by most IT experts as well as police insiders I have been talking to in the course of my research, and their perspective deserves attention, especially if we are to develop a critical understanding of the processes at work here.

There is, however, yet another aspect under which an uncritical focus upon Detecta’s “success” (or the relative lack thereof) tends to conceal a set of dynamics that certainly deserves attention, namely the extent to which the program both draws upon and fosters the emergence of public-private economies of (in)security in a city already heavily marked by stark socio-economic contrasts and the corresponding militarization and “citadelization” of urban space. While these tendencies are anything but new, it may be argued that they were further reinforced after João Dória, a party comrade of Geraldo Alckmin, was elected mayor of São Paulo in 2016. A businessman-turned-politician, Dória is both a hi-tech enthusiast and a fierce advocate of public-private partnerships; it was therefore only logical that he would wholeheartedly embrace Detecta and extend it through further projects such as Dronepol, a municipal police unit equipped with drones, and City Câmeras, a further initiative to establish a close-meshed public-private CCTV network. All of these projects rely upon the substantial involvement of non-state actors such as neighborhood boards, trade associations and the corporate sector, and despite official affirmations to the contrary, it is plain to see how they have been hijacked by private interest groups from their very inception. It is in this sense that “success” and “failure” turn into profoundly relative categories, unsuitable for orienting any kind of critical research, unless, that is, they are re-read against the peculiar and irreducibly local genealogies of policing the urban (or, more generally, of governing through (in)security). In any case, the board of audits’ affirmation that Detecta’s implementation “does not yet present effective results for public security” may sound somewhat naïve against this backdrop: emphatically public purposes might never have been at stake here anyway.
Revisiting the peculiar notion broached by some of my interview partners, and extending it beyond a purely “technical” concept, it might therefore indeed be argued that the program’s “tropicalization” is not least about its translation from one specific diagram of power into another, a process that may teach us a lot about their respective modes of becoming. Far from suggesting that São Paulo confronts us with a more “archaic” or less “modern” configuration than New York (or any other city of the “global north”, for that matter), this perspective might still sensitize us to the contingencies and path-dependencies at stake when an increasingly “global” model of securitizing the urban goes “local”.

Claudio Altenhain is a PhD candidate in the international Doctorate in Cultural and Global Criminology. He is interested in Latin America, science and technology studies, and the anthropology of policing and crime.

Stefania at the Conférence Erasme-Descartes 2017 “Big Data: toepassingen en uitdagingen”, November 10

Stefania will speak at the Conférence Erasme-Descartes 2017 dedicated to “Big Data/Mégadonnées : usages et enjeux”/“Big Data: toepassingen en uitdagingen”, in Amsterdam on November 10. She will join Eric Leandri (Qwant), Mélanie Peters (Rathenau Instituut) and André Vitalis (Centre d’Études sur la Citoyenneté, l’Informatisation et les Libertés) on a roundtable discussing “Big Data, kans of bedreiging voor de maatschappij?”.

The description of the event is below.

Since 2002, the Erasmus-Descartes conferences have contributed to dialogue and exchange between the Netherlands and France. They foster bilateral cooperation in new areas.

By choosing “Big Data” as the theme of the 15th Erasmus-Descartes conference, we build on all the topics addressed in previous editions. “Big Data” touches upon a wide range of scientific, technological, economic, industrial, social and societal issues. These raise complex questions, particularly regarding the preservation, relevance and use of data. All sectors are affected: the sharing economy, the life sciences, infrastructure maintenance, the energy transition and, for example, intelligent vehicles. Many French and Dutch industrial players are thus dealing with “Big Data”.