Category: show on blog page

[blog] Critical reflections on FAT* 2018: a historical idealist perspective

Author: Sebastian Benthall, Research Scientist at NYU Steinhardt and PhD Candidate UC Berkeley School of Information.

In February 2018, the inaugural FAT* conference was held in New York City:

The FAT* Conference 2018 is a two-day event that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems. This inaugural conference builds on the success of prior workshops like FAT/ML, FAT/Rec, DAT, Ethics in NLP, and others.

FAT stands for “Fairness, Accountability, Transparency”, and the asterisk, pronounced “star”, is a wildcard character indicating that the conference ranges more widely than the earlier workshops it succeeds, such as FAT/ML (ML meaning “machine learning”) and FAT/Rec (Rec meaning “recommender systems”). You might conclude from the amount of geekery in the title and history of the conference that FAT* is a computer science conference.

You would be half right. Other details reveal that the conference has a different, broader agenda. It was held at New York University’s Law School, and many of the committee chairs are law professors, not computer science professors. The first keynote speaker, Latanya Sweeney, argued that technology is the new policy as more and more decisions are delegated to automated systems. The responsibility of governance, it seems, is falling to the creators of artificial intelligence. The keynote speaker on the second day was Prof. Deborah Hellman, who provided a philosophical argument for why discrimination is morally wrong. This opened into a conversation about the relationship between random fate and justice with computer scientist Cynthia Dwork. The other speakers in the program in one way or another grappled with the problem of how to responsibly wield technological power over society.

It was a successful conference and it has great promise as a venue for future work. It has this promise because it has been set up to expand intellectually beyond the confines of the current state of discourse around accountability and automation. This post is about the tensions within FAT* that make it intellectually dynamic. FAT* reflects the conditions of a particular historical, cultural, and economic moment. The contention of this post is that the community involved in the conference has the opportunity to transcend that moment if it encounters its own contradictions head-on through praxis.

One significant tendency among the research at FAT* was the mathematization of ethics. Exemplified by Menon and Williamson’s “The cost of fairness in binary classification” (2018) (winner of a best paper award at the conference), many researchers come to FAT* to translate ethical injunctions, and the tradeoffs between them, into mathematical expressions. This striking intellectual endeavor sits at the center of a number of controversies between the humanities and sciences that have been going on for decades and continue today.

As has been long recognized in the foundational theory of computer science, computational algorithms are powerful because they are logically equivalent to the processes of mathematical proof. Algorithms, in the technical sense of the term, can be no more and no less powerful than mathematics itself. It has long been a concern that a world controlled by algorithms would be an amoral one; in his 1947 book Eclipse of Reason, Max Horkheimer argued that the increasing use of formal reason (which includes mathematics and computation) for pragmatic purposes would lead to a world dominated by industrial power that was indifferent to human moral considerations of what is right or good. Hannah Arendt, in The Human Condition (1959), wrote about the power of scientists who spoke in obscure mathematical language and were therefore beyond the scrutiny of democratic politics. Because mathematics is universal, it is unable to express political interests, which arise from people’s real, particular situations.

We live in a strikingly different time from the mid-20th century. Ethical concerns with the role of algorithms in society have been brought to trained computer scientists, and their natural and correct inclination has been to determine the mathematical form of the concern. Many of these scholars would sincerely like to design a better system.

Perhaps disappointingly, all the great discoveries in the foundations of computing are impossibility results: the Halting Problem, the No Free Lunch theorem, etc. And it is no different in the field of Fairness in Machine Learning. What computer scientists have discovered is that life isn’t, and can’t be, fair, because “fairness” has several different definitions (twenty-one at last count) that are incompatible with each other (Hardt et al., 2016; Kleinberg et al., 2016). Because there are inherent tradeoffs to different conceptions of fairness and any one definition will allocate outcomes differently for different kinds of people, the question of what fairness is has now been exposed as an inherently political question with no compelling scientific answer.
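To see the incompatibility in miniature, here is a toy sketch (my own illustration, not code from the papers cited) showing that when two groups have different base rates of the outcome, even a perfectly accurate classifier, which trivially equalizes true positive rates, cannot satisfy demographic parity (equal selection rates across groups):

```python
# Toy illustration of a fairness trade-off. Group A has an 80% base rate of
# positive outcomes, group B only 20%. A classifier that predicts every label
# perfectly equalizes true positive rates across groups, but necessarily
# selects the two groups at different rates, violating demographic parity.

def rates(y_true, y_pred):
    """Return (positive prediction rate, true positive rate)."""
    ppr = sum(y_pred) / len(y_pred)
    preds_on_positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    tpr = sum(preds_on_positives) / len(preds_on_positives) if preds_on_positives else 0.0
    return ppr, tpr

# Toy labels: group A has 8/10 positives, group B has 2/10.
y_a = [1] * 8 + [0] * 2
y_b = [1] * 2 + [0] * 8

# A "perfect" classifier predicts the true label for everyone.
ppr_a, tpr_a = rates(y_a, y_a)
ppr_b, tpr_b = rates(y_b, y_b)

print(ppr_a, tpr_a)  # selection rate 0.8, true positive rate 1.0
print(ppr_b, tpr_b)  # selection rate 0.2, true positive rate 1.0
```

Hardt et al. (2016) and Kleinberg et al. (2016) prove much stronger and more general versions of this tension; the sketch only makes the base-rate intuition concrete.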

Naturally, computer scientists are not the first to discover this. What’s happened is that it is their turn to discover this eternal truth because in this historical moment computer science is the scientific discipline that is most emblematic of power. This is because the richest and most powerful companies, the ones almost everybody depends on daily, are technology companies, and these companies project the image that their success is due mainly to the scientific genius of their early employees and the quality of the technology that is at their operational core.

The problem is that computer science as a scientific discipline has very little to do with why large technology companies have so much power and sometimes abuse that power. These companies are much more than their engineers; they also include designers, product managers, salespeople, public relations people, and of course executives and shareholders. As sociotechnical organizations, they are most responsive to the profit motive, government regulations, and consumer behavior. Even if being fair were technically possible, they would still be businesses with very non-technical reasons for being unfair or unaccountable.

Perhaps because these large companies are so powerful, few of the papers at the conference critiqued them directly. Instead, the focus was often on the software systems used by municipal governments. These were insightful and important papers. Barabas et al.’s paper questioned the assumptions motivating much of the inquiry around “fairness in machine learning” by delving into the history and ideology of actuarial risk assessment in criminal sentencing. Chouldechova et al.’s case study in the workings of a child mistreatment hotline (winner of a best paper award) was a realistic and balanced study of the challenges of operating an algorithmic risk assessment system in municipal social services. At its best, FAT* didn’t look much like a computer science conference at all, even when the speakers and authors had computer science training. At its best, FAT* was grappling towards something new.

Some of this grappling is awkward. Buolamwini and Gebru presented a technically and politically interesting study of how commercially available facial recognition technologies underperform on women, on darker-skinned people, and intersectionally on darker-skinned women. In addition to presenting their results, the speakers proudly described how some of the facial recognition companies responded to their article by improving the accuracy of their technology. For some at the conference, this was a victory for fairer representation and accountability of facial recognition technology that was otherwise built to favor lighter-skinned men. But others found it difficult to celebrate the improved effectiveness of a technology for automated surveillance. Out of context, it’s impossible to know whether this technology does good or ill to those wearing the faces it recognizes. What was presented as a form of activism against repressive or marginalizing political forces may just as well have been playing into their hands.

This political ambiguity was glossed over, not resolved. And therein lay the crux of the political problem at the heart of FAT*: it’s full of well-intentioned people trying to discover technical band-aids for what are actually systemic social and economic problems. Their intentions and their technical contributions are both laudable. But there was something ideologically fishy going on, a fishiness reflective of a broader historical moment. Nancy Fraser (2016) has written about the phenomenon of progressive neoliberalism, an ideology that sounds like an oxymoron but in fact reflects the alliance between the innovation sector and identity-based activist movements. Fraser argues that progressive neoliberalism has been a hegemonic force until very recently. This year’s FAT*, with its mainly progressive sense of Fairness and Accountability and its arguably neoliberal emphasis on computational solutions, was a throwback to what for many at the conference was a happier political time. I hope that next year’s conference takes a cue from Fraser and is more critical of the zeitgeist.

For now, as a form of activism that changes things for the better, this year’s conference largely fell short because it would not address the systemic elephants in the room. A dialectical sublation is necessary and imminent. For it to do this effectively, the conference may need to add another letter to its name, representing another value. Michael Veale has suggested that the conference add an “R”, for reflexivity, perhaps a nod to the cherished value of critical qualitative scholars, who are clearly welcome in the room. However, if the conference is to realize its highest potential, it should add a “J”, for justice, and see what the bright minds of computer science think of that.

References

Arendt, Hannah. The Human Condition. Doubleday, 1959.

Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).

Buolamwini, Joy, and Timnit Gebru. “Gender shades: Intersectional accuracy disparities in commercial gender classification.” Conference on Fairness, Accountability and Transparency. 2018.

Chouldechova, Alexandra, et al. “A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions.” Conference on Fairness, Accountability and Transparency. 2018.

Fraser, Nancy. “Progressive neoliberalism versus reactionary populism: A choice that feminists should refuse.” NORA-Nordic Journal of Feminist and Gender Research 24.4 (2016): 281-284.

Hardt, Moritz, Eric Price, and Nati Srebro. “Equality of opportunity in supervised learning.” Advances in Neural Information Processing Systems. 2016.

Hellman, Deborah. “Indirect Discrimination and the Duty to Avoid Compounding Injustice.” (2017).

Horkheimer, Max. Eclipse of Reason. 1947. Reprint, New York: Continuum, 1974.

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

[blog] Cloud communities and the materiality of the digital (GLOBALCIT project, EUI)


This invited blog post originally appeared in the forum ‘Cloud Communities: The Dawn of Global Citizenship?’ of the GLOBALCIT project (European University Institute). It is part of an interesting multidisciplinary conversation accessible from the GLOBALCIT website. I wish to thank Rainer Bauböck and Liav Orgad for the invitation to contribute to the debate.

Cloud communities and the materiality of the digital

By Stefania Milan (University of Amsterdam)

As a digital sociologist, I have always found ‘classical’ political scientists and lawyers a tad too reluctant to embrace the idea that digital technology is a game changer in so many respects. In the debate spurred by Liav Orgad’s provocative thoughts on blockchain-enabled cloud communities, I am particularly fascinated by the tension between techno-utopianism on the one hand (above all, Orgad and Primavera De Filippi), and socio-legal realism on the other (e.g., Rainer Bauböck, Michael Blake, Lea Ypi, Jelena Dzankic, Dimitry Kochenov). I find myself somewhere in the middle. In what follows, I take a sociological perspective to explain why there is something profoundly interesting in the notion of cloud communities, why however little of it is really new, and why the obstacles ahead are bigger than we might like to think. The point of departure for my considerations is a number of experiences in the realm of transnational social movements and governance: what can we learn from existing experiments that might help us contextualize and rethink cloud communities?

Three problems with Orgad’s argument

To start with, while I sympathise with Orgad’s provocative claims, I cannot but notice that what he deems new in cloud communities—namely the global dimension of political membership and its networked nature—is in fact rather old. Since the 1990s, transnational social movements for global justice have offered non-territorial forms of political membership—not unlike those described as cloud communities. Similar to cloud communities, these movements were the manifestation of political communities based on consent, gathered around shared interests and only minimally rooted in the physical territories corresponding to nation states (see, e.g., Tarrow, 2005). In the fall of 2011 I observed with keen interest the emergence of yet another global wave of contention: the so-called Occupy mobilisation. As a sociologist of the web, I set off in search of a good metaphor to capture the evolution of organised collective action in the age of social media, and the obvious candidate was… the cloud. In a series of articles (see, for example, here and here) and book chapters (e.g., here and here), I developed my theory of ‘cloud protesting’, intended to capture how the algorithmic environment of social media alters the dynamics of organized collective action. In light of my empirical work, I agree with Bauböck, who acknowledges that cloud communities might have something to do with the “expansion of civil society, of international organizations, or of traditional territorial polities into cyberspace”. He also points out how, sadly, people can express their political views – and, I would add, engage in disruptive actions, as happens at some fringes of the movement for global justice – only because “a secure territorial citizenship” protects their exercise of fundamental rights, such as freedom of expression and association. Hence the questions a sociologist might ask: do we really need the blockchain to enable the emergence of cloud communities?
If, as I argue, the existence of “international legal personas” is not a pre-requisite for the establishment of cloud communities, what would the creation of “international legal personas” add to the picture?[1]

Secondly, while I understand why a blockchain-enabled citizenship system would make life easier for the many who do not have access to a regular passport, I am wary of its “institutionalisation”, on account of the probable discrepancies between the ideas (and the mechanisms) associated with a Westphalian state and those of political activists and radical technologists alike. On the one hand, citizens interested in “advanced” forms of political participation (e.g., governance and the making of law) might not necessarily be inclined to form a state-like entity. For example, many accounts of the so-called “movement for global justice” (McDonald, 2006; della Porta & Tarrow, 2005) show how “official” membership and affiliation is often not required, not expected and especially not considered desirable. Activism today is characterised by a dislike and distrust of the state, and a tendency to privilege flexible, multiple identities (e.g., Bennett & Segerberg, 2013; Juris, 2012; Milan, 2013). On the other hand, the “radical technologists” behind the blockchain project are animated by values—an imaginaire (Flichy, 2007)—deeply distinct from those of the state (see, e.g., Reijers & Coeckelbergh, 2018). While blockchain technology is enabled by a complex constellation of diverse actors, it is legitimate to ask whether it is possible to bend a technology built with an “underlying philosophy of distributed consensus, open source, transparency and community” and meant to “be highly disruptive” (Walport, 2015) to serve purposes similar to those of states.

Thirdly, Orgad’s argument falls short of a clear description of what the ‘cloud’ stands for in his notion of cloud communities. When thinking about ‘clouds’, as a metaphor and a technical term, we cannot but think of cloud computing, a “key force in the changing international political economy” (Mosco, 2014, p. 1) of our times, which entails a process of centralisation of software and hardware allowing users to reduce costs by sharing resources. The cloud metaphor, I argued elsewhere (Milan, 2015), is an apt one as it exposes a fundamental ambivalence of contemporary processes of “socio-legal decentralisation”. While claiming distance from the values and dynamics of the neoliberal state, a project of building blockchain-enabled communities still relies on commercially-owned infrastructure to function.

Precisely to reflect on this ambiguity, my most recent text on cloud protesting interrogates the materiality of the cloud. We have long lived in the illusion that the internet was a space free of geography. Yet, as IR scholar Ron Deibert argued, “physical geography is an essential component of cyberspace: Where technology is located is as important as what it is” (original italics). The Snowden revelations, to name just one example, have brought to the forefront the role of the national state in—openly or covertly—setting the rules of user interactions online. What’s more, we can no longer blame the state alone, but rather the “surveillant assemblage” of state and corporations (Murakami Wood, 2013). To me, the big absentee in this debate is the private sector and corporate capital. De Filippi briefly mentioned how the “new communities of kinship” are anchored in “a variety of online platforms”. However, what Orgad’s and partially also Bauböck’s contributions underscore is the extent to which intermediation by private actors stands in the way of creating a real alternative to the state—or at least of fulfilling certain dreams of autonomy, best represented today by the fascination with blockchain technology. Bauböck rightly notes that “state and corporations… will find ways to instrumentalise or hijack cloud communities for their own purposes”. But there is more to it than that: the infrastructure we use to enable our interpersonal exchanges and, why not, the blockchain, are owned and controlled by private interests subject to national laws. They are not merely neutral pipes, as Dumbrava reminds us.

Self-governance in practice: A cautionary tale

To be sure, many experiments allow “individuals the option to raise their voice … in territorial communities to which they do not physically belong”, as beautifully put by Francesca Strumia. Internet governance is a case in point. Since the early days of the internet, cyberlibertarian ideals, enshrined for instance in the ‘Declaration of Independence of Cyberspace’ by the late John Perry Barlow, have attributed little to no role to governments—both in deciding the rules for the ‘new’ space and in deciding the citizenship of its users (read: the right to participate in the space and in the decision-making about the rules governing it). In those early flamboyant narratives, cyberspace was to be a space where users—but really engineers above all—would translate into practice their wildest dreams in matters of self-governance, self-determination and, to some extent, fairness. While cyberlibertarian views have been appropriated by conservative (anti-state) and progressive forces alike, some of their founding principles have spilled over into real governance mechanisms—above all the governance of standards and protocols by the Internet Engineering Task Force (IETF), and the management of the Domain Name System (DNS) by the Internet Corporation for Assigned Names and Numbers (ICANN).[2] Here I focus on the latter, where I have been active for about four years (2014-2017).

ICANN is organized in constituencies of stakeholders, including contracted parties (the ‘middlemen’, that is to say the registries and registrars that allocate and manage names and numbers on a regional basis on behalf of ICANN, and whose relationship with ICANN is regulated by contract), non-contracted parties (corporations doing business on the DNS, e.g. content or infrastructure providers) and non-commercial internet users (read: us). ICANN’s proceedings are fully recorded and accessible from its website; its public meetings, held three times a year and rotating around the globe, are open to everyone who wants to walk in. Governments are represented in a sort of United Nations-style entity called the Governmental Advisory Committee. While corporate interests are well represented by an array of professional lobbyists, the Non-Commercial Stakeholder Group (NCSG), which stands in for civil society,[3] is a mix and match of advocates of various extractions, expertise and nationalities: internet governance academics, nongovernmental organisations promoting freedom of expression, and independent individuals who take an interest in the functioning of the logical layer of the internet.

The 2016 transition of the stewardship over the DNS from the US Department of Commerce to the “global multistakeholder community” realised a dream straight out of the cyberlibertarian vision of the early days: the technical oversight of the internet[4] is in the hands of the people who make and use it, and the (advisory) role of the state is marginal. Accountability now rests solely with the community behind ICANN, which envisioned (and is still implementing) a complex system of checks and balances to allow the various stakeholder voices to be fairly represented. No other critical infrastructure is regulated by its own users. To build on Orgad’s reasoning, the community around ICANN is a cloud community, which operates by voluntary association and consensus,[5] and is entitled to produce “governance and the creation of law”.[6]

But the system is far from perfect. Let’s look at how the so-called civil society is represented, focusing on one such entity, the NCSG. Firstly, given that everyone can participate, the variety of views represented is enormous, and it often hinders the ability of the constituency to be effective in policy negotiations. Yet the size of the group is relatively small: at the time of writing, the Non-Commercial User Constituency (the bigger of the two that form the NCSG) comprises “538 members from 161 countries, including 118 noncommercial organizations and 420 individuals”. Although this makes it the largest constituency within ICANN, it is nothing when compared to the global internet population it serves, confirming, as Dzankic argues, that “direct democracy is not necessarily conducive to broad participation in decision-making”. Secondly, ICANN policy-making is highly technical and specialised; the learning curve is dramatically steep. Thirdly, to be effective, the amount of time a civil society representative should spend on ICANN is largely incompatible with a regular daily job; civil society cannot compete with corporate lobbyists. Fourthly, with ICANN meetings rotating across the globe, one needs to be on the road for at least a month per year, at considerable personal and financial cost.[7] In sum, while participation is in principle open to everyone, informed participation has much higher access barriers, which have to do with expertise, time, and financial resources (see, e.g., Milan & Hintz, 2013).

As a result, we observe a number of dangerous distortions of political representation. For example, when only the highly motivated participate, the views and “imaginaries” represented are often at the opposite ends of the spectrum (cf. Milan, 2014). Only the most involved really partake in decision-making, through a mechanism well known in sociology: the “tyranny of structurelessness” (Freeman, 1972), which is typical of participatory, consensus-based organising. The extreme personalisation of politics that we observe within civil society at ICANN—a small group of long-term advocates with high personal stakes—also yields another, similar mechanism, known as “the tyranny of emotions” (Polletta, 2002), by which the most invested, independently of the suitability of their curricula vitae, end up assuming informal leadership roles—and, as the case of ICANN shows, even in the presence of formal and carefully weighted governance structures. Decision-making is thus based on a sort of “microconsensus” within small decision-making cliques (Gastil, 1993).[8] To make things worse, ICANN is increasingly making exceptions to its own, community-established rules, largely under pressure from corporations as well as law enforcement: for example, the corporation has recently been accused of bypassing consensus policy-making through voluntary agreements and private contracting.

Why not (yet?): On new divides and bad players

In conclusion, while I value the possibilities the blockchain technology opens for experimentation as much as Primavera De Filippi, I do not believe it will really solve our problems in the short to middle-term. Rather, as it is always with technology because of its inherent political nature (cf., Bijker, Hughes, & Pinch, 2012), new conflicts will emerge—and they will concern both its technical features and its governance.

Earlier contributors to this debate have raised important concerns which are worth listening to. Besides Bauböck’s concerns over the perils for democracy of a consensus-based, self-governed model, endorsed also by Blake, I want to echo Lea Ypi’s reminder of the enormous potential for exclusion embedded in technologies, as digital skills (but also income) are not equally distributed across the globe. For the time being, a citizenship model based on blockchain technology would be for the elites only, and would contribute to creating new divides and amplifying existing ones. The first fundamental step towards the cloud communities envisioned by Orgad would thus see the state stepping in (once again) and taking charge of creating appropriate data and algorithmic literacy programmes whose scope is out of reach for corporations and organised civil society alike.

There is more to it than that, however. The costs of blockchain technology to our already fragile ecosystem are rising along with its popularity. These infrastructures are energy-intensive: writing about the cryptocurrency Bitcoin, the tech magazine Motherboard estimated that each transaction consumes 215 kilowatt-hours of electricity—the equivalent of the weekly consumption of an American household. A world built on blockchain would have a vast environmental footprint (see also Mosco, 2014). Once again, the state might play a role in imposing adequate regulation mindful of the environmental costs of such programs.
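As a rough sanity check on that comparison (the annual household figure below is my own assumption, in the ballpark of published US averages, not a number from the article):

```python
# Back-of-envelope check: does ~215 kWh per Bitcoin transaction really match
# about a week of an average American household's electricity use?
ANNUAL_US_HOUSEHOLD_KWH = 10_800  # assumed rough annual average (not from the article)
per_week = ANNUAL_US_HOUSEHOLD_KWH / 52

print(round(per_week))  # ~208 kWh per week, close to the 215 kWh estimate
```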

But I do not intend to glorify the role of the state. On the contrary, I believe we should also watch out for any attempts by the state to curb innovation. The relatively brief history of digital technology, and even more that of the internet, is awash with examples of late but extremely damaging state interventions. As soon as a given technology performs roles or produces information that are of interest to the state (e.g., interpersonal communications), the state wants to jump in, and often does so in pretty clumsy ways. The recent surveillance scandals have abundantly shown how state powers firmly inhabit the internet (cf., Deibert, 2009; Deibert, Palfrey, Rohozinski, & Zittrain, 2010; Lyon, 2015)—and, as the Cambridge Analytica case reminds us, so do corporate interests. Moreover, the two are, more often than not, dangerously aligned.

I do not intend, with my cautionary tales, to hinder any imaginative effort to explore the possibilities offered by blockchain to rethink how we understand and practice citizenship today. The case of Estonia shows that different models based on alternative infrastructure are possible, at least on a small scale and in the presence of a committed state. As scholars we ought to explore those possibilities. Much work is needed, however, before we can proclaim the blockchain revolution.

References

Bennett, L. W., & Segerberg, A. (2013). The Logic of Connective Action Digital Media and the Personalization of Contentious Politics. Cambridge, UK: Cambridge University Press.

Bijker, W. E., Hughes, T. P., & Pinch, T. (Eds.). (2012). The Social Construction of Technological Systems. New Direction in the Sociology and History of Technology. Cambridge, MA and London, England: MIT Press.

Deibert, R. J. (2009). The geopolitics of internet control: censorship, sovereignty, and cyberspace. In A. Chadwick & P. N. Howard (Eds.), The Routledge Handbook of Internet Politics (pp. 323–336). London: Routledge.

Deibert, R. J., Palfrey, J. G., Rohozinski, R., & Zittrain, J. (Eds.). (2010). Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace. Cambridge, MA: MIT Press.

della Porta, D., & Tarrow, S. (Eds.). (2005). Transnational Protest and Global Activism. Lanham, MD: Rowman & Littlefield.

Flichy, P. (2007). The internet imaginaire. Cambridge, Mass.: MIT Press.

Freeman, J. (1972). The Tyranny of Structurelessness.

Gastil, J. (1993). Democracy in Small Groups. Participation, Decision Making & Communication. Philadelphia, PA and Gabriola Island, BC: New Society Publishers.

Juris, J. S. (2012). Reflections on #Occupy Everywhere: Social Media, Public Space, and Emerging Logics of Aggregation. American Ethnologist, 39(2), 259–279.

Lyon, D. (2015). Surveillance After Snowden. Cambridge and Malden, MA: Polity Press.

McDonald, K. (2006). Global Movements: Action and Culture. Malden, MA and Oxford: Blackwell.

Milan, S. (2013). WikiLeaks, Anonymous, and the exercise of individuality: Protesting in the cloud. In B. Brevini, A. Hintz, & P. McCurdy (Eds.), Beyond WikiLeaks: Implications for the Future of Communications, Journalism and Society (pp. 191–208). Basingstoke, UK: Palgrave Macmillan.

Milan, S. (2015). When Algorithms Shape Collective Action: Social Media and the Dynamics of Cloud Protesting. Social Media + Society, 1(1).

Milan, S., & Hintz, A. (2013). Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena? Policy & Internet, 5, 7–26.

Milan, S., & ten Oever, N. (2017). Coding and encoding rights in internet infrastructure. Internet Policy Review, 6(1).

Mosco, V. (2014). To the Cloud: Big Data in a Turbulent World. New York: Paradigm Publishers.

Murakami Wood, D. (2013). What Is Global Surveillance?: Towards a Relational Political Economy of the Global Surveillant Assemblage. Geoforum, 49, 317–326.

Polletta, F. (2002). Freedom Is an Endless Meeting: Democracy in American Social Movements. Chicago: University of Chicago Press.

Reijers, W., & Coeckelbergh, M. (2018). The Blockchain as a Narrative Technology: Investigating the Social Ontology and Normative Configurations of Cryptocurrencies. Philosophy & Technology, 31(1), 103–130.

Tarrow, S. (2005). The New Transnational Activism. New York: Cambridge University.

Walport, M. (2015). Distributed Ledger Technology: Beyond blockchain. London: UK Government Office for Science.

Notes:

[1] I am aware that there is a fundamental drawback in social movements when compared to cloud communities: unlike the latter, the former are not rights providers. However, these are the questions one could ask taking a sociological perspective.

[2] The system of unique identifiers of the DNS comprises the so-called “names”, standing in for domain names (e.g., www.eui.eu), and “numbers”, or Internet Protocol (IP) addresses (e.g., the “machine version” of a domain name, which a router, for example, can understand). The DNS can be seen as a sort of “phone book” of the internet.

[3] Technically, of the DNS, which is only a portion of what we call “the internet”, although the most widely used one.

[4] Civil society representation in ICANN is more complex than what is described here. The NCSG is composed of two (litigious) constituencies, namely the Non-Commercial Users Constituency (NCUC) and the Not-for-Profit Operational Concerns Constituency (NPOC). In addition, “non-organised” internet users can elect their representatives in the At-Large Advisory Committee (ALAC), organised on a regional basis. The NCSG, however, is the only one that directly contributes to policy-making.

[5] ICANN is both a nonprofit corporation registered under Californian law, and a community of volunteers who set the rules for the management of the logical layer of the internet by consensus. See also the ICANN Bylaws (last updated in August 2017).

[6] This should at least in part address Post’s doubts about the ability of a political community to govern those outside of its jurisdiction. One might argue that internet users are, perhaps unwillingly or simply unconsciously, within the “jurisdiction” of ICANN. I do believe, however, that the case of ICANN is an interesting one for its being in between the two “definitions” of political communities.

[7] ICANN allocates consistent but not sufficient resources to support civil society participation in its policymaking. These include travel bursaries, accommodation costs, and fellowship programs for the induction of newcomers.

[8] Although a quantitative analysis of the stickiness of participation in relation to discursive change reveals a more nuanced picture (see, for example, Milan & ten Oever, 2017).

 

[blog] Tech, data and social change: A plea for cross-disciplinary engagement, historical memory, and … Critical Community Studies

Kersti R. Wissenbach | March 2018

It has been a while since I first got my feet into the universe of technology and socio-political change. Back then, coming from a critical development studies and communication science background, I was fascinated by the role community radio could play in fostering dialogue among communities in remote areas, and between those communities and their government representatives.

My journey started in the early 2000s, in the most remote parts of Upper West Ghana, with Radio Progress, a small community radio station doing a great job in embracing diversity. Single-feature mobile phones were about to become a thing in the country, and the radio started to experiment with call-in programs for engaging citizens in live discussions with local politicians. Before, radio volunteers would drive to the different villages in order to collect people’s concerns, and only then bring those recorded voices back into a studio-based discussion with invited politicians. The community could merely listen in as their concerns were discussed. With the advent of mobile phones, people suddenly could do more than just passively listen to the responses: finally they could engage in real-time dialogue with their representatives, hearing their own voices on air. Typically, people would gather with family and other community members during the call-in hours to voice their concerns collectively. Communities would not only raise concerns, but also share positive experiences of local representatives following up on their requests. These stories encouraged neighbouring communities to get involved in the call-in programs as well, raising their own concerns and needs.

Fast forward to today and much has changed on the ‘tech for social change’ horizon, at least if we listen to donor agendas and the dominant discourses in the field and in academia. But what has really changed is largely one thing: the state of technology [1]. In the space of two decades, our enthusiasm, and donor attention, fixed on the ubiquity of mobile technologies, followed by online (crowdsourcing) platforms, social media, everything data (oh, wait … BIG data), and blockchain technology.

Whilst much of what has changed in these regards over the last few decades can be bundled under the Information and Communication Technologies for Development (ICT4D) label, one aspect seems to remain constant: change, if it is meant to happen and last, has to be rooted in the contexts and needs of those it intends to address. This is the ultimate ingredient for the direct and inclusive engagement of so-called civil society. Like a cake that needs yeast to rise, no matter whether we add chocolate or lemon, socio-political change in the interest of the people requires the buy-in of the people, no matter what tech is on the menu at a certain moment in time, and in a certain place of the world.

We have learnt many lessons along the way, and we sometimes had to learn them the hard way. Some are condensed in initiatives such as the Principles for Digital Development, a living set of principles that helps practitioners engaging with the role of technologies in social or political change programs to learn from past experiences, in order to avoid falling into the same traps – be they of a technological, political, and/or ethical nature.

We have observed an upsurge in ‘civic’ uses of technologies for facilitating people’s direct engagement in governance, coupled with an emphasis on ‘open government models’. Much of this work emerged in parallel to or from earlier ICT4D experiences, and largely taps into the same funding structures. The lessons learned should be a shared heritage in the field. With various early programs coming to an end, this transnational community of well-intended practitioners, many of whom have been involved in what we earlier called ICT4D work, is now reflecting on the effectiveness of technology in promoting civil society participation in governance dynamics. What puzzles me year after year, however, is how practitioners of civic tech and open government, currently producing ‘first lessons learned’ on the effectiveness of technology for civil society participation in governance, are largely reproducing what we already know, and thus lessons we should have learnt. As critical as I am towards project work driven by traditional development cooperation, all this leaves me wondering what is novel, if anything, in these newest networks – largely breathing from the same funding pots.

New developments in the tech field do not liberate us from the responsibility to learn from what has already been learned – and to build on it. The lessons learnt in decades of development communication and ICT4D work evidently cut across technological innovations, and apply to mobile technology as much as to the blockchain. Most importantly: different socio-political contexts call for tailored solutions, given that the challenges remain distinct and increase in complexity, as we can see in the growing literature on critical data studies (see e.g. Dalton et al., 2016; Kitchin and Lauriault, 2014).

The critical role of proactive communities, their contexts, and their needs in fostering social or political change has been discussed for decades. Besides, as the Radio Progress anecdote shows, it applies across technologies. Sadly, once again, the dominant civic tech discourse seems to keep departing from the ‘tech’ rather than the ‘civic’. Analyses start off from the technology-in-governance side, rather than from the much-needed critical discussion of the fundamental role of power in governance: how it is constructed, reproduced, and distributed.

Departing from the aseptic end of the spectrum confines us to a tech-centric perspective, with all the limitations highlighted since the early days of Communication for Social Change and ICT4D critique. Instead, we should reflect on how power structures are seeded and nourished from within the very same communities. This relates to issues such as geographical as much as skill-related biases, which give rise to patterns of exclusion that no technology alone can solve. Those biases are then reproduced, not solved, by technological solutions whose aim would be, instead, to enable inclusive forms of governance.

For the civic tech field to move forward, we should move beyond an emphasis on feedback allocation and end-users that ultimately centres on the technological component; we should instead adopt a broader perspective in which we recognise the user not merely as a tech consumer/adopter, but as a complex being embedded in civil society networks and power structures. We should therefore ask critical questions beyond technology, about communities instead; we should ask ourselves, for example, how to best integrate people’s needs and backgrounds across all stages of civic tech programs. Such a perspective should include a critical examination of who the driving forces of the civic tech community are and how they subsequently affect decision-making on the development of infrastructures. What is crucial to understand, I argue, is that only inclusive communities can really translate into inclusive technology approaches and, consequently, into inclusive governance.

From the perspective of an academic observer, a disciplinary evolution is in order too, if we are to capture, understand, and critically contribute to these dynamics. The proposed shift of focus from the ‘tech’ to the ‘civic’ should be mirrored in the literature by a new sub-field, which we may call Critical Community Studies. Emerging at the crossroads of disciplines such as Social Movement Studies, Communication for Social Change, and Critical Data Studies, Critical Community Studies would encourage taking the community as the entry point in the study of technology for social change. This means, in a case such as the civic tech community, addressing issues such as internal diversity, the inclusiveness of decision-making processes, and the different ways of engaging people. It also relates to the roots of decisions made in civic tech projects, and to how far the communities supposed to benefit from certain decisions have a seat at the table. More generally, Critical Community Studies should invite us to critically reflect on the concept of inclusion, both in practitioner agendas and in academic frameworks. It would also encourage us to contextualize, take a step back, and ask difficult questions, departing from critical development and communication studies (see e.g. Enghel, 2014; Freire, 1968; Rodriguez, 2016), while taking a feminist perspective (see e.g. Haraway, 1988; Mol, 1999).

Since such a disciplinary evolution cannot but happen in dialogue with existing approaches and thinkers, I would like this post to evolve into a vibrant, cross-disciplinary conversation on what Critical Community Studies could look like.

 

I would like to thank Stefania Milan for very valuable and in-depth feedback and insights whilst writing this post.

 

 

Cited work

Dalton CM, Taylor L and Thatcher J (2016) Critical Data Studies: A dialog on data and space. Big Data & Society 3(1): 2053951716648346. DOI: 10.1177/2053951716648346.

Enghel F (2014) Communication, Development, and Social Change: Future Alternatives. In: Global communication: new agendas in communication. Routledge, pp. 129–141.

Freire P (1968) Pedagogy of the Oppressed. New York: Herder and Herder.

Haraway D (1988) Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies 14(3): 575–599. DOI: 10.2307/3178066.

Kitchin R and Lauriault T (2014) Towards Critical Data Studies: Charting and Unpacking Data Assemblages and Their Work. ID 2474112, SSRN Scholarly Paper. Rochester, NY: Social Science Research Network. Available at: https://papers.ssrn.com/abstract=2474112 (accessed 19 March 2018).

Mol A (1999) Ontological politics. A word and some questions. The Sociological Review 47(S1): 74–89. DOI: 10.1111/j.1467-954X.1999.tb03483.x.

Rodriguez C (2016) Human agency and media praxis: Re-centring alternative and community media research. Journal of Alternative and Community Media 1(0): 36–38.

 

[1] I am consciously not using the innovation term here, since I truly believe that innovation can only be what truly fits into people’s contexts and needs. Innovation, then, is not to be confused with the latest tech advancement or hype.

[blog] Facebook newsfeed changes: Three hypotheses to look into the future

Image: Vincenzo Cosenza

In this blog post, DATACTIVE research associate Antonio Martella looks ahead at the consequences of Facebook’s news feed modifications resulting from larger corporate policy changes. He investigates and discusses the implications through three hypotheses: 1) the divide between the attention-rich and the attention-poor will grow; 2) increasing engagement with peer-created content will tighten the filter-bubble aspect of networking; and 3) the “new” news feed will have a negative impact on users’ mood.

Guest Author: Antonio Martella

On November 11th, 2017, Facebook announced that the user timeline would change in January 2018. In their words:

“With this update, we will also prioritize posts that spark conversations and meaningful interactions between people. To do this, we will predict which posts you might want to interact with your friends about and show these posts higher in the feed. These are posts that inspire back-and-forth discussion in the comments and posts that you might want to share and react to – whether that’s a post from a friend seeking advice, a friend asking for recommendations for a trip, or a news article or video prompting lots of discussions. […] We will also prioritize posts from friends and family over public content, consistent with our News Feed values.” (Newsroom Facebook 2018)

Any modification in the feed algorithm will have many consequences, and these are not equally predictable. Facebook is a very complicated environment, semi-public in nature and not only related to friendship management. In fact, as the Pew Research Center reported last September, 67% of Americans consume news over social media. This pattern seems to apply to European news consumption too, where youngsters are exposed to news mostly in a social media context rather than via television or newspapers. Indeed, as the Reuters Institute’s Digital News Report 2017 shows, many users follow others because of the news they share.

According to the Pew Research Report, Facebook surpasses other social media as a source of news consumption. This is partially due to Facebook’s large userbase, and partially because news is actually interwoven with people’s timelines. The Digital News Report also shows that exposure to news on Facebook is often incidental: a direct result of news shared by other users, the wide range of news companies that are followed, and so on. Nevertheless, we need to keep in mind that exposure to any content in social media or search engines is algorithm-driven.

Following these considerations, there are several possible consequences of the Facebook news feed changes. This blogpost investigates three probable implications:

  1. the divide between the attention-rich and the attention-poor will grow;
  2. continuous personalisation;
  3. a negative impact on users’ mood.

1. The divide between the attention-rich and the attention-poor will grow

All pages and groups that share content on Facebook will lose visibility and the revenues that come from users reading their posts, clicking their links, and visiting their websites [1]. It is easy to guess that those who want to remain visible have two choices: either pay more for Facebook ads in order to make their posts visible, or create more engaging content. But the engagement generated on Facebook is deeply connected with the number of followers. This will probably widen the gap between the attention-rich and the attention-poor, in line with the observed Matthew effect (Merton, 1968) that rules many patterns and practices online (Barabasi, 2013) and in social media.

In fact, many aspects of society, both online and offline, are governed by the preferential attachment process that lies behind the so-called “Matthew effect” or the “80/20 rule”. Hence, the more connections you have, the more visible you are, and the more new connections you will get as a consequence. This principle can easily be illustrated by the fact that famous websites and people tend to have more followers on social media. But the other way around is equally true: the fewer connections you have, the less attention you will get. In conclusion, content produced by people or organizations with less power, fewer resources, and lower budgets will decrease in visibility.
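The rich-get-richer dynamic of preferential attachment is easy to see in a simulation. The following sketch is illustrative only (the account count, step count, and seed are arbitrary assumptions, not drawn from the cited literature): each new “follow” goes to an account with probability proportional to its current follower count, so accounts that start out equal still end up highly unequal.

```python
import random

def simulate_preferential_attachment(steps=10_000, accounts=5, seed=42):
    """Each new follower picks an account with probability proportional
    to that account's current follower count (rich-get-richer)."""
    random.seed(seed)
    followers = [1] * accounts  # everyone starts out equal
    for _ in range(steps):
        # weighted pick: popular accounts are more likely to gain the follower
        pick = random.choices(range(accounts), weights=followers)[0]
        followers[pick] += 1
    return sorted(followers, reverse=True)

counts = simulate_preferential_attachment()
print(counts)  # a heavily skewed distribution, despite the equal start
```

Small random early leads compound over the run, which is exactly the “80/20” pattern described above: attention flows disproportionately to those who already have it.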

2. Continuous personalisation

The second consequence of the news feed change concerns the kind of content that will be dominant in users’ feeds. According to Mark Zuckerberg, content produced and shared by “friends and family” will be more visible in all Facebook timelines. But a news feed dominated by friends’ posts could arguably exacerbate two negative social media phenomena, previously described through the notions of the filter bubble and the echo chamber. Online social networks developed on social media platforms are strongly based on homophily (Barberà, 2014; Aiello et al. 2012), meaning that users connect with others who share similar interests, values, political views, etc. This typical behaviour is also found in offline social networks (McPherson, Smith-Lovin, Cook, 2001), and shows its most problematic characteristics when it comes to information diffusion.

On the one hand, this change will foster the filter bubble in which we are all involved. In fact, filter bubbles (Pariser, 2011) are the result of users’ activities on the web: social media algorithms continuously learn from every user’s clicks and likes [2]. On the other hand, more homophily in social media due to the prevalence of “friends and family content” could easily sustain the echo chamber effect. This phenomenon preceded social media platforms, for like-minded people love to talk to each other, reinforcing their opinions and biases. However, in social media it is easier to avoid contrasting points of view, values, or interests as a consequence of the self-selection of “friends”, pages, and groups. Indeed, as research has highlighted, users tend to promote their favourite narratives and to form polarised groups on Facebook (Quattrociocchi, Scala, Sunstein 2016; Bakshy, Messing, Adamic, 2015), even though this is not a clear and deterministic process (Barberà et al. 2015).
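The feedback loop behind the filter bubble can be illustrated with a toy ranking function. This is a deliberately simplified sketch (Facebook’s actual ranking model is proprietary and far more complex; the source names and click history below are invented): posts from sources the user engaged with before rank higher, attract more engagement, and rank higher still.

```python
from collections import Counter

def rank_feed(posts, click_history):
    """Rank posts by how often the user previously engaged with each source."""
    engagement = Counter(click_history)
    return sorted(posts, key=lambda post: engagement[post["source"]], reverse=True)

posts = [
    {"id": 1, "source": "friend_A"},
    {"id": 2, "source": "news_outlet"},
    {"id": 3, "source": "friend_B"},
]
# the user has mostly clicked on friend_A's content so far
feed = rank_feed(posts, click_history=["friend_A", "friend_A", "news_outlet"])
print([post["source"] for post in feed])
```

Because the top-ranked source gets seen (and clicked) most, its counter grows fastest on the next iteration: the loop narrows the feed toward already-familiar sources unless something breaks it.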

Based on these considerations, another outcome of the news feed changes will be a growth in the visibility of friends’ opinions and points of view. This will most probably result in a more polarised information flow in users’ news feeds and a limited number of different points of view and of professional (or semi-professional) content. In practice this means that, if we think about a contested news story such as the alleged link between glyphosate and cancer, we have to take into account that information sources will be more socially driven; the chance to read different points of view and professional news will be smaller than before.

3. Negative impact on users’ mood

The news feed changes will probably influence the mood of billions of people in an inscrutable way. One could say that a news feed more populated by friends’ content would have a negative impact on happiness. According to Mark Zuckerberg, “the research shows that when we use social media to connect with people we care about, it can be good for our well-being”. In fact, an experiment conducted on users’ timelines (Kramer, Guillory, Hancock, 2014) showed that content in users’ timelines does indeed influence their mood. As many researchers have shown, personal feelings (happiness, depression, etc.) flow through offline social networks (Fowler, Christakis, 2008), and their representation in online environments seems to share similar diffusion patterns. In other words: moods spread contagiously online. By extension, recent scholarly and non-scholarly work shows that scrolling through your Facebook feed can have a negative impact on well-being (Shakya, Christakis, 2017) [3]. Lastly, it has been demonstrated that the constant bombardment of everyone’s news biases the attempt to present the best representation of the self, and it seems to have a negative impact on happiness.

Questions to ask

Through these hypotheses, I have tried to show some real-life aspects that might be affected by the important changes to Facebook’s algorithms. As Facebook stated, the platform counts around 2 billion monthly active users.

These statements subsequently evoke two questions:

  1. Can these changes be made by a private company without any form of public discussion?
  2. Is it our democratic right to scrutinize algorithms as organisers of public space?

Further information on how Facebook algorithms work can be found here: an interesting article edited by Share Lab that has tried to shed some light on what is behind this platform.

 

References

Aiello, Luca Maria, Barrat, Alain, Schifanella, Rossano, Cattuto, Ciro, Markines, Benjamin, Menczer, Filippo. 2012. Friendship prediction and homophily in social media. ACM Trans. Web 6, 2, Article 9, 33 p. 66.

Bakshy, Eytan, Messing, Solomon, Adamic, Lada A. 2015. Exposure to ideologically diverse news and opinion on Facebook. Science, 05 Jun 2015: Vol. 348, Issue 6239, pp. 1130-1132.

Pariser, Eli. 2011. The Filter Bubble: What The Internet Is Hiding From You. Penguin: London.

Quattrociocchi, Walter, Scala, Antonio, Sunstein, Cass R. 2016. Echo Chambers on Facebook. Available at SSRN: https://ssrn.com/abstract=2795110.

Shakya, Holly B., Christakis, Nicholas A. 2017. Association of Facebook Use With Compromised Well-Being: A Longitudinal Study in American Journal of Epidemiology, 185:3, pp. 203–211.

Rogers, Richard, 2015. Digital Methods for Web Research, in Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource (ed. Scott, Roberts; Buchmann, Marlis C.; Kosslyn Stephan), Wiley & Sons: New York

 

  1. For example, this is exactly what happened to the blog LittleThings, which had to shut down a month after the news feed change due to the drop in web traffic.
  2. This is already happening, as an Italian experiment on Facebook has partially shown during the last Italian election (link unfortunately only in Italian). According to this experiment, the Facebook news feed shows different kinds of content and media (photos, videos, web links) based on the likes, comments, and shares of each user. Indeed, according to Facebook’s statements, proposed content will be based more on each user’s (algorithmically predicted) intention to interact, fostering the visibility of tailored content.
  3. For example «Liking others’ content and clicking links posted by friends were consistently related to compromised well-being, whereas the number of status updates was related to reports of diminished mental health» (Shakya, Christakis, 2017, p. 210).

 

On the author: Antonio is a PhD candidate in Political Science at the University of Pisa. His research focuses on political leaders’ populism in social media. His approach follows the Digital Methods for Web Research recommendations (Rogers, 2015), and he is particularly interested in social media algorithms and their effects.

BigBang hackathon in London, March 17-18

This weekend the DATACTIVE team will be joining the IETF101 hackathon to work on quantitative mailing-list analysis software. The Internet Engineering Task Force (IETF) is the oldest and most important Internet standard-setting body. The discussions and decisions of the IETF have fundamentally shaped the Internet. All IETF mailing-lists and output documents are publicly available. They represent a true treasure for digital sociologists seeking to understand how the Internet infrastructure and architecture developed over time. To facilitate this analysis, DATACTIVE has been contributing to the development of BigBang, a Python-based tool for automated quantitative mailing-list analysis. Armed with almost 40 gigabytes’ worth of data in the form of plain text files, we are eager to boldly discover what no one has discovered before. By the way, we still have some (open) issues, feel free to contribute on Github 🙂
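As a taste of what such quantitative mailing-list analysis involves, here is a standard-library-only sketch that counts the most frequent senders in an mbox archive of the kind the IETF publishes. This is an illustration of the technique, not BigBang’s actual API, and the file path in the usage comment is hypothetical:

```python
import mailbox
from collections import Counter

def top_senders(mbox_path, n=10):
    """Count messages per 'From' header in an mbox mailing-list archive
    and return the n most frequent senders."""
    counts = Counter()
    for message in mailbox.mbox(mbox_path):
        counts[message.get("From", "unknown")] += 1
    return counts.most_common(n)

# Usage (path hypothetical): top_senders("ietf-announce.mbox")
```

The same pattern extends naturally to threads, reply graphs, and activity over time, which is where tools like BigBang pick up.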

[blog] Internet Archive and Hacker Ethics: Answers to datafication from the hacktivist world

Guest author: Silvia Semenzin

This blogpost looks into the Internet Archive as a case study to discuss hacktivism as a form of resistance to instances of control on the Internet and to the use of data for political and commercial purposes. It argues that hacktivism should not only be considered a social movement, but also an emerging culture informed by what may be defined as ‘hacker ethics’, after Pekka Himanen.

The Internet Archive is a free digital library founded in 1996 by Brewster Kahle, a computer engineer at MIT, who created a non-profit project that aims to collect cultural artefacts (books, images, movies, audio, etc.) and internet pages to promote human knowledge. Building a “global brain” can be challenging in the era of datafication and the Information Society, especially because huge amounts of information (and disinformation) are continuously added to the internet. Trying to create a modern version of the Bibliotheca Alexandrina, the Internet Archive aims to make human knowledge accessible to everybody and to preserve all kinds of documents. So far, the Internet Archive has digitized more than 3 million books, and it still scans around 1,000 books per day.

In both its strategies and its business model, the Internet Archive seems to appeal to the ‘hacker ethics’ described by the Finnish philosopher Pekka Himanen through the hacker ethics of work, the hacker ethics of money, and the hacker ethics of the network:

1. For Himanen, the ‘hacker ethics of work’ describes passion in work, the freedom to organize one’s time, and creativity, which is the combination of the first two. For hackers, working with passion is the final purpose. This means that financial motivations are not of primary importance: they are just a result of the work.
2. Benefits are measured in both passionate effort and social value, both features of the ‘hacker ethics of money’. This means that the work of a hacker must be recognized by the hacker community and that it must be accessible and open to everyone. This represents the vision of an open and horizontal model of knowledge, similar to the one at the Academy of Plato, which was based on a continuous and critical debate to reach scientific truths, even though many could argue that, in general, hacker culture is still not that open and horizontal (e.g. hostility to non-white male identities). However, projects such as the Internet Archive seem to follow this model of shared knowledge for the sake of science.
3. Finally, the ‘hacker ethics of the network’ refers to the relationship between hackers and the Internet. On the one hand, from this relationship stems the value of free activity, which indicates the act of defending total freedom of expression on the internet. On the other hand, hackers also worry about involving everyone in the digital community and making the Network free and accessible to everybody (crypto parties were born as a result of this idea). This value is known as ‘social responsibility’.

The Internet Archive seems to draw on three strategies in particular: participation, anonymity by default, and a non-profit business model. By doing so, the Internet Archive is defending the freedom of information, a fundamental right that needs protection in both the offline and the online world. To make sure that there is freedom of information, it is necessary to involve as many people as possible in the sharing of knowledge. Anyone can read and upload material to the website, thereby taking part in building a global digital library. Secondly, freedom of information, freedom of expression, and the right to anonymity are built into the Internet Archive by design and align with hacker ethics values. The Internet Archive does not track its users: it does not keep the Internet Protocol (IP) addresses of readers, and it makes use of a secure web protocol (https). The website does not use data from users, not even for marketing: being a non-profit library, the Internet Archive is funded by donations instead of advertising or the collection and selling of personal data.

By extension, it could be argued that, with these anonymity and participatory practices, the Internet Archive often opposes larger datafication processes. The datafication of society, recently observed in the rise of platforms and apps, implies that our financial habits, personal communications, movements, social networks, and political and religious orientations are translated into data. The availability of significant amounts of data raises questions concerning their usage by governments and corporations. Their access to Big Data might have a negative effect on both individuals and communities, by increasingly turning citizens into consumers, thereby sustaining a certain form of control. Fundamental rights such as freedom of speech, freedom of association, and the right to privacy seem increasingly threatened by the collection and analysis of large data sets.

Guided by these values, hacktivists often criticize the use of technology and Big Data, as it would go against their ethics, and try to spread hacker ethics through different kinds of action. Among the heterogeneity of hacktivist actions, the Internet Archive represents a good example of hacker ethics, as well as a powerful project born to defend freedom of knowledge and digital rights. These kinds of initiatives are relevant when researching hacktivism and datafication because they illustrate how hacker ethics may spread awareness of issues of datafication.

References

Himanen, P. (2002). Hacker Ethic and the Spirit of the Information Society. Prologue by Linus Torvalds. Destino.

Hintz, A., Dencik, L., & Wahl-Jorgensen, K. (2017). Digital Citizenship and Surveillance| Digital Citizenship and Surveillance Society—Introduction. International Journal of Communication, 11, 9.

Internet Archive: Digital Library of Free Books, Movies, Music & Wayback Machine. (n.d.). Retrieved 22 February 2018, from https://archive.org/index.php

Milan, S. & Atton, C. (2015). Hacktivism as a radical media practice. Routledge companion to alternative and community media, 550-560.

Noah C.N. Hampson (2012), ‘Hacktivism: A New Breed of Protest in a Networked World’ 35 B.C. Int’l & Comp. L. Rev. 511, http://lawdigitalcommons.bc.edu/iclr/vol35/iss2/6 accessed 05/02/2018

 

About Silvia

Silvia Semenzin is a DATACTIVE research associate and PhD student in Sociology at the University of Milan. She is currently researching hacktivism and hacker ethics and is interested in the influence that digital technologies have on political action, public debate and citizens’ mobilization as instruments for democracy.

Big Data from the South: The beginning of a conversation we must have

by Stefania Milan and Emiliano Treré

On July 15, 2017 in Cartagena, Colombia, about fifty academics and activists got together to imagine what ‘Big Data from the South’ would look like. Organized with few resources and much enthusiasm by the two of us* and preceding the annual IAMCR conference in Cartagena, the one-day event was designed to make the move ‘from media to mediations, from datafication to data activism’, as the title suggested. We thought that this beautiful gem of the Caribe, at the geographical margins of a country that has recently started to invent a peaceful future for itself, would be the most appropriate place to pioneer a much-needed conversation about a series of questions that have kept both of us busy over the past few years: What would datafication look like seen… ‘upside down’? What questions would we ask? What concepts, theories, and methods would we embrace or have to devise? What do we miss if we stick to the mainstream, Western perspective(s)? In this post, we resume the conversation we prompted in Cartagena—looking forward.

Datafication and its discontents: Beyond the West?
Datafication has dramatically altered the way we understand the world around us. Understanding so-called ‘big data’ means exploring the profound consequences of the computational turn for mainstream epistemology, ontology and ethics, as well as the limitations, errors and biases that affect the gathering of, interpretation of, and access to information on such a large scale. While scholars of various disciplines have started to critically explore the implications of datafication across the social, cultural and political domains, much of this critical scholarship has emerged along a Western axis ideally connecting Silicon Valley, Cambridge (MA) and Northern Europe. We believe something is missing in this conversation.

We already know a lot, though. The emerging, composite field of critical data studies, at the intersection of the social sciences and the humanities, calls our attention to the potential inequality, discrimination and exclusion harbored by the mechanisms of big data (Gangadharan 2012; Dalton, Taylor, & Thatcher 2016). It reminds us that big data is not merely a technological issue or the flywheel of knowledge, innovation and change, but a ‘mythology’ that we ought to interrogate and critically engage with (e.g., boyd & Crawford 2012; Mosco 2014; Tufekci 2014; van Dijck 2014). It tells us that, although tinted with the narratives of positivism and modernization and widely praised for their revolutionary possibilities in terms of, e.g., citizen participation, big data are not without risks and threats, as opaque regimes of population management and control have taken center stage (see Andrejevic 2012; Turow 2012; Beer & Burrows 2013; Gillespie 2014; Elmer, Langlois and Redden 2015). The expansion of data mining practices by both corporations and states gives rise to critical questions about systematic surveillance and privacy invasion (Lyon 2014; Zuboff 2016; Dencik, Hintz and Cable 2016). Critical questions arise also from the ways in which academia and businesses alike relate to big data and datafication: colleagues have questioned the ‘bigness’ of contemporary approaches to data (Kitchin and Lauriault 2014), and encouraged us to pay attention to bottom-up practices (Couldry & Powell 2014) and forms of everyday critical engagement with data (Kennedy and Hill 2017).

But how does datafication unfold in countries with fragile democracies, flimsy economies, impending poverty? Is our conceptual and methodological toolbox able to capture and understand the dark developments and the amazing creativity emerging at the periphery of the empire? We call for the discontents of datafication to join forces to address these concerns—and to generate more critical questions.

From the South(s), moving beyond ‘data universalism’… and the universalism of social theory
We believe that we need to systemically and systematically engage in a dialogue with traditions, epistemologies and experiences that deconstruct the dominance of Western approaches to datafication that fail to recognize the plurality, the diversity, and the cultural richness of the South(s) (see Herrera, Sierra & Del Valle 2016). Like Anita Say Chan (2013), we, too, feel that too many critical approaches are still relying on a kind of ‘digital universalism’ that tends to assimilate the heterogeneity of diverse contexts and to gloss over differences and cultural specificities. We would like to contribute to the ongoing conversations about the urgency of a ‘Southern Theory’ that ‘questions universalism in the field of social theory’. We join Payal Arora in claiming that ‘we need concerted and sustained scholarship on the role and impact of big data on the Global South’ (2016: 1693)—and we go one step further, enlarging the picture to include all the Souths in the plural that inhabit our increasingly complex universe.

As Arora (2016) and Udupa (2015) remind us, while the majority of the world’s population resides outside the West, we continue to frame key debates on democracy and surveillance—and the associated demands for alternative models and practices—by means of Western concerns, contexts, user behavior patterns, and theories. While recognizing the key contributions of many of our amazing colleagues (and forgive us if, for the sake of brevity, we haven’t included you all), we feel that something is missing in the conversation, and that only a collective effort across disciplines, idioms, and research areas can help us re-consider big data from the South. Our definition of the South is a flexible and expansive one, inspired by the writings of sociologist Boaventura de Sousa Santos (2007 and 2014), who was probably the first to write about the emergence and urgency of epistemologies from the South against the ‘epistemicide’ of neoliberalism. Firstly, there is the geographical South, i.e. the people, activities, politics and technologies arising literally at the margins of the world as captured in the Mercator map. Secondly, and most importantly, our South is a place of (and a proxy for) resistance, subversion and creativity. We can find countless Souths also in the Global North, wherever people resist injustice and fight for better life conditions against the impending ‘data capitalism’.

Our reflections on ‘big data from the South’ fit within—and hope to feed—the broader process of epistemological re-positioning of the social sciences. We believe we cannot avoid measuring the sociotechnical dynamics of datafication against ‘the historical processes of dispossession, enslavement, appropriation and extraction […] central to the emergence of the modern world’ (Bhambra and de Sousa Santos 2017: 9), at the risk of making the same mistakes all over again; the same is to be said for our disciplinary toolbox. As Bhambra and de Sousa Santos acutely observed, ‘if the injustices of the past continue into the present and are in need of repair (and reparation), that reparative work must also be extended to the disciplinary structure that obscure as much as illuminate the path ahead’ (ibid.).

What would a Southern theory of big data entail, then?

We take up the challenge from Say Chan (2013), who reminded us that there are more, and other, ways than the mainstream to imagine the relation between technology and people. Here we share with you our growing list of the sine-qua-non conditions for thinking datafication from the South. The list is a work in progress, and comes with an explicit invitation to join us in this exercise.

  • Bring agency to the center of the observation of both bottom-up and top-down mechanisms and practices. Taking inspiration from Barbero (1987), we should focus on resistance and the heterogeneity of practices as they relate to datafication—not solely on data and datafication per se.
  • Decolonize our thinking, situating the post-Snowden dynamics of data capitalism in the specificities of the South. While many elements will be the same, implementations, understandings and consequences might differ. What we already know should not be taken for granted but critically unraveled.
  • Pay attention to the ‘alternative’: alternative practices, alternative imaginaries, alternative epistemologies, alternative methodologies in relation to the adoption, use, and appropriation of big data. Prepare for the unexpected and the unexplored. To be sure: here alternatives are not necessarily subaltern or better, they are simply distinct.
  • Take infrastructure seriously, unpacking the complex flows (of relationships, data, power, money, and so on) it harbors, generates, shapes and promotes (thanks Anders Fagerjord for the inspiration to think in flows). Situate notions like the platform in the lived experience of distinct geographies.
  • Connect the critical epistemologies of emerging social worlds with the critical politics of social change (thanks Nick Couldry for sharing his thoughts on this matter in Cartagena—everyone watch out for his new book with Ulisses Mejias on ‘Data, Capitalism, and Decolonizing the Internet’).
  • Be mindful and critical of Western-centric concepts and methods. While they do offer a key point of departure, they cannot be taken by default as the (sole) point of arrival when approaching big data from the South.
  • At the same time, be critical of Southern understandings and practices as well, to avoid falling into the assumption that they are inherently different, alternative, or even better and purer forms of knowledge.
  • Be open to the dialogue, in whatever direction it takes us: North-South, South-South, South-North. In the face of much complexity, we can only advance together, entering in a conversation with different epistemologies and approaches.

That said, we would like to encourage our colleagues to embrace more explicitly a political economy perspective, which can help us take a critical look at the multiple forms of domination that reproduce and perpetuate inequality, discrimination and injustice at all levels. We also advocate historical approaches able to trace the current unfolding of datafication back to its roots in colonial practices, where applicable (see Arora 2016). We suggest engaging with feminist critiques and with ideas around the decolonization of technology. Finally, we like to think of this type of inquiry as inherently ‘engaged’: while adopting the gold standards of solid scientific research, ‘engaged research’ might take sides and, most importantly, is designed to make a difference to the communities we come close to (Milan 2010). Such an approach goes hand in hand with the promotion of critical literacy, whereby academics too look for ways of making information accessible, translating it into understandable and actionable material, with a view to bringing more people to fight for their digital rights.

One example of approaching big data from the South
To make our call more concrete, we offer the example of our own work as one of the many possible ways of turning ‘upside down’ what we know about datafication. Emiliano has been studying the algorithmic manufacturing of consent and the hindering of online dissidence; his work outlines how creative and innovative forms of algorithmic resistance are being forged in Latin America and beyond (Treré 2016). Stefania and her team have been looking at how grassroots data activism (Milan & Gutierrez 2015; Milan 2017), new data epistemologies (Milan and van der Velden 2016) and practices of resistance to massive data collection emerge at the fringes of ‘surveillance capitalism’, including, for example, in the Amazon region (Gutierrez and Milan 2017). But we need, collectively (with you, that is), to make a leap forward and also rethink theory, beyond case studies and contingent examples. We are not alone in this effort, as many giants can lend us their shoulders. To name but one who inspired our event in Cartagena: almost thirty years ago, the Spanish-Colombian communication scholar Jesús Martín-Barbero urged us to move ‘from media to mediations’, that is, from functionalist media-centered analyses to the exploration of everyday practices of media appropriation through which social actors enact resistance to domination and hegemony (1987). The powerful move he triggered was inherently political: it meant refocusing our gaze from media institutions towards people and their heterogeneous cultures, looking at how communication was shaped in bars, gyms, markets, squares, families, and the like. Following Martín-Barbero, our work has been oriented to making the move from datafication to data activism, examining the diverse ways in which citizens and organized civil society in the South(s) engage in bottom-up data practices for social change and resist a datafication process that increases oppression and inequality.

Much more remains to be done, and many conversations to be had. Together with Anita Say Chan, we are launching ‘Big Data from the South’, a network of scholars and practitioners interested in bringing this multidisciplinary and multi-language dialogue forward. Join us!

Join the mailing list
Read the call for ‘Big Data from the South’ (Cartagena, 15 July 2017)
Check out the dedicated blog. Stay tuned: we have even commissioned a logo! We plan to soon start publishing guest posts on the topic, in any language people might want to write them in. We are looking for your ideas and provocations: to contribute, please shoot an email to TrereE@cardiff.ac.uk and s.milan@uva.nl.

About the authors (and situating white privilege)
We are interdisciplinary scholars constantly moving between the study of society and its tech imaginations and tactics; movement, change and contrast have been at the core of our scholarship and of our identity. Southern Europeans who migrated North on account of the eternal malaise of the Italian research system, we like to see ourselves as engaged scholars and to muddy the waters between disciplines and methods. Stefania is Associate Professor of New Media and Digital Cultures at the University of Amsterdam, also affiliated with the University of Oslo, and the Principal Investigator of the DATACTIVE project. Emiliano is Lecturer in the School of Journalism, Media and Cultural Studies at Cardiff University, where he is also a member of the Data Justice Lab, and a Research Fellow at the COSMOS Center for Social Movement Studies (Italy). Previously, he was an Associate Professor at the Autonomous University of Querétaro, Mexico. Both of us have conducted research and worked in various capacities in a number of Southern contexts. While we write from a privileged observation point, various Souths have crossed our professional and personal lives, instilling curiosity, imposing challenges and occasional suffering, and forcing us to ask ourselves critical questions. We don’t have many answers. Rather, we want this blog post to be the start of a conversation and of an open, collaborative network where different Souths can dialogue, learn and enrich each other.

*The event was made possible by funding from DATACTIVE/European Research Council and by the generous engagement of Guillén Torres (DATACTIVE) and Fundación Karisma (Bogotá). We also wish to thank the local organizing committee of IAMCR for their hospitality (and Amparo Cadavid from UNIMINUTO in particular).

[blog] Hopes and Fears at SHA2017

Authors: Davide & Jeroen

A few weeks ago, a contingent of the DATACTIVE team attended SHA (Still Hacking Anyway), the periodic worldwide hacker camp hosted in the Netherlands. The great variety of people hanging around included IT pen-testers, system administrators, activists, developers, advocacy groups, journalists, and, of course, hackers. Around 3,300 attendees, 100 gigabits (!) of bandwidth, 320 talks, mixed with lights, music, artifacts of all kinds, and a fair amount of drinks, made the gathering a concrete embodiment of the hackers’ ethos of ‘work&play’.

We had the chance to attend dozens of talks and debates; to participate in the activity of the Technopolitics village TSJA; to interview dozens of participants; to give our own talk on mailing list analysis; to engage in chats, activities, and drinks with plenty of people.

Eager to trigger discussion, we asked ourselves: with this great group of people, why not conduct a small informal survey in the evening hours, exploiting the generally relaxed atmosphere characterizing this moment of the day?

Assisted by a bottle of vodka (to lure the more reluctant into the discussion ;-), we walked around to harvest people’s “hopes and fears” related to the inexorable process of datafication. Following Jonathan Gray, we understand datafication as “[a way] of seeing and engaging with the world by means of digital data” (2016). Its political relevance derives from the fact that “data can also actively participate in the shaping of the world around us” (ibid.). Activists, advocates, techies, hackers and interested citizens are more and more concerned with both the threats and the opportunities that the transformation of every aspect of reality into data brings along. What do people fear the most? What is (if any) their biggest hope?

It is interesting to note that quite often people’s first reaction was puzzlement: ‘What exactly do you mean?’ and ‘Isn’t there a neutral-answer option?’ were frequent instinctive responses. However, while not yet completely fleshed out for the purposes of a poll, the question worked well as a trigger for small discussions, and in many cases people would then start to recognize quite a few fears and hopes of their own, engaging in animated conversations with us.

The fears of SHA participants revolve largely around the general topic of control, and around prediction mechanisms in relation to algorithms. Pessimistic answers include the recognition that “[those] who control communication (infrastructure) control society”, a keen concern about “[people] predicting the wrong answer (or the wrong things)”, the fear of being categorized, and of a “lack of control over data collection”. The hopes, instead, largely insisted on how blockchain technologies, open data, and hacking might contribute to a more decentralized (and thus controllable-from-below) world.

It must be said that, quite unexpectedly, the hopes outnumbered the fears. To be fair, whereas blunt optimism doesn’t seem to find roots in this community, we did register some hopeless reactions, such as the respondent whose only hope is that we run out of metal on our planet (and whose fear is that the mining industry might outsource to Mars…).

Overall, the theme of (lack of) control over people’s own lives seems to be the red thread. Data (as Kranzberg’s law on technology reminds us) are neither good nor bad in themselves, but not neutral either, since who gains control over them, when, how, and for what purposes determines their oppressive or liberating potential. In other words, ‘big data’ is a political issue, and people at SHA are well aware of that.

To conclude, two methodological notes. The term ‘datafication’, though sometimes obscure to respondents and overly general for the quite structured question, worked well as a floating signifier to draw people into discussion of the topic. The vodka, instead, would have worked better with some orange juice next to it. Lesson learned.

 

If you want to look them up yourself, here is a transcript of both the fears and the hopes:

Fears on datafication:

  • who controls the communication (infrastructure) controls society
  • centralization will limit knowledge and sharing until control over the population is complete
  • lack of control over data collection
  • fascism
  • even if algorithms are neutral, the data they work with are biased
  • to be categorized → filter bubble
  • predict the wrong answer (or the wrong things)
  • self-fulfilling prophecy as a service
  • advancement of face recognition techniques
  • people do not question algorithms
  • they start mining metals on Mars
  • genocide

Hopes on datafication:

  • we run out of metal atoms to share all the data
  • 42
  • blockchain as a technology of socialism
  • societies move in waves like everything in life. Future will require revolution
  • the democratization of mapping data
  • balancing power through open data
  • people learn to question algorithms like they do with politicians
  • new generations will be more aware and hack more
  • It will prove mankind is hopeless
  • helps with daily life
  • that the data is used to solve problems of society
  • it’s just a hype
  • they (doing it) notice they are themselves getting fucked by categorization and negative impact on their lives
  • decentralization through blockchain tech will give us the freedom to reclaim control over communication infrastructures

 

References

Gray, Jonathan (2016). ‘Datafication and democracy: Recalibrating digital information systems to address broader societal interests’. Juncture, 23(3).

[blog] Big Data and Civil Society: Researching the researchers

In March-May 2017, I had the opportunity to join the DATACTIVE project as a research trainee at the Media Studies Department of the University of Amsterdam. I first met the DATACTIVE team during the 2015 Winter School of the Digital Methods Initiative (also at the Media Studies Department, UvA). At the time, we worked on tracing social networks through leaked files, and I very much appreciated the methods they use and the great care they put into privacy considerations when dealing with people’s data. For these reasons, when I got the opportunity to enroll in a research traineeship abroad as part of my PhD project, I decided to go back to Amsterdam.

My research activities within DATACTIVE focused primarily on monitoring and reviewing the scope of, and methods used by, other research labs dealing with big data and civil society. More specifically, the aim of this research was to try and understand in what ways DATACTIVE can learn from the research projects in question. This task lies at the exact intersection of the DATACTIVE research goals and my own skills and interests. My background bridges political communication and Big Data: I completed a master’s in Big Data Analytics & Social Mining at the University of Pisa only some weeks before traveling to Amsterdam.

I analyzed 23 projects from seven research labs, exploring a multitude of interesting methodologies and theoretical frameworks. It was sometimes challenging for me to deal with the many different aims, methods, and points of view represented in these projects, but I was able to familiarize myself with tools and methods used in other research labs. In what follows, I provide an overview of the most interesting findings, however hard it might be to do justice to all of them!

What have I studied?

1. Thanks to the Share Lab projects (the Share Foundation, located in Serbia), I learned about the importance of metadata, and how detailed information about people can be retrieved just by exploring fragments of data, like mail headers or internet browsing histories (Metadata Investigation: Inside Hacking Team; Browsing Histories: Metadata Explorations).

2. Another Share Lab project showed how Facebook algorithms work to match people with ads (Human Data Banks and Algorithmic Labour), and how an electoral campaign can be manipulated and dominated on the web (Mapping and quantifying political information warfare).

3. Analyzing projects developed with the CorText platform (set up by LISIS, a research unit located at Université Paris-Est Marne-la-Vallée) showed how text can be processed freely and easily to perform more complex analyses: for instance, semantic network analysis across a corpus of scientific articles (Textdrill), topic extraction and clustering from newspaper articles (Pulseweb), or geographical clustering through text analysis (GeoClust).

4. Forensic Architecture (Goldsmiths, University of London) exemplifies how videos, photos, interviews and other kinds of (social) data retrieved on the web can be used to reconstruct the “truth” in hard-to-reach war scenarios, such as the Al-Jinah Mosque case (an architectural analysis of a building destroyed in a US airstrike in Syria on March 16th 2017), MSF Supported Hospital (in which researchers, at MSF’s request, tried to establish which national air force, Russian or Syrian, carried out an airstrike), and Rafah: Black Friday, in which Forensic Architecture collaborated with Amnesty International to reconstruct war operations in Gaza during 1-4 August 2014. It was emotionally challenging to read the reports while keeping an academic distance. This was the case, for instance, with the reconstruction of the “left to die boat” case, a vessel left to drift in the middle of the Mediterranean Sea in which sixty-three of the seventy-two migrants on board lost their lives, and with the report on the Saydnaya prison in Syria, where witnesses reported abuse and torture. These are only some examples of what I encountered during my research.
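Share Lab’s point about mail headers (item 1 above) can be made concrete with a toy sketch. This is not Share Lab’s actual pipeline, and the message below is invented, but Python’s standard `email` module is enough to show how much a header block alone reveals about who talked to whom, when, and through which servers, without ever reading the message body:

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

# An invented raw message, headers only (body omitted).
raw = """From: alice@example.org
To: bob@example.net
Date: Mon, 06 Mar 2017 10:15:00 +0000
Received: from mail.example.org by mx.example.net; Mon, 06 Mar 2017 10:15:02 +0000
Subject: quarterly figures

"""

msg = message_from_string(raw)
metadata = {
    "sender": msg["From"],
    "recipient": msg["To"],
    "sent_at": parsedate_to_datetime(msg["Date"]).isoformat(),
    "relays": msg.get_all("Received"),  # every hop the message passed through
}
print(metadata["sender"], "->", metadata["recipient"], "at", metadata["sent_at"])
```

Scale this up over an entire leaked mailbox and the social graph of an organization emerges from metadata alone, which is precisely what makes the Share Lab investigations so striking.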
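The kind of text clustering CorText automates (item 3 above) can also be sketched in a deliberately simplified way. The corpus, the similarity threshold and the greedy grouping rule below are all invented for illustration; CorText’s actual methods are far more sophisticated, but the core idea of weighting terms and grouping similar documents is the same:

```python
import math
from collections import Counter

# Toy corpus: two tiny "topics", two documents each.
docs = [
    "open data and civic participation",
    "civic participation through open data portals",
    "airstrike reconstruction from satellite imagery",
    "satellite imagery analysis of an airstrike",
]

tokenized = [d.split() for d in docs]
df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
n = len(docs)

def tfidf(tokens):
    """Weight each term by frequency, discounted by how common it is overall."""
    tf = Counter(tokens)
    return {t: tf[t] * math.log(n / df[t]) for t in tf if df[t] < n}

vectors = [tfidf(doc) for doc in tokenized]

def cosine(a, b):
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    norm = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0

# Greedy single-link grouping: each document joins the first cluster
# containing a sufficiently similar document, otherwise starts its own.
clusters = []
for i, v in enumerate(vectors):
    for c in clusters:
        if any(cosine(v, vectors[j]) > 0.2 for j in c):
            c.append(i)
            break
    else:
        clusters.append([i])

print(clusters)  # [[0, 1], [2, 3]]: the two topics separate cleanly
```

On real newspaper corpora one would of course use stemming, stop-word removal and a proper clustering algorithm, but even this sketch shows why shared vocabulary is enough to pull thematically related texts together.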

But this was not a solitary research endeavor. Being involved in all the DATACTIVE discussions, meetings, conferences, and reading groups over a period of three months shed new light on qualitative research in the context of “data activism”. For example, we discussed how to code activists’ interviews in terms of research aims and coding methods.

Thanks to the DATACTIVE experience and to the analysis of some of its projects (i.e. The Snowden Disclosures, Technical Standards, and the Making of Surveillance Infrastructures; Marginalisation, Activism and the Flip Sides of Digital Technologies), I learned the importance of taking care of personal data, and of paying more attention to the multiple sides of technologies, which we often treat as a black box. I have also reflected extensively on how digital technologies could help a broad range of research activities, from simple tasks to complex “counter” analyses that allow us to understand how the global financial system works (Corpnet, University of Amsterdam) or how a more equal and collaborative economy could be developed (Dimmons, Internet Interdisciplinary Institute, Open University of Catalonia). I am also convinced that all this research and its outputs should be known and shared beyond academia, not only among scholars, for their ability to speak to the world we live in.

I think that the experience and knowledge gained in this research traineeship will definitely feed into my PhD work: entering such a huge field of research has indeed broadened my own perspective on political communication and Big Data. Finally, I really appreciated being part of the DATACTIVE research team and being exposed to their collaborative way of working, and I really enjoyed the cultural and life experience in Amsterdam. I hope to come back.

See you soon.

about Antonio Martella

Antonio is a PhD student at the Political Science Department of the University of Pisa. His research project focuses on political leaders, populism, and social media. He graduated in Business communication and human resource policy and holds a postgraduate master’s in “Big Data Analytics & Social Mining” from the University of Pisa, together with the CNR of Pisa.

Featured image: Edward Snowden WIRED magazine cover on news stand 8/2014 by Mike Mozart of TheToyChannel

[blog] Techno-Galactic Software Observatory

Author: Lonneke van der Velden

 

In early June, Becky and I participated in the Techno-Galactic Software Observatory, an event organised by Constant, a feminist art and technology collective in Brussels. It was a great event, in which theoretical insights from the philosophy of technology and software studies were combined with practical interventions, ending in an exhibition.

The event aimed to critically interrogate all kinds of assumptions about software and software knowledge. We discussed how software relates to time, spatializations, perspectives, and the hierarchies implied in ways of looking. The last day of the event was a ‘walk-in clinic’ in which visitors could get ‘software-critique as service’ at several ‘stations’.

The project I participated in was file therapy. Departing from the Unix philosophy that everything is a file (a program is a file, an instruction is a file, etc.), our desk would take people’s problems and understand them in their capacity as files. Next, we would transform these files into other file types: visual data or music files.

We would not offer solutions. The idea was that our visitors, by being confronted with their newly visualised or sonified file, could start developing a new relationship to it. For example, one person had a problem with her PhD file: it was a big Word file full of references and therefore difficult to handle, and working in it had become a hassle. But listening to the transformed file is rather meditative. The other station in the room would criticise the reductionist ‘file-formatted’ vision of the world, and in that way we set up a dialogue about how computers format our lives.
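A rough idea of how such a sonification might work (a hypothetical sketch, not Constant’s actual setup): read a file’s raw bytes and write them out as audio samples, so the troublesome document can be listened to instead of read. Python’s standard `wave` module is enough:

```python
import wave

def sonify(in_path: str, out_path: str, rate: int = 8000) -> None:
    """Turn any file's raw bytes into an 8-bit mono WAV you can listen to."""
    data = open(in_path, "rb").read()
    with wave.open(out_path, "wb") as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(1)      # 1 byte per sample: each byte becomes an amplitude
        wav.setframerate(rate)   # a low sample rate stretches the file out in time
        wav.writeframes(data)

# e.g. sonify("thesis.docx", "thesis.wav") lets you *hear* a PhD file
```

Dense runs of similar bytes come out as tones and textures, so different file formats really do sound different, which is exactly the kind of estrangement the therapy desk played with.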

 


A comparison of the various problematic files

The observatory was a great event and a learning experience at the same time. Please read other people’s experiences too 🙂

 

About constant

Constant is a non-profit, artist-run organisation based in Brussels since 1997 and active in the fields of art, media and technology.
Constant develops, investigates and experiments. Constant departs from feminisms, copyleft, Free/Libre + Open Source Software. Constant loves collective digital artistic practices. Constant organises transdisciplinary worksessions. Constant creates installations, publications and exchanges. Constant collaborates with artists, activists, programmers, academics, designers. Constant is active archives, poetic algorithms, body and software, books with an attitude, cqrrelations, counter cartographies, situated publishing, e-traces, extitutional networks, interstitial work, libre graphics, performative protocols, relearning, discursive infrastructures, hackable devices.