
DATACTIVE Speaker Series: Can Data be Decolonized?, December 4

DATACTIVE is proud to announce a talk by Nick Couldry (London School of Economics and Political Science) and Ulises A. Mejias (State University of New York at Oswego) in the framework of the DATACTIVE Speaker Series and on the occasion of the Big Data from the South workshop. The talk, entitled “Can Data be Decolonized? Data Relations and the Emerging Social Order of Capitalism”, will take place on December 4 at 3pm at the University Library (Potgieterzaal). You will find the blurb below.

Can Data be Decolonized? Data Relations and the Emerging Social Order of Capitalism
A talk by Nick Couldry (London School of Economics and Political Science) and Ulises A. Mejias (State University of New York at Oswego)

This talk (which draws on the authors’ forthcoming book from Stanford University Press, The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism) examines how contemporary practices of data extraction and processing replicate colonial modes of exploitation. Couldry and Mejias present the concept of “data colonialism” as a tool to analyze emerging forms of political control and economic dispossession. To that effect, their analysis engages the disciplines of critical political economy, sociology of media, and postcolonial science and technology studies to trace continuities from colonialism’s historic appropriation of territories and material resources to the datafication of everyday life today. While the modes, intensities, scales and contexts of dispossession have changed, the underlying function remains the same: to acquire resources from which economic value can be extracted. Just as historic colonialism paved the way for industrial capitalism, this phase of colonialism prepares the way for a new economic order. In this context, the authors analyze the ideologies and rationalities through which “data relations” (social relations conducted and organized via data processes) contribute to the capitalization of human life. Their findings hold important implications for how we study the internet, and how we may advocate for the decolonization of data in the future.


Why we won’t be at APC 2018

In October 2018, the Amsterdam Privacy Conference (APC) will be back at the University of Amsterdam. Two DATACTIVE project team members, Stefania (Principal Investigator), and Becky (PhD candidate), enthusiastically supported the conference as coordinators of the ‘Digital Society and Surveillance’ theme. The Data Justice Lab at Cardiff University submitted a panel proposal, which was successfully included. Regretfully, neither will take part in the conference: DATACTIVE and the Data Justice Lab have decided to withdraw over the participation of the US-based software company Palantir as one of the APC’s Platinum Sponsors.

Our decision to withdraw stems from an active refusal to legitimize companies accused of enabling human rights abuses, and a concern with the lack of transparency surrounding sponsorship.

Palantir is a company specializing in big data analytics, which develops technologies for the military, law enforcement and border control. The deployment of Palantir’s technologies has raised widespread concern among civil liberties and human rights advocates. Reporting shows that, in the United States, Palantir has played an important role in enabling the efforts of ICE (Immigration and Customs Enforcement) to identify, detain, and deport undocumented immigrants, refugees, and asylum seekers. This has resulted in the indefinite detention of thousands of children who have been separated from their parents. This indefensible policy has come under strong criticism from the United Nations and prompted an alliance of technology workers and affected communities to call – so far, unsuccessfully – for Palantir to cancel its contracts with ICE.

We feel that providing Palantir with a platform, as a sponsor of a prominent academic conference on privacy, significantly undermines efforts to resist the deployment of military-grade surveillance against migrants and marginalized communities already affected by abusive policing. 

Because we have organized conferences ourselves, we believe transparency in sponsorship agreements is key. While we praise the APC organizing committee for committing to full transparency, we were not informed of sponsorship agreements until the very last minute. The APC Sponsors page, in addition, was only populated after the participant registration deadline. As conference coordinators and prospective participants, we feel that we were not given the chance to make an informed choice about our contribution.

Sponsorship concerns are not a new issue: the very same controversy, around the involvement of this very same company (as well as others), emerged during the 2015 edition of APC. Though we acknowledge the complexity of corporate sponsorship, we note that other prominent tech policy conferences, such as the Computers, Privacy and Data Protection (CPDP) conference, have recently stopped accepting sponsorship from Palantir. We thus believe this is a good moment for a larger discussion about how conferences should be organized in the future.

Academia—and especially publicly-funded universities—need to consider their role in efforts to neutralize or undermine human rights concerns. Such considerations are particularly pertinent in the context of what has been described as the increased neoliberalization of higher education, in which there is significant pressure to attract and pursue funding from different sources. As academics and as citizens, we will increasingly be asked to make choices of this kind. Hence, we believe it is time to set down a clear set of principles for sponsorship going forward.

 

Amsterdam and Cardiff, 19 September 2018

Stefania Milan and Becky Kazansky (DATACTIVE) & Lina Dencik, Arne Hintz, Joanna Redden, Fieke Jansen (Data Justice Lab)

By London School of Economics Library and Political Science - https://www.flickr.com/photos/lselibrary/3925726761/in/set-72157622828540200/, No restrictions, https://commons.wikimedia.org/w/index.php?curid=10180000

Data Colonialism – the first article of the Special Issue on “Big Data from the South” is out

Photo by London School of Economics Library and Political Science

Nick Couldry and Ulises A. Mejias re-frame the Big Data from the South debate within the context of modern-day colonialism: data colonialism, an alarming stage in which human life is “appropriated through data” and, eventually, “capitalized without limit”.

This essay marks the beginning of a series of articles in the special issue on Big Data from the South, edited by Stefania Milan and Emiliano Treré and published in the journal Television & New Media. The article will be freely accessible for the first month, so we encourage you to put it high up on your to-read list.

The special issue promises interesting takes and approaches from renowned scholars and experts in the field, such as Angela Daly and Monique Mann, Payal Arora, Stefania Milan and Emiliano Treré, Jean-Marie Chenou and Carolina Cepeda, Paola Ricaurte Quijano, Jacobo Najera and Jesús Robles Maloof, with a special commentary by Anita Say Chan. Stay tuned for our announcements of these articles as they come out.

Ration Shop, Trivandrum, India. Photo by the author

[BigDataSur] India’s Aadhaar: The datafication of anti-poverty programmes and its implications

By Silvia Masiero, Loughborough University

The notion of datafication implies rendering existing objects, actions and processes into data. Widely studied in the field of business intelligence, datafication is known to restructure consumer behaviour and the functioning of markets in multiple ways. But a less-widely researched aspect pertains to the datafication of public welfare and social protection programmes, on which the livelihoods of many poor and vulnerable people worldwide are based. The field of information and communication technology for development (ICT4D), which for more than thirty years has focused on the roles of informatics in development processes, is coming to realize the growing importance of datafication in the enactment of social policies.

Datafication acquires a particular meaning when referring to anti-poverty programmes, which are social protection schemes designed specifically for the poor. In such schemes, what is converted into machine-readable data is in the first place the population of entitled users. This restructures two core functions of anti-poverty schemes: first, the recognition of beneficiaries, automating the process that distinguishes entitled individuals and households from non-entitled ones; second, the correct assignment of entitlements, based on the availability of machine-readable data for their determination. While both functions were previously paper-based or only partially digitized, datafication makes it possible to automate them, with a view to infusing greater effectiveness and accountability into programme design.

Against this backdrop, my research focuses on two concomitant aspects: the effects of datafication on the architecture of anti-poverty programmes, and its consequences for the entitlements that beneficiaries receive through them. My PhD thesis focused on the digitalization of the Public Distribution System (PDS), India’s largest food security programme, which centres on distributing primary necessity items (mainly rice, wheat, sugar and kerosene) at subsidized prices to the nation’s poor. The back-end digitalization of the scheme, started at the state level in the early 2000s, is now culminating in the datafication of the programme through the Unique Identity Project (Aadhaar), an identity scheme that constitutes the biggest biometric identification database in the world. Built with the declared purpose of facilitating the socioeconomic inclusion of India’s poor, Aadhaar provides all enrolees with a 12-digit number linked to their biometric details, to make sure, among other things, that each enrolee obtains their social benefits through a simple operation of biometric recognition.

Datafication contributes to a deep transformation of anti-poverty programmes, with mixed effects on programme architecture and the entitlements of beneficiaries

My data collection on the datafied PDS took place in the two southern Indian states of Kerala and Karnataka, and also includes a review of the state-level cases of Aadhaar-based PDS currently operating in India. Over the years, my research has developed three lines of reflection, which I summarize below.

First, datafication is constructed by the Indian central government as a tool for simplifying access and improving users’ capability to obtain their entitlements under existing schemes. The Aadhaar-based PDS is indeed designed to reduce both the inclusion error, meaning access to the programme by non-entitled people, and the exclusion error (Swaminathan 2002), meaning the denial of the subsidy to the entitled. In doing so, the biometric system traces sales from PDS ration shops to reduce diversion by the so-called rice mafia, an illegal network through which foodgrains meant for the poor are diverted to the market for higher margins. What emerges from my research is a strong governmental narrative portraying Aadhaar as a problem-solver for the PDS: government officials depict the technology as a simplifier of the existing system, enabling a better and more accountable functioning of a leakage-prone anti-poverty scheme that has long been in operation.

Second, recipients’ views of the datafied PDS are mixed: they reveal some positive changes, but also a set of issues that were not in place before the advent of the biometric system. One, by making access conditional on enrolment in the Aadhaar database, the new system subordinates the universal right to food to enrolment in a biometric database, leading the poor to ‘trade’ their data for the food rations their livelihoods depend on. Two, while the programme is designed to combat the inclusion error, new forms of exclusion are caused by system malfunctions that lead to failures in user recognition, which in turn result in families being denied their food rations, sometimes for several months in a row. Three, the system is not built to act on back-end diversion (PDS commodities being diverted before they reach the ration shops where users buy them), which, according to existing studies of the PDS supply chain, is where the greatest share of goods is diverted (Khera 2011, Drèze & Khera 2015).

Third, there is a specific restructuring intent behind the creation of an Aadhaar-based PDS. From documents and narratives released by the central government, a clear teleology emerges: Aadhaar is conceived not simply to streamline the PDS, but to substitute it, in the longer run, with a system of cash transfers to the bank accounts of beneficiaries. As government officials declare, this serves the purpose of reducing the distortion caused by subsidies and creating a more effective system in which existing leakages cannot take place. A large majority of beneficiaries, however, is suspicious of cash transfers (Drèze et al. 2017): a prominent argument is that these are more complex to collect and handle than the secure materiality of PDS food rations. What is certain, whatever one’s view of the appropriateness of cash transfers, is that the teleology behind the Aadhaar-based PDS is not that of streamlining the system, but that of creating a new one in which the logic of buying goods on the market replaces the existing logic of subsidies.

Aadhaar helps enable a shift from in-kind subsidies to cash transfers, with uncertain consequences for poor people’s entitlements

Rooted in field research on datafied anti-poverty systems, these reflections offer two main contributions to extant theorizations of datafication in the Global South. First, they highlight the role of state governments in using datafied systems to construct a positive image of themselves, portraying datafication as a problem-solving tool adopted to tackle the most pressing issues affecting existing programmes. The power of datafication, embodied by large biometric infrastructures such as Aadhaar, is used to project an image of accountability and effectiveness, relied upon at election time and in the construction of public consensus. At the same time, citizens’ perspectives reveal forms of data injustice (Heeks & Renken 2018) which did not exist before datafication, such as the denial of subsidies following failures of user recognition by point-of-sale machines, or the subordination of the right to food to enrolment in a national biometric database.

Second, datafication is often portrayed by governments and public entities as a means to streamline anti-poverty programmes, improving the mechanisms underpinning their functioning. By contrast, my research suggests a more pervasive role for datafication, capable of transforming the very basis on which existing social protection systems are grounded (Masiero 2015). The Aadhaar case is revealing in this respect: as it is incorporated into extant subsidy systems, Aadhaar does not aim simply to improve their functioning, but to substitute the logic of in-kind subsidies with a market-based architecture of cash transfers. By moving the drivers of governance of anti-poverty systems from the state to the market, datafication is thus implicated in a deep reform effort, which may have massive consequences for programme architecture and the entitlements of the poor.

Entrenched in the Indian system of social protection, Aadhaar is today the largest datafier of anti-poverty programmes in the world. Here we have outlined its primary effects, and especially its ability to reshape existing anti-poverty policies at their very basis. Ongoing research across ICT4D, data ethics and development studies concerns the ways datafication will affect anti-poverty programme entitlements for the many people whose livelihoods are predicated on them.

 

Silvia Masiero is a lecturer in International Development at the School of Business and Economics, Loughborough University. Her research concerns the role of information and communication technologies (ICTs) in socio-economic development, with a focus on the participation of ICT artefacts in the politics of anti-poverty programmes and emergency management.

 

References:

Drèze, J., & Khera, R. (2015). Understanding leakages in the Public Distribution System. Economic and Political Weekly, 50(7), 39-42.

Drèze, J., Khalid, N., Khera, R., & Somanchi, A. (2017). Aadhaar and Food Security in Jharkhand. Economic & Political Weekly, 52(50), 50-60.

Heeks, R., & Renken, J. (2018). Data justice for development: What would it mean? Information Development, 34(1), 90-102.

Khera, R. (2011). India’s Public Distribution System: utilisation and impact. Journal of Development Studies, 47(7), 1038-1060.

Masiero, S. (2015). Redesigning the Indian food security system through e-governance: The case of Kerala. World Development, 67, 126-137.

Swaminathan, M. (2002). Excluding the needy: The public provisioning of food in India. Social Scientist, 30(3-4), 34-58.


[BigDataSur] My experience in training women on digital safety

by Cecilia Maundu

I remember it was September 2015 when I was invited to a two-day workshop on digital safety by the Association of Media Women in Kenya. At first I was very curious, because I had not heard much about digital security. The two-day workshop was an eye-opener. After the workshop I found myself hungry for more information on this issue.

Naturally, I went online to find more information. I must say I was shocked by the statistics I came across on the number of women who have been abused online, and who continue to suffer. Women were being subjected to sexist attacks: they were attacked because of their identity as women, not because of their opinions. I asked myself: what can I do? I am well aware that I am just a drop in the ocean, but any little change I can bring will help in some way. That was a light-bulb moment for me.

It was in that moment that I knew I wanted to be a digital safety trainer. I wanted to learn how to train people, especially women, on how to stay safe online. The internet is the vehicle of the future. This future is now, and we cannot afford for women to be left behind.

Online violence eventually pushes victims to stay offline. It is censorship hidden behind the veil of freedom of expression.

After this realization, I embarked on the quest to become a digital safety trainer. As fate would have it, my mentor Grace Githaiga came across the SafeSister fellowship opportunity and sent it to me. I applied and got into the program. The training was taking place in Ngaruga lodge, Uganda. The venue of the training was beyond serene. The calm lake waters next to the hotel signified how we want the internet to be for women: a calm place and a safe space where women can express themselves freely without fear of being victimized, or their voices being invalidated.

On arrival we were met by one of the facilitators, Helen, who gave us a warm welcome. The training was conducted by five facilitators, all of whom were women.

The training was student-friendly. The topics were broken down in a way that allowed everyone to understand what was being discussed. Each facilitator had her own style of delivering the different topics, from using charts to PowerPoint presentations. I must say they did an exceptional job. I got to learn more about online gender violence and how deeply rooted it is in our society, and hence the importance of digital security trainings.

Being a trainer is not only about having digital safety skills; it also requires you to be a well-rounded person. While giving a training you are bound to meet different types of people with different personalities, and it is your duty to make them feel comfortable and to make sure the environment around them is safe. It is in this safe space that they will be able to talk and express their fears and desires and, most importantly, be willing to learn. As a digital security trainer, you should first find out more about your participants and how much they know about digital security. This will enable you to package your material according to their learning needs.

Being a trainer requires you to read a lot about digital security, because this keeps you updated and therefore allows you to relay accurate information to your trainees. As a trainer, it is also necessary to understand the concept of hands-on training, because it gives participants the opportunity to put into practice what they have learnt. For example, when you are teaching about privacy settings on Facebook, you don’t just talk about them: you should ask the participants to open their Facebook accounts – if they have any – and go through the instructions step by step with them until they are able to complete the task. As a trainer you may also meet a participant who does not give the rest of the group the opportunity to express their views, wanting to do all the talking. The trainer needs to take charge and make sure that each participant is given an equal opportunity to talk.

Before the training we had each been given a topic to present on, and mine had to do with encryption; VeraCrypt, to be more specific. At first it was all Greek to me, but I resorted to my friend Google for more details (which raises the question: how was life before the internet?). By the time I was leaving Kenya for Uganda I had mastered VeraCrypt. We kept discussing our topics with the rest of the group, to the point where they started calling me Vera. To my surprise, my presentation went very well. The week went by so fast; before we realized it, it was over and it was time to go back home and start implementing what we had learnt.

We continued receiving very informative material online from the trainers. In September 2017 they opened up a pool of funding to which we could apply to fund a training in our home countries. I got the funding, and chose to hold the training at Multimedia University, where I lecture part-time. The reason behind my choice was that this is home to upcoming young media women, and we needed to train them on how to stay safe online, especially since media women in Kenya form the majority of victims of gender-based violence. They needed to know what awaits them out there and the mechanisms they need to protect themselves from attacks. The training was a success, and the young ladies walked away happy and strong.

The second, and last, part of SafeSister (I am barely holding back my tears here, because the end did come) took place in Uganda at the end of March 2018. It was such a nice reunion, meeting the other participants and our trainers after a year. This time the training was more relaxed. We were each given a chance to talk about the trainings we had conducted, the challenges we had encountered, the lessons learnt and what we would have done differently. For me the main challenge was time management: the trainers had prepared quite informative materials, so we ran over time, compounded by a 30-minute delayed start.

This was my first training, and one take-home for me as a digital safety trainer was that not all participants will be enthusiastic about the training, but one shouldn’t be discouraged or feel like they are not doing enough. The trainer just needs to make sure that no participant is left out, not only throwing questions at the participants but also asking for their opinions on different issues regarding digital safety. As time progresses, participants gradually become more enthusiastic and start feeling more at ease.

One thing I have learnt since becoming a digital security trainer is that people are quite ignorant about digital security matters. People go to cybercafés and forget to sign out of their email accounts, or use the same password for more than one account, and then ask you, “why would someone want to hack into my account or abuse me when I am not famous?” Such questions should not discourage you; on the contrary, they should motivate you to give more trainings, because people don’t know how vulnerable they are online when their accounts and data are not protected. Also, as a trainer, when you can and when the need arises, give as many free trainings as you can, since not everyone can afford to pay you. It is through these trainings that you continue to sharpen your skills and become an excellent trainer.

After the training we were each awarded a certificate. It felt so good to know that I am now a certified digital security trainer; nothing can beat that feeling. As they say, all good things must come to an end. Long live Internews, long live DefendDefenders. Asante sana! I will forever be grateful.

 

Cecilia Mwende Maundu is a broadcast journalist in Kenya and a digital security trainer and consultant, with a focus on teaching women how to stay safe online. She is also a user experience (UX) trainer, collecting user feedback and sharing it with developers.


[BigDataSur] blog 3/3. Good Data: Challenging the Colonial Politics of Knowledge

In the previous instalments of this three-part series on the possibilities of “good data” (here and here), Anna Carlson concluded that notions of decentralisation and autonomy cannot do without an understanding of global inequalities and their politics. Moving away from Northern, individualist visions of digital utopia, she considers what can be learned from Indigenous data sovereignty initiatives as they address, from the South, the colonial legacies of global knowledge production.

The gathering of data has long been a strategy of colonialism. It is part of a set of practices designed to “standardise and simplify the indigenous ‘social hieroglyph into a legible and administratively more convenient format’ (Scott 1999, 3)” (Smith 2016, 120). Making something legible is always already political, and it always raises the question: legible for what, and to whom?

In Australia, making legible meant enumerating Indigenous “‘peoples’ into ‘populations’ (Taylor 2009); their domestic arrangements and wellbeing […] constrained within quantitative datasets and indicators that reflected colonial preoccupations and values” (Smith 2016, 120). No matter how ‘big’ the data set, data is never neutral. As Smith points out, Indigenous self-determination is increasingly linked to “the need to also reassert Indigenous peoples’ control and interpretation into the colonial data archives, and to produce alternative sources of data that are fit for their contemporary purposes” (Smith 2016, 120). Data is knowledge, and knowledge is power.

No matter how it is collected, data has long been used in ways that reinforce and sustain this colonial status quo. Data – often quantitative, often raw numbers – is used strategically. As Maggie Walter argues in a chapter in Indigenous Data Sovereignty: Towards an Agenda (2016): “social and population statistics are better understood as human artefacts, imbued with meaning. And in their current configurations, the meanings reflected in statistics are primarily drawn from the dominant social norms, values and racial hierarchy of the society in which they are created” (Walter 2016, 79).

Diane E. Smith expands in the same volume:

“Data constitute a point-in-time intervention into a flow of information of behaviour – an attempt to inject certainty and meaning into uncertainty. As such, data can be useful for generalising from a particular sample to a wider population […] testing hypotheses […] choosing between options (etc). […]  However, when derived from ethnocentric criteria and definitions, data can also impose erroneous causal connections and simplify social complexity, thereby freezing what may be fluid formations in the real world. In their unadorned quantitative form, data are hard-pressed to cope with social and cultural intangibles.” (2016, 119-120)

As such, argues Smith, questions about data governance – “who has the power and authority to make rules and decisions about the design, interpretation, validation, ownership, access to and use of data” – are increasingly emerging as key “sites of contestation” between Indigenous communities and the state (Smith 2016, 119).

One of the more interesting responses to these data challenges comes from the movements for Indigenous Data Sovereignty. C. Matthew Snipp (2016) outlines three basic preconditions for data decolonisation: “that Indigenous peoples have the power to determine who should be counted among them; that data must reflect the interests and priorities of Indigenous peoples; and that tribal communities must not only dictate the content of data collected about them, but also have the power to determine who has access to these data.” In practice, then, it means Indigenous communities having meaningful say over how information about them is collected, stored, used and managed. In Indigenous Data Sovereignty: Towards an Agenda, the editors have brought together a set of interesting case studies reflecting on different examples of Indigenous data sovereignty. These case studies are by no means exhaustive, but they do point to a set of emerging protocols around relationality and control that may prove instructive in ongoing attempts to imagine good data practices in the future.

The First Nations Information Governance Centre (FNIGC)

FNIGC in Canada is an explicitly political response to the role of knowledge production in maintaining colonial power relationships. The FNIGC articulates a set of principles that should be respected when gathering data about First Nations communities. These principles are “ownership, control, access and possession” (OCAP), and they are used by the FNIGC as part of a certification process for research projects, surveys and other data-gathering mechanisms. In some ways, they operate as a “right of reply” – pointing to the inadequacies or successes of data-gathering practices. They aim, at least in part, to address the heavily skewed power relationship that continues to exist between Indigenous communities and the researchers who study them.

“By Maori for Maori” healthcare

This initiative attempts to incorporate Maori knowledge protocols into primary health care provision. In this case, Maori knowledge protocols are incorporated (to some extent) into the data collection, analysis and reporting tools used by Maori-run primary health care services in developing knowledge about, and responses to, health issues in Maori communities. These protocols are also used to develop and enable processes for data sharing across related networks – for example, with housing providers.

Yawuru data sovereignty

The Yawuru, traditional owners of the area around Broome in Western Australia, recognised that gaining control of the information that existed about them (by virtue of the extensive native title data-gathering process) was crucial to maintaining their sovereignty. They also recognised the value of a data archive produced by and for Yawuru peoples. They undertook their own data collection processes, including a “survey of all Indigenous people and dwellings in the town to create a unit-record baseline.” They sought to create an instrument to “measure local understandings of Yawuru wellbeing (mabu liyan).” They digitally mapped their lands in order to “map places of cultural, social and environmental significance to inform a cultural and environmental management plan.” Finally, they sought to incorporate Yawuru knowledge protocols into the development of a “documentation project […] to collate and store all relevant legal records, historical information, genealogies and cultural information” in line with Yawuru law.

Beyond these discrete examples, data sovereignty is exercised in a variety of other forms. Much Indigenous knowledge exists entirely outside the colonial state, and is held and protected in line with law. Many of the knowledge keepers in Australia retain this data sovereignty, despite the increasing pressure to make such information public in order to access legal recognition. This speaks to precisely the dilemma of contemporary data politics: “opting out” of the systems that collect our data is increasingly difficult in a world where giving up our data rights is too often a precondition to accessing the things that make life liveable.


[BigDataSur] blog 2/3: Digital Utopianism and Decentralisation: Is Decentralised Data Good Data?

In this second of three blog posts on the challenges of imagining ‘good data’ globally (read the first episode here), Anna Carlson considers a particular strand of technology-driven utopianism in the Global North: the idea of radically decentralised data. She writes that its latest instantiations – exemplified by blockchain – tend to forget the dire and unequally distributed ecological impact of digital technologies.

I was born in 1989, just a handful of years before mobile phones and laptop computers became the ubiquitous symbols of urban late modernity. It was also the year that the terms “cybersecurity” and “hypertext markup language” (HTML) first appeared in print. In the heady, early years of the World Wide Web, the internet and the data it produced (and contained and distributed) offered a utopian vision of equitable globalisation, the promise to equalise access to knowledge, the possibility of a new kind of commons. Early adopters across the political spectrum celebrated the possibilities of decentralisation and autonomy, the opportunities for community-building and collectivity, for sharing economies outside the control of the state.

These notions of the common good mostly originate in the Global North, and their promises have never quite been fulfilled. They continue to re-emerge, however, and provide interesting food for thought in the process of imagining good (or at least, better) data practices for the future.

Blockchain is probably the most prominent technology to revive utopias of radical decentralisation. At its most basic, a blockchain is a peer-to-peer, decentralised technology that allows digital information to be recorded and distributed but not retroactively altered. Described by some as the “backbone of a new type of internet,” blockchain is basically a very complex, constantly updating, decentralised titling system: a database tied to the object or site of interest, constantly cross-checked and updated across the network.

The technology originally emerged as a way of keeping track of cryptocurrencies like Bitcoin, and proponents Don and Alex Tapscott describe it as “an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.” The blockchain is inherently decentralised in that it cannot be controlled by any single entity, but requires constant validation and authentication from the network.
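The Tapscotts’ “incorruptible ledger” claim rests on a simple data structure: each record embeds a cryptographic hash of the record before it, so altering any past entry invalidates everything after it. A minimal sketch (hypothetical and heavily simplified: hashing only, with none of the consensus, signing or peer-to-peer replication a real blockchain adds):

```python
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents, including the previous block's hash,
    # so any tampering with history changes every subsequent hash.
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)
    return chain

def is_valid(chain):
    # Every node re-validates the whole chain against the rules itself.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev": block["prev"]}):
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
assert is_valid(chain)

chain[0]["data"] = "Alice pays Bob 500"  # tampering with history...
assert not is_valid(chain)               # ...is immediately detectable
```

Tampering is detectable, in other words, but nothing here prevents it: that is what the network’s constant validation, described above, is for.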

Blockchain is often represented optimistically, if vaguely, as capable of supporting new economies, free from the shackles of corporate control. In Blockchain: Blueprint for a New Economy, Melanie Swan goes even further: “the potential benefits of the blockchain are more than just economic – they extend into political, humanitarian, social, and scientific domains […] For example, to counter repressive political regimes, blockchain technology can be used to enact in a decentralised cloud functions that previously needed administration… (e.g.) for organisations like WikiLeaks (where national governments prevented credit card processors from accepting donations in the sensitive Edward Snowden situation).” She goes on to describe the possibility of blockchain technology as a basis for new economies dominated by (ahem) platforms like Uber and AirBnB.

So far, so good, right? The problem is that blockchains are incredibly unwieldy and immensely energy inefficient. To use the currency bitcoin as an example, the energy required for a single transaction far exceeds the amount needed for a more traditional transfer.  This is at least in part because the system is premised on hyper-individualistic, libertarian ideals.  In a recent article on Motherboard, Christopher Malmo writes that, with prices at their current level, “it would be profitable for Bitcoin miners to burn through over 24 terawatt-hours of electricity annually as they compete to solve increasingly difficult cryptographic puzzles to “mine” more Bitcoins. That’s about as much as Nigeria, a country of 186 million people, uses in a year.” At least in part, this energy is required because, as Alex de Vries suggests: “Blockchain is inefficient tech by design, as we create trust by building a system based on distrust. If you only trust yourself and a set of rules (the software), then you have to validate everything that happens against these rules yourself. That is the life of a blockchain node.”
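The “increasingly difficult cryptographic puzzles” Malmo mentions are proof-of-work: miners hash candidate blocks over and over until a hash falls below a difficulty target, and each increment of difficulty multiplies the expected number of attempts, all of them pure computation (and therefore electricity) with no by-product. An illustrative sketch (simplified; Bitcoin actually uses double SHA-256 against a 256-bit target):

```python
import hashlib
from itertools import count

def mine(data, difficulty):
    # Search for a nonce whose SHA-256 digest starts with `difficulty`
    # zero hex digits. Each extra digit multiplies the expected number
    # of attempts by 16; every failed attempt is wasted computation.
    for nonce in count():
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest

# Attempts (and energy) grow roughly 16x per difficulty level.
for d in (1, 2, 3):
    nonce, digest = mine("example block", d)
    print(f"difficulty {d}: found nonce {nonce}")
```

In this toy version, raising the difficulty a few more levels already takes thousands of times longer; Bitcoin tunes the real parameter so that the whole network needs roughly ten minutes per block, which is one reason its energy use tracks miners’ revenue rather than transaction volume.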

An interesting example of the tensions of decentralised cryptocurrencies emerged recently when The New Inquiry released their newest project, Bail Bloc. Bail Bloc is a “cryptocurrency scheme against bail,” that delivers much-needed funds to the excellent Bronx Freedom Fund in order to help provide bail fees for defendants. They outline its function as follows: “When you download the app, a small part of your computer’s unused processing power is redirected toward mining a popular cryptocurrency called Monero, which is secure, private, and untraceable. At the end of every month, we exchange the Monero for US dollars and donate the earnings to the Bronx Freedom Fund.”

On the surface, taking “unused” resources and redistributing them equitably sounds like a good idea. Unfortunately, the missing information here is that the processing power is not really “unused” – what they mean is that it’s unused by you, and that using it for this cause won’t inconvenience your personal use.

But there is a slippage here that seems important to clarify. Mining Monero requires a huge amount of electricity – much like the Bitcoin example above. As a result, this charity structure requires that we burn huge amounts of energy to produce a fairly minimal value, which is then redistributed. As with many such schemes, it would be far simpler to donate the money directly to the Bronx Freedom Fund. The flip side, of course, is that electricity doesn’t feel like a scarce resource in the way that money does, so generating money out of electricity feels like generating money out of nowhere.

It is easy to see what the New Inquiry designers are appealing to: the desire to effect positive change without having to really do anything at all. And it seemed to work: in the first 24 hours of the initiative’s launch, I saw numerous friends and colleagues sharing the page with suggestions that we establish similar schemes for causes closer to home.

The sense of practically and elegantly re-distributing otherwise unused resources through the magic of technology might be appealing, but it should not come at the expense of a broader understanding of global inequalities and planetary sustainability. Decentralised technology cannot sustain “good data” if the values that are encoded in it do not account for our shared stake in the world.

The question, then, is not whether decentralised data is good data, but under what terms and conditions – and what politics of the collective (or commons) goes into shaping decentralised technology.

((to be continued. Next episode will be online on June 8th, 2018))



[BigDataSur] blog 1/3: Imagining ‘Good’ Data: Northern Utopias, Southern Lessons

by Anna Carlson

What might ‘good data’ look like? Where to look for models, past and emerging? In this series of three blog posts, Anna Carlson highlights that we need to understand data issues as part of a politics of planetary sustainability and a live history of colonial knowledge production. In this first instalment, she introduces her own search for guiding principles in an age of ubiquitous data extraction and often dubious utopianism.           

I’m sitting by Cataract Gorge in Launceston, northern Tasmania. I’ve just climbed a couple of hundred metres through bushland to a relatively secluded look-out. It feels a long way from the city below, despite the fact that I can still hear distant traffic noise and human chatter. I pull out my laptop, perhaps by instinct. At around the same moment, a tiny native lizard dashes from the undergrowth and hovers uncertainly by my bare foot. I think briefly about pulling out my cracked and battered (second-hand) iPhone 4 to archive this moment, perhaps uploading it to Instagram with the hashtag #humansoflatecapitalism and a witty comment. Instead, I start writing.

I have been thinking a lot lately about the politics and ethics of data and digital technologies. My brief scramble up the hill was spent ruminating on the particular question of what “good data” might look like. I know what not-so-good data looks like. Already today I’ve generated a wealth of it. I paid online for a hostel bunk, receiving almost immediate cross-marketing from AirBnB and Hostelworld through social media sites as well as through Google. I logged into my Couchsurfing account, and immediately received a barrage of new “couch requests” (based, I presume, on an algorithm that lets potential couch surfers know when their prospective hosts log in). I’ve turned on my phone’s location services, and used Google Maps to navigate a new city. I’ve searched for information about art galleries and hiking trails. I used a plant identification site to find out what tree I was looking at. Data, it seems, is the “digital air that I breathe.”

Writing in the Guardian, journalist Paul Lewis interviews tech consultant and author Nir Eyal, who claims that “the technologies we use have turned into compulsions, if not full-fledged addictions. […] It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” And this addictive quality is powerful: digital technologies are credited with altering everything from election results to consumer behaviour and our ability to empathise. Indeed, there’s money to be made from a digitally-addicted populace. Encompassing everything from social media platforms to wearable devices, smart cities and the Internet of Things, almost every action we take in the world produces data of some form, and this data represents value for the corporations, governments and marketers who buy it.

This is the phenomenon commonly referred to as Big Data, which describes sets of data so big that they must be analysed computationally. Usually stored in digital form, this data encompasses everything from consumer trends to live emotional insights, and it is routinely gathered by most companies with an online presence.

The not-goodness of this data isn’t intrinsic, however. There is nothing inherently wrong with creating knowledge about the activities we undertake online. Rather, the not-goodness is a characteristic of the murky processes through which data is gathered, consolidated, analysed, sold-on and redistributed. It’s to do with the fact that few of us really know what data is being gathered about us, and even fewer know what that data will be used for. And it’s to do with the lineages that have structured processes of mass data collection, as well as their unequally distributed impacts.

Many of us know that the technologies on which we rely have dark underbellies; that the convenience and comfort of our digital lives is contingent on these intersecting forms of violence. But we live in a world where access to these technologies increasingly operates as a precondition to entering the workforce, to social life, to connection and leisure and knowledge. More and more workers (even in the global north) are experiencing precarity, worklessness and insecurity; experiences that are often enabled by digital technologies and which, doubly cruelly, often render us further reliant on them.

The ubiquity of the digital realm provokes new ethical conundrums. The technologies on which we are increasingly reliant are themselves reliant on exploitative and often oppressive labour regimes. They are responsible for vast ecological footprints. Data is often represented as immaterial, ‘virtual,’ and yet its impact on environments across the world is pushing us ever closer to global ecological disaster. Further, these violent environmental and labour relations are unequally distributed: the negative impacts of the digital age are disproportionately focused on communities in the Global South, while the wealth generated is largely trapped in a few Northern hands.

Gathering data means producing knowledge within a particular set of parameters. In the new and emerging conversations around Big Data and its impact on our social worlds, much focus is placed on the scale of it, its bigness, the sheer possibility of having so much information at our fingertips. It is tempting to think of this as a new phenomenon – as an unprecedented moment brought about by new technologies. But as technologist Genevieve Bell reminds us, the “logics of our digital world – fast, smart and connected – have histories that precede their digital turns.” Every new technological advance carries its legacies, and the colonial legacy is one that does not receive enough attention.

So, when we imagine what “good data” and good tech might look like now, we need to contend with the ethical quagmire of tech in its global and historical dimensions. To illustrate this point, I examine the limits of contemporary digital utopianism (exemplified by blockchain) as envisioned in the Global North (Episode 2), before delving into the principles guiding “good data” from the point of view of Indigenous communities (Episode 3).

Acknowledgments: These blog posts have been produced as part of the Good Data project (@Good__Data), an interdisciplinary research initiative funded by the Queensland University of Technology Faculty of Law, which is located on unceded Meanjin, Turrbal and Jagera Land (also known as Brisbane, Australia). The project examines ‘good’ and ‘ethical’ data practices with a view to developing policy recommendations and software design standards for programs and services that embody good data practices, in order to start conceptualising and implementing a more positive and ethical vision of the digital society and economy. In late 2018, an open access edited book entitled Good Data, comprising contributions from authors from different disciplines located in different parts of the world, will be published by the Amsterdam University of Applied Sciences Institute of Network Cultures.


Big Data from the South at the LASA conference in Barcelona

Stefania Milan is the co-organizer with Emiliano Trere (Cardiff University) and Anita Say Chan (University of Illinois, Urbana-Champaign) of two panels at the forthcoming conference of the Latin American Studies Association (LASA), in Barcelona on May 23-26.

The (slightly revised) lineup:

Big Data from the Global South Part I (1033 // COL – Panel – Friday, 2:15pm – 3:45pm, Sala CCIB M211 – M2)
PROJECT FRAMING + GROUP BRAINSTORMING
·      Stefania Milan, Emiliano Trere, Anita Chan: From Media to Mediations, from Datafication to Data Activism
 
Big Data from the Global South Part II: Archive Power (1101 // COL – Panel – Friday, 4:00pm – 5:30pm, Sala CCIB M211 – M2)
·      Inteligencia Artificial y Campañas Electorales en la Era PostPolítica: Seguidores, Bots, Apps: Paola Ricaurte Quijano, Eloy Caloca Lafont 
·      Open Government, APPs and Citizen Participation in Argentina, Chile, Colombia, Costa Rica and Mexico: Luisa Ochoa; Fernando J Martinez de Lemos 
·      Cryptography, Subjectivity and Spyware: From PGP Source Code and Internals to Pegasus: Zac Zimmer
·      Engineering Data Conduct: Prediction, Precarity, and Data-fied Talent in Latin America’s Start Up Ecology: Anita J Chan
 
Big Data from the Global South Part III: Data Incorporations (1167 // COL – Panel – Friday, 5:45pm – 7:15pm, Sala CCIB M211 – M2)
·      Evidence of Structural Ageism in Intelligent Systems: Mireia Fernandez 
·      Doing Things with Code: Opening Access through Hacktivism: Bernardo Caycedo
·      Decolonizing Data: Monika Halkort
·      Maputopias: Miren Gutierrez


About LASA 2018

Latin American studies today is experiencing a surprising dynamism. The expansion of this field defies the pessimistic projections of the 1990s about the fate of area studies in general and offers new opportunities for collaboration among scholars, practitioners, artists, and activists around the world. This can be seen in the expansion of LASA itself, which since the beginning of this century has grown from 5,000 members living primarily in the United States to nearly 12,000 members in 2016, 45 percent of whom reside outside of the United States (36 percent in Latin America and the Caribbean). And while the majority of us reside in the Americas, there are also an increasing number of Latin American studies associations and programs in Europe and Asia, most of which have their own publications and annual seminars and congresses.

Several factors explain this dynamism. Perhaps the most important is the very maturity of our field. Various generations of Latin Americanists have produced an enormous, diverse, and sophisticated body of research, with a strong commitment to interdisciplinarity and to teaching about this important part of the world. Latin American studies has produced concepts and comparative knowledge that have helped people around the world to understand processes and problematics that go well beyond this region. For example, Latin Americanists have been at the forefront of debates about the difficult relationship between democracy, development, and dependence on natural resource exports—challenges faced around the globe. Migration, immigration, and the displacement of people due to political violence, war, and economic need are also deeply rooted phenomena in our region, and pioneering work from Latin America can shed light on comparable experiences in other regions today. Needless to say, Latin American studies also has much to contribute to discussions about populism and authoritarianism in their various forms in Europe and even the United States today.

With these contributions in mind, we propose that the overarching theme of the Barcelona LASA Congress be “Latin American Studies in a Globalized World”, and that we examine both how people in other regions study and perceive Latin America and how Latin American studies contributes to the understanding of comparable processes and issues around the globe.


[blog 3/3] Designing the city by numbers? KAPPO: More than a game 1


This is the last blog post of the series ‘Designing the city by numbers? Bottom-up initiatives for data-driven urbanism in Santiago de Chile’, by Martín Tironi and Matías Valderrama Barragán. Please find here the first blog post, ‘Hope for the data-driven city‘, and the second, ‘Digital quantification regimes of cycling mobility‘.


For our final post in this series, we explore the case of the social game for smartphones, KAPPO. It was developed in early 2014 by four Chilean entrepreneurs with the goal of engaging citizens with cycling. It is structured around levels: each bike trip gives the player experience points, virtual coins and rewards, and the highest level of the game is the “Capo”2. It also offers a series of challenges and rankings for competing with friends or other KAPPO players. Through this gamified design, KAPPO puts together a narrative focused on its ability to “provoke,” “motivate” and “create the habit” of regularly using the bicycle and improving the user’s health. As suggested by Kappo in one of its promotional ads, the app promises to “show the cyclist inside of you”.

Although KAPPO did not initially have great success in Chile, it started to gain adopters abroad, receiving funds and forming public-private partnerships with municipalities in Denmark. Since then, its founders have sought to position the app as “more than a game” for smartphones, seeking out different ways of capitalizing on the data generated through its use. For example, KAPPO developed a competition event called “Cool Places to Bike”, in which organizations compete over which best encourages the use of the bicycle, as measured by KAPPO. It also developed “Health and Well-being Programs” for companies, promising to improve workers’ productivity and well-being by increasing bike use through the app. Local governments have also become KAPPO clients through “KAPPO Insights”, a web platform which allows public planners and officials to process and visualize anonymized routes tracked by the app in order to inform decision-making.

The data obtained by KAPPO, however, are biased and not representative of Santiago’s cyclists. Instead of the scientific narrative of the RUBI regime, the narrative deployed by KAPPO aims to convince city officials on three grounds: an inexpensive method, data capture in real time, and a “strong” form of participatory citizen involvement that encourages bicycle use. Through such aspects, KAPPO’s analytics and flow maps acquire value and foster decision-making that modifies the city in a rapid, experimental cycle, guided by the “real” movements of cyclists gathered in non-declarative ways. KAPPO thus does not seek to measure and quantify cyclists’ mobility representatively like RUBI, but seeks to intervene directly by encouraging greater bike use, presenting biannual increases in cycling statistics in order to legitimate this digital quantification regime.

The politics of digital quantification: Some points to an open debate

With these very brief vignettes of digital quantification regimes developed in Latin America, it is interesting to note how initiatives like KAPPO and RUBI, born in the South, adopt a grammar of citizen participation and try to differentiate themselves from competing foreign technologies. Nonetheless, they end up replicating the rationalities and logics of nudging and automation when they try to scale up to the global market in order to survive as technological entrepreneurships, diminishing at once the possible capacity for activism or citizen engagement in planning processes. This opens up a debate about the actual political capacity of sensors and tracking technologies to enhance citizen participation, and about the agendas of their developers.

Second, it is relevant to consider the specificities of each regime of digital quantification. Each regime designs and mobilises materialities, narratives and economic interests in order to justify its databases, algorithms and analytics as the most convenient, objective or useful for knowing, planning and governing the city in a data-driven way. As a result, the ecology of quantification regimes is always heterogeneous, diverging according to contexts and interests and combining various technologies of justification (beyond the device or app). From this perspective, we found interesting elements in the goals and capacities of each regime. For example, KAPPO exacerbated the participatory or citizen nature of the app under a commercial logic from its inception. By contrast, the RUBI regime initially emphasized participatory and bottom-up elements, but the agency of cyclists was gradually displaced by more automated designs to obtain “scientifically correct” data. Both also try to differentiate themselves from other methods like surveys and from other devices, both digital and analogue, by invoking their limitations and biases. In short, capitalizing on digital data requires various strategies of justification (Boltanski and Thévenot, 2006) that should not be taken for granted and that go beyond the generation of data alone. Before settling on a priori definitions of digital data, users or urban space, it is crucial to delve into these strategies and interests, as well as the reasons why some regimes of digital quantification end up prevailing while others are ignored.

Third, despite the discrepancies between the cases analysed, we note that both start from a shared imaginary of data-driven city governance inspired by the North. In this imaginary, opening and sharing data on the mundane practice of riding a bicycle is invoked as a means of empowering citizen involvement, with the capacity to make the city smarter and more bike-friendly. However, this imaginary can lead to a reconfiguration of governments as “clients of data” and of citizen participation towards more passive, invisible versions that are free of real effort, in which the exchange of data is used for the benefit of certain stakeholders with interests other than democratic ends. Far from becoming “co-designers” or “participants” in city planning, cyclists are situated in practice as data producers, without ever being informed of the real use of the data they generate in government decisions or by third parties. And the automation and gamification of the devices’ design is directly connected with these forms of participation. This point leaves us with the question of which other responses to everyday breakdowns and idiotic data could be enacted to promote an active digital activism – and which modalities of experimentation allow for the consideration of those imperceptible murmurs that tend to be marginalized from the prevailing canons of smart culture3?

Fourth, data-driven planning and governance initiatives open up the discussion of how notions of “expertise” and “politics” are reconfigured. These regimes of digital quantification promote the belief that the decision-maker, without necessarily being an expert on the topic, can make decisions in a primarily technical manner, driven by the “real” behaviour of citizens and not by opinions, ideological differences or party pressures. Political factors are framed as something to be eradicated through the gathering and processing of data on people’s behaviour. This politics of technifying decision-making is nothing new. As Morozov (2014) has written, the idea of algorithmic regulation evokes the old technocratic utopia of politics without politics: “Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.”

In this sense, a data-driven urbanism would carry the risk of believing not only in a neutrality or immediacy of data, but with it a depoliticization of urban planning and government in favour of technocratic and automated decision-making systems. Behind the apparent technical reduction of discretion in decision-making by these regimes of digital quantification, in practice, we can see how many political or discretionary decisions are made in how these regimes are enacted and made public.


References

Boltanski, L., & Thévenot, L. (2006). On justification: Economies of worth. Princeton University Press.

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and secular belief. Surveillance & Society, 12(2), 197-208.

Espeland, W. N., & Stevens, M. L. (2008). A sociology of quantification. European Journal of Sociology/Archives Européennes de Sociologie, 49(3), 401-436.

Esty, D. C. & Rushing, R. (2007). Governing by the Numbers: The Promise of Data-Driven Policymaking in the Information Age. Center for American Progress, 5, 21.

Gabrys, J. (2016). Citizen sensing: Recasting digital ontologies through proliferating practices. Theorizing the Contemporary, Cultural Anthropology website, March 24, 2016.

Goldsmith, S. & Crawford, S. (2014). The responsive city: Engaging communities through data-smart governance. San Francisco, CA: Jossey-Bass.

Goodchild, M. F. (2007). Citizens as sensors: The world of volunteered geography. GeoJournal, 69(4), 211-221.

Kitchin, R. (2014b). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London: Sage.

Marres, N. (2018). What if nothing happens? Street trials of intelligent cars as experiments in participation. In S. Maassen, S. Dickel, & C. H. Schneider (Eds.), TechnoScience in Society, Sociology of Knowledge Yearbook. Nijmegen: Springer/Kluwer.

Mayer-Schönberger, V. & Cukier, K. (2013). Big Data: A revolution that will transform how we live, work, and think. New York: Houghton Mifflin Harcourt.

Morozov, E. (2014). The rise of data and the death of politics. The Guardian. https://www.theguardian.com/technology/2014/jul/20/rise-of-data-death-of-politics-evgeny-morozov-algorithmic-regulation


1. This text is based on a presentation at the Workshop “Designing people by numbers” held in Pontificia Universidad Católica in November 2017.
2. Colloquial term in Spanish for people with a great deal of expertise or knowledge about a topic or activity.
3. On this point, see the controversies and public issues generated by the street testing of driverless cars (Marres, 2018).


About the authors: Martín Tironi is Associate Professor in the School of Design at the Pontifical Catholic University of Chile. He holds a PhD from the Centre de Sociologie de l’Innovation (CSI), École des Mines de Paris, where he also did post-doctoral studies. He received his Master’s degree in Sociology at the Université Paris Sorbonne V and his BA in Sociology at the Pontifical Catholic University of Chile. He is currently a Visiting Fellow (2018) at the Centre for Invention and Social Process, Goldsmiths, University of London [email: martin.tironi@uc.cl]

Matías Valderrama Barragán is a sociologist with a Master’s in Sociology from the Pontifical Catholic University of Chile. He is currently working on research projects about the digital transformation of organizations and the datafication of individuals and environments in Chile. [email: mbvalder@uc.cl]