by Anna Carlson
What might ‘good data’ look like? Where to look for models, past and emerging? In this series of three blog posts, Anna Carlson highlights that we need to understand data issues as part of a politics of planetary sustainability and a live history of colonial knowledge production. In this first instalment, she introduces her own search for guiding principles in an age of ubiquitous data extraction and often dubious utopianism.
I’m sitting by Cataract Gorge in Launceston, northern Tasmania. I’ve just climbed a couple of hundred metres through bushland to a relatively secluded look-out. It feels a long way from the city below, despite the fact that I can still hear distant traffic noise and human chatter. I pull out my laptop, perhaps by instinct. At around the same moment, a tiny native lizard dashes from the undergrowth and hovers uncertainly by my bare foot. I think briefly about pulling out my cracked and battered (second-hand) iPhone 4 to archive this moment, perhaps uploading it to Instagram with the hashtag #humansoflatecapitalism and a witty comment. Instead, I start writing.
I have been thinking a lot lately about the politics and ethics of data and digital technologies. My brief scramble up the hill was spent ruminating on the particular question of what “good data” might look like. I know what not-so-good data looks like. Already today I’ve generated a wealth of it. I paid online for a hostel bunk, receiving almost immediate cross-marketing from Airbnb and Hostelworld through social media sites as well as through Google. I logged into my Couchsurfing account, and immediately received a barrage of new “couch requests” (based, I presume, on an algorithm that lets potential couch surfers know when their prospective hosts log in). I’ve turned on location services on my phone, and used Google Maps to navigate a new city. I’ve searched for information about art galleries and hiking trails. I used a plant identification site to find out what tree I was looking at. Data, it seems, is the “digital air that I breathe.”
Writing in the Guardian, journalist Paul Lewis interviews tech consultant and author Nir Eyal, who claims that “the technologies we use have turned into compulsions, if not full-fledged addictions. […] It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” And this addictive quality is powerful: digital technologies are credited with altering everything from election results to consumer behaviour and our ability to empathise. Indeed, there’s money to be made from a digitally addicted populace. Encompassing everything from social media platforms to wearable devices, smart cities and the Internet of Things, almost every action we take in the world produces data of some form, and this data represents value for the corporations, governments and marketers who buy it.
This is the phenomenon commonly referred to as Big Data, which describes sets of data so big that they must be analysed computationally. Usually stored in digital form, this data encompasses everything from consumer trends to live emotional insights, and it is routinely gathered by most companies with an online presence.
The not-goodness of this data isn’t intrinsic, however. There is nothing inherently wrong with creating knowledge about the activities we undertake online. Rather, the not-goodness is a characteristic of the murky processes through which data is gathered, consolidated, analysed, sold on and redistributed. It’s to do with the fact that few of us really know what data is being gathered about us, and even fewer know what that data will be used for. And it’s to do with the lineages that have structured processes of mass data collection, as well as their unequally distributed impacts.
Many of us know that the technologies on which we rely have dark underbellies; that the convenience and comfort of our digital lives are contingent on intersecting forms of violence. But we live in a world where access to these technologies increasingly operates as a precondition to entering the workforce, to social life, to connection and leisure and knowledge. More and more workers (even in the global north) are experiencing precarity, worklessness and insecurity; experiences that are often enabled by digital technologies and which, doubly cruelly, often render us further reliant on them.
The ubiquity of the digital realm provokes new ethical conundrums. The technologies on which we are increasingly reliant are themselves reliant on exploitative and often oppressive labour regimes. They are responsible for vast ecological footprints. Data is often represented as immaterial, ‘virtual,’ and yet its impact on environments across the world is pushing us ever closer to global ecological disaster. Further, these violent environmental and labour relations are unequally distributed: the negative impacts of the digital age are disproportionately borne by communities in the Global South, while the wealth generated is largely trapped in a few Northern hands.
Gathering data means producing knowledge within a particular set of parameters. In the new and emerging conversations around Big Data and its impact on our social worlds, much focus is placed on the scale of it, its bigness, the sheer possibility of having so much information at our fingertips. It is tempting to think of this as a new phenomenon – as an unprecedented moment brought about by new technologies. But as technologist Genevieve Bell reminds us, the “logics of our digital world – fast, smart and connected – have histories that precede their digital turns.” Every new technological advance carries its legacies, and the colonial legacy is one that does not receive enough attention.
So, when we imagine what “good data” and good tech might look like now, we need to contend with the ethical quagmire of tech in its global and historical dimensions. To illustrate this point, I examine the limits of contemporary digital utopianism (exemplified by blockchain) as envisioned in the Global North (Episode 2), before delving into the principles guiding “good data” from the point of view of Indigenous communities (Episode 3).
Acknowledgments: These blogposts have been produced as part of the Good Data project (@Good__Data), an interdisciplinary research initiative funded by the Queensland University of Technology Faculty of Law, which is located in unceded Meanjin, Turrbal and Jagera Land (also known as Brisbane, Australia). The project examines ‘good’ and ‘ethical’ data practices with a view to developing policy recommendations and software design standards for programs and services that embody good data practices, in order to begin conceptualising and implementing a more positive and ethical vision of the digital society and economy. In late 2018, an open access edited book entitled Good Data, comprising contributions from authors from different disciplines located in different parts of the world, will be published by the Amsterdam University of Applied Sciences Institute of Network Cultures.