Author: Jeroen

[BigDataSur] The Challenge of Decolonizing Big Data through Citizen Data Audits [3/3]

 

Author: Katherine Reilly, Simon Fraser University, School of Communication

Data Stewardship through Citizen Centered Data Audits

In my previous two posts (the first & the second), I talked about the nature of data audits, and how they might be applied by citizens. Audits, I explained, check whether people are carrying out practices according to established standards or criteria, with the goal of ensuring effective use of resources. As citizens we have many tools at our disposal to audit companies, but when we audit companies according to their criteria, we risk losing sight of our own needs in the community. The question addressed by this post is how to do data audits from a citizen point of view.

Thinking about data as a resource is a first step in changing our perspective on data audits. Our current data regime is an extractive data regime. As I explained in my first post, in the current regime, governments accept the central audit criteria of businesses, and on top of this, they establish the minimal protections necessary to ensure a steady flow of personal data to those same corporate actors.

I would like to suggest that we rethink our data regime in terms of data stewardship. The term ‘stewardship’ is usually applied to the natural environment. A forest might be governed by a stewardship plan which lays out the rights and responsibilities of resource use. Stewardship implies a plan for the management of those resources, both so that they can be sustained, and also so that everyone can enjoy them.

If the raw material produced by the data forest is our personal information, then we are the trees, and we are being harvested. Our data stewardship regime is organized to support that process, and audits are the means to enforce it. The main beneficiaries of the current data stewardship regime are companies who harvest and process our data. Our own benefits – our right to walk through the forest and enjoy the birds, or our right to profit from the forest materially – are not contemplated in the current stewardship regime.

It is tempting to conclude that audits are to blame, but really, evaluation is an agnostic concept. What matters are the criteria – the standards to which we hold corporate actors. If we change the standards of the data regime, then we change the system. We can introduce principles of stewardship that reflect the needs of community members. To do this, we need to start from the audit criteria that represent the localized concerns of situated peoples.

To this end, I have started a new project in collaboration with five fellow data justice organizations in five Latin American countries: Derechos Digitales in Chile, Karisma in Colombia, TEDIC in Paraguay, HiperDerecho in Peru and ObservaTIC in Uruguay. We will also enjoy the technical support of Sula Batsu in Costa Rica.

Our focus will be on identifying alternative starting points for data audits. We won’t start from the law, or the technology, or corporate policy. Instead, we will start from people’s lived experiences, and use these as a basis to establish criteria for auditing corporate use of personal data.

We will work with small groups who share a common identity and/or experience, and who are directly affected by corporate use of their personal data. For example, people with chronic health issues have a stake in how personal data, loyalty programs and platform delivery services mediate their relationship with pharmacies and pharmaceutical companies. The project will identify community collaborators who are interested in working with us to establish alternative criteria for evaluating those companies.

Our emerging methodology will use a funnel-like approach, starting from broad discussions about the nature of data, passing through explorations of personal practices and the role of data in them, and then landing on more specific and detailed explorations of specific moments or processes in which people share their personal data.

Once the group has learned something about the reality of data in their daily lives – and in particular the instances where data is of particular concern for them – we will facilitate group activities that help them identify their data needs, as well as the behaviors that would satisfy those needs. An example of a data need might be “I need to feel valued as a person and as a woman when I interact with the pharmacy.” A statement of how that need might be satisfied could be, for example, “I would feel more valued as a person and as a woman if the company changed its data collection categories.”

We are particularly interested in thinking through the application of community criteria to companies that have grown in power and influence during the Covid-19 pandemic. Companies like Instacart, SkipTheDishes, Rappi, Zoom, and Amazon are uniquely empowered to control urban distribution chains that affect the welfare of millions. What do community members require from these companies in terms of their data practices, and how would they fare against an audit based on those criteria?

We find inspiration for alternative audit criteria in data advocacy projects that have been covered by DATACTIVE’s Big Data from the South Blog. For example, the First Nations Information Governance Centre (FNIGC) of Canada has established the principles of ownership, control, access and possession for the management of First Nations data, and New Zealand has adopted Māori knowledge protocols for information systems used in primary health care provision (as reported by Anna Carlson). Meanwhile, the Mexican organization Controla tu Gobierno argues that we need to view data “less as a commodity – which is the narrative that constantly tries to make us understand data as the new oil – and more as a source of meaning” (Guillen Torres and Mayli Sepulveda, 2017).

From examples like these, and given the concept of data stewardship, we can begin to see that data is only as valuable as the criteria used to assess it, and so we urgently need alternative criteria that reflect the desires, needs and rights of communities.

How would corporate actors fare in an audit based on these alternative criteria? How would such a process reposition the value of data within the community? Who should carry out these evaluative processes, and how can they work together to create a more equitable data stewardship regime that better serves the needs of communities?

By answering these questions, we can move past creating data-literate subjects for the existing data stewardship regime. Instead, we can open space for discussion about how we actually want our data resources to be used. In a recent Guardian piece, Stephanie Hare argued that “The GDPR protects data. To protect people, we need a bill of rights, one that protects our civil liberties in the age of AI.” The content of that bill of rights requires careful contemplation. Citizen data audits allow us to think creatively about how data stewardship regimes can serve the needs of communities, and from there we can build out the legal frameworks to protect those rights.

 

About the author: Dr. Katherine Reilly is Associate Professor in the School of Communication at Simon Fraser University in Vancouver, Canada. She is the recipient of a SSHRC Partnership Grant and an International Development Research Centre grant to explore citizen data audit methodologies alongside Derechos Digitales in Chile, Fundacion Karisma in Colombia, Sula Batsu in Costa Rica, TEDIC in Paraguay, HiperDerecho in Peru, and ObservaTIC in Uruguay.

WomenonWeb censored in Spain as reported by Magma

Author: Vasilis Ververis

The Magma project just published new research on censorship concerning womenonweb.org, a non-profit organization providing support to women and pregnant people. The article describes how the major ISPs in Spain are blocking womenonweb.org’s website. Spanish ISPs have been blocking this website by means of DNS manipulation, TCP resets, and HTTP blocking carried out with Deep Packet Inspection (DPI) infrastructure. Our data analysis is based on network measurements from OONI data. This is the first time that we observe Women on Web being blocked in Spain.
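For readers curious about what the simplest of these signals, DNS manipulation, can look like in practice, here is a minimal sketch that compares the answers returned by the system (ISP) resolver with those of an independent public resolver. This is an illustrative example only, not the Magma or OONI methodology; the control resolver address is a placeholder, and a mismatch can also have benign causes (CDNs, geo-DNS), so it is only a starting point for further HTTP- and TCP-level tests.

```python
# Illustrative sketch (not the Magma/OONI tooling): compare A records for
# womenonweb.org as seen by the system (ISP) resolver versus an independent
# public resolver. Requires the third-party "dnspython" package.
import dns.exception
import dns.resolver

DOMAIN = "womenonweb.org"
CONTROL_RESOLVER = "9.9.9.9"  # placeholder for any trusted, uncensored resolver

def a_records(nameserver=None):
    """Return sorted A records for DOMAIN, optionally via a specific nameserver."""
    resolver = dns.resolver.Resolver()
    if nameserver:
        resolver.nameservers = [nameserver]
    try:
        return sorted(rr.address for rr in resolver.resolve(DOMAIN, "A"))
    except dns.exception.DNSException as exc:
        return ["error: " + exc.__class__.__name__]

local_answer = a_records()                  # system / ISP resolver
control_answer = a_records(CONTROL_RESOLVER)

print("ISP resolver    :", local_answer)
print("control resolver:", control_answer)
if local_answer != control_answer:
    # Not proof of censorship by itself (CDNs and geo-DNS also differ),
    # but a signal worth following up with HTTP and TCP-level tests.
    print("Answers differ: possible DNS manipulation, further checks needed.")
```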

About Magma: Magma aims to build a scalable, reproducible, standard methodology for measuring, documenting and circumventing internet censorship, information controls, internet blackouts and surveillance, in a way that can be streamlined and used in practice by researchers, front-line activists, field workers, human rights defenders, organizations and journalists.

About the author: Vasilis Ververis is a research associate with DATACTIVE and a practitioner of the principles ~ undo / rebuild ~ the current centralization model of the internet. Their research deals with internet censorship and investigation of collateral damage via information controls and surveillance. Some recent affiliations: Humboldt-Universität zu Berlin, Germany; Universidade Estadual do Piaui, Brazil; University Institute of Lisbon, Portugal.

[BigDataSur] The Challenge of Decolonizing Big Data through Citizen Data Audits [2/3]

 

A First Attempt at Citizen Data Audits

Author: Katherine Reilly, Simon Fraser University, School of Communication

In the first post in this series, I explained that audits are used to check whether people are carrying out practices according to established standards or criteria. They are meant to ensure effective use of resources. Corporations audit their internal processes to make sure that they comply with corporate policy, while governments audit corporations to make sure that they comply with the law.

There is no reason why citizens or watchdogs can’t carry out audits as well. In fact, data privacy laws include some interesting frameworks that can facilitate this type of work. In particular, the EU’s General Data Protection Regulation (GDPR) gives you the right to know how corporations are using your personal data, and also the ability to access the personal data that companies hold about you. This right is reproduced in the privacy legislation of many countries around the world, from Canada and Chile to Costa Rica and Peru, to name just a few.

With this in mind, several years ago the Citizen Lab at the University of Toronto set up a website called Access My Info which helps people access the personal data that companies hold about them. Access My Info was set up as an experiment, so the site only includes a fixed roster of Canadian telecommunications companies, fitness trackers, and dating apps. It walks users through the process of submitting a personal data request to one of these companies, and then tracks whether the companies respond. The goal of this project was to crowdsource insights from citizens that would help researchers learn what companies know about their clients, how companies manage personal data, and who companies share data with. The results of this work have been used to advocate for changes to digital privacy laws.

Using this model as a starting point, in 2019 my team at SFU and a team from the Peruvian digital rights organization HiperDerecho set up a website called SonMisDatos (Son Mis Datos translates as “It’s My Data”). Son Mis Datos riffed on the open source platform developed by Access My Info, but made several important modifications. In particular, HiperDerecho’s Director, Miguel Morachimo, made the site database-driven so that it was easier to update the roster of corporate actors or their contact details. Miguel also decided to focus on companies that have a more direct material impact on the daily lives of Peruvians – such as gas stations, grocery stores and pharmacies. These companies have loyalty programs that are involved in collecting personal data about users.

Then we took things one step further. We used SonMisDatos to organize citizen data audits of Peruvian companies. HiperDerecho mobilized a team of people who work on digital rights in Peru, and we brought them together at two workshops. At the first workshop, we taught participants about their rights under Peru’s personal data protection laws, introduced SonMisDatos, and asked everyone to use the site to ask companies for access to their personal data. Companies need time to fulfill those requests, so then we waited for two months. At our second workshop, participants reported back on the results of their data requests, and then I shared a series of techniques for auditing companies on the basis of the personal data people had been able to access.

Our audit techniques explored the quality of the data provided, corporate compliance with data laws, how responsive companies were to data requests, the quality of their informed consent process, and several other factors. My favorite audit technique reflected a special feature of the data protection laws of Peru. In that country, companies are required to register databases of personal information with a state entity. The registry, which is published online, includes lists of companies, the titles of their databases, as well as the categories of data collected by each database. (The government does not collect the contents of the databases; it only registers their existence.)

With this information, our auditors were able to verify whether the data they got back from corporate actors was complete and accurate. In one case, the registry told us that a pharmaceutical company was collecting data about whether clients had children. However, in response to an access request, the company only provided lists of purchases organized by date, SKU number, quantity and price. Our auditors were really bothered by this discovery, because it suggested that the company was making inferences about clients without telling them. Participants wondered how the company was using these inferences, and whether it might affect pricing, customer experience, access to coupons, or the like.

In another case, one of our auditors subscribed to DirecTV. To complete this process, he needed to provide his cell phone number plus his national ID number. He later realized that he had accidentally typed in the wrong ID number, because he began receiving cell phone spam addressed to another person. This was exciting, because it allowed us to learn which companies were buying personal data from DirecTV. It also demonstrated that DirecTV was doing a poor job of managing their customers’ privacy and security! However, during the audit we also looked back at DirecTV’s terms of service. We discovered that they were completely up front about their intention to sell personal information to advertisers. Our auditors were sheepish about not reading the terms of the deal, but they also felt it was wrong that they had no option but to accept these terms if they wanted to access the service.

On the basis of this experience, we wrote a guidebook that explains how to use Son Mis Datos, and how to carry out an audit on the basis of the ‘access’ provisions in personal data laws. The guide helps users think through questions like: Is the data complete, precise, unmodified, timely, accessible, machine-readable, non-discriminatory, and free? Has this company respected your data rights? What does the company’s response to your data request suggest about its data use and data management practices?

We learned a tonne from carrying out these audits! We know, for instance, that the more specific the request, the more data a company provides. If you ask a company for “all of the personal data you hold about me” you will get less data than if you ask for “all of my personal information, all of my IP data, all of my mousing behaviour data, all of my transaction data, etc.”

Our experiments with citizen data audits also allow us to make claims about how companies define the term “personal data.” Often companies define personal data very narrowly to mean registration information (name, address, phone number, identification number, etc.). This stands in stark contrast to the academic definition of personal data, which is any information that can lead to the identification of an individual person. In the age of big data, that means pretty much any digital traces you produce while logged in. Observations like these allow us to open up larger discussions about corporate data use practices, which helps to build citizen data literacy.

However, we were disappointed to discover that our citizen data audits worked to validate a data regime that is organized around the expropriation of resources from our communities. In my first blog post I explained that the 5 criteria driving data audits are profitability, risk, consent, security and privacy.

Since our audit originated with the law, with technology, and with corporate practices, we ended up using the audit criteria established by businesses and governments to assess corporate data practices. And this meant that we were checking to see if they were using our personal and community resources according to policies and laws that drive an efficient expropriation of those very same resources!

The concept of privacy was particularly difficult to escape. The idea that personal data must be private has been ingrained into all of us, so much so that the notion of pooled data or community data falls outside the popular imagination.

As a result, we felt that our citizen data audits did other people’s data audit work for them. We became watchdogs in the service of government oversight offices. We became the backers of corporate efficiencies. I’ve got nothing personal against watchdogs — they do important work — but what if the laws and policies aren’t worth protecting?

We have struggled greatly with the question of how to generate a conversation that moves beyond established parameters, and that situates our work in the community. With this in mind, we’ve begun to explore alternative approaches to thinking about and carrying out citizen data audits. That’s the subject of the final post in this series.

 

About the author: Dr. Katherine Reilly is Associate Professor in the School of Communication at Simon Fraser University in Vancouver, Canada. She is the recipient of a SSHRC Partnership Grant and an International Development Research Centre grant to explore citizen data audit methodologies alongside Derechos Digitales in Chile, Fundacion Karisma in Colombia, Sula Batsu in Costa Rica, TEDIC in Paraguay, HiperDerecho in Peru, and ObservaTIC in Uruguay.

[blogpost] Thinking Outside the Black-Box: The Case for ‘Algorithmic Sovereignty’ in Social Media

Urbano Reviglio, Ph.D. candidate at the University of Bologna, in collaboration with Claudio Agosti, the brain behind tracking.exposed, just published a new academic article on algorithmic sovereignty in Social Media + Society (SAGE). Find an extended abstract below, and the full paper here.

Every day, algorithms update a profile of “who you are” based on your past preferences, activities, networks and behaviours in order to make future-oriented predictions and suggest news (e.g. Facebook and Twitter), videos (e.g. YouTube), movies (e.g. Netflix), songs (e.g. Spotify), products (e.g. Amazon) and, of course, ads. These algorithms define the boundaries of your Internet experience, affecting, steering and nudging your information consumption, your preferences, and even your personal relations.

Two paradigmatic (and likely most influential) examples illustrate the importance of this process. On Facebook, you encounter around 350 posts on average, prioritized out of roughly 1,500 candidates. In other words, you are exposed to only about 25% of the available information, while roughly 75% remains hidden. It is Facebook’s newsfeed algorithm that is choosing for you. And it is rather good at that. Think also of YouTube: its recommendations already drive more than 70% of the time you spend on the platform, meaning you are mostly “choosing” within a pre-determined set of possibilities. In fact, 90% of the ‘related content’ on the right side of the website is already personalized for you. Yet this process occurs largely beyond your control, and it is mostly based on implicit personalization — behavioural data collected from subconscious activity (i.e. clicks, time spent, etc.) — rather than on deliberate and expressed preferences. Worryingly, this might become the default in future personalization, essentially because you may be satisfied enough not to question the process further. Do you really think the personalization that recommends what you read and watch is indeed the best you could experience?

Personalization is not what mainstream social media platforms make it out to be. There are a number of fundamental assumptions that are nowadays shared by most researchers, and these need clarification. Profiling technologies that allow personalization create a kind of knowledge about you that is inherently probabilistic. Personalization, however, is not exactly ‘personal’. Profiling is indeed a matter of pattern recognition, which is comparable to categorization, generalization and stereotyping. Algorithms cannot produce or detect the complexities of yourself. They can, however, influence your sense of self. As such, profiling algorithms can trivialize your preferences and, at the same time, steer you to conform to the status quo of past actions chosen by ‘past selves’, narrowing your “aspirational self.” They can limit the diversity of information you are exposed to, and they can ultimately perpetuate existing inequalities. In other words, they can limit your informational self-determination. So, how can you fully trust proprietary algorithms that are designed for ‘engagement optimization’ — to hook you to the screen as much as possible — and not explicitly designed for your personal growth and society’s cohesion?

One of the most concerning problems is that personalization algorithms are increasingly ‘addictive by design’. Human behavior can indeed be easily manipulated through priming and conditioning, using rewards and punishments. Algorithms can autonomously explore manipulative strategies that are detrimental to you. For example, they can use techniques such as A/B testing to experiment with various messages until they find the versions that best exploit your vulnerabilities. Compulsion loops are already found in a wide range of social media. Research suggests that such loops can work via variable-rate reinforcement, in which rewards are delivered unpredictably — after n actions, a certain reward is given, like in slot machines. This unpredictability affects the brain’s dopamine pathways in ways that magnify rewards. You think you liked that post… but you may have been manipulated into liking it after several boring posts, delivered with perfect timing. Consider how just dozens of Facebook Likes can reveal highly accurate correlations; hundreds of likes can predict your personality better than your mother can, research suggests. This can easily be exploited — for example, if you are vulnerable to moral outrage. Researchers have found that each word of moral outrage added to a tweet raises the retweet rate by 17%. Algorithms know that, and could feed you the “right” content at the right time.
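As a toy illustration of the variable-rate reinforcement mentioned above (not something from the paper itself), the short sketch below simulates a schedule in which each action has a small, fixed chance of being rewarded, so the gaps between rewards come out irregular and unpredictable; the probability value is an arbitrary assumption.

```python
# Toy sketch of a variable-ratio reward schedule, the pattern the post
# compares to slot machines: rewards arrive after an unpredictable number
# of actions. The reward probability below is an arbitrary illustration.
import random

random.seed(42)
REWARD_PROBABILITY = 0.15  # assumed chance that any single action pays off

def simulate_actions(n_actions):
    """Return the indices of actions (1-based) that happened to be rewarded."""
    return [a for a in range(1, n_actions + 1)
            if random.random() < REWARD_PROBABILITY]

rewards = simulate_actions(50)
gaps = [b - a for a, b in zip([0] + rewards, rewards)]
print("rewarded after actions:", rewards)
print("gaps between rewards  :", gaps)  # irregular gaps = unpredictable payoff
```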

As a matter of fact, personalization systems deeply affect public opinion, and more often than not negatively. For a growing number of academics, activists, policy-makers and citizens, the concern is that social media more generally are downgrading our attention spans, a common base of facts, and the capacity for complexity and nuanced critical thinking, hindering our ability to construct shared agendas to help solve the epochal challenges we all face. This supposedly degraded and degrading capacity for collective action arguably represents “the climate change of culture.” Yet research on the risks posed by social media – and more specifically their personalization systems – is still very contradictory; these risks are very hard to prove and, eventually, to mitigate. In light of the fast-changing media landscape, many studies become rapidly outdated, and this contributes to the broader crisis in the study of algorithms; these are indeed “black-boxed”, which means their functioning is opaque and their interpretability may not even be clear to engineers. Moreover, there are no easy social media alternatives one can join to meet friends and share information. Such alternatives might one day spread, but until then billions of people worldwide have to rely on opaque personalization systems that may ultimately impoverish them. These systems are an essential and increasingly valuable public instrument for mediating information and relations. And considering that they introduce a new form of power of mass behavioral prediction and modification, nowadays concentrated in very few tech companies, there is a clear need to radically tackle these risks and concerns now. But how?

By analyzing the challenges, governance and regulation of personalization, what we argue in this paper is that we as a society need to frame, discuss and ultimately grant all users sovereignty over personalization algorithms. More generally, by ‘algorithmic sovereignty’ in social media we mean the regulation of information filtering and personalization design choices according to democratic principles, to set their scope for private purposes, and to harness their power for the public good. In other words, to open the black-boxed personalization algorithms of (mainstream) social media to citizens and to independent and public institutions. In doing this, we also explore specific experiences, projects and policies that aim to increase users’ agency. Ultimately, we offer a preliminary outline of the basic legal, theoretical, technical and social preconditions for attaining what we define as algorithmic sovereignty. To regain trust between users and platforms, personalization algorithms need to be seen not as a form of legitimate hedonistic subjugation, but as an opportunity for new forms of individual liberation and social awareness. And this can only occur if citizens as well as democratic institutions have the right and the capacity to make self-determined choices about these legally private (but essentially public) personalization systems. As we argue throughout the paper, we believe that such an endeavor is within reach and that public institutions and civil society could and should eventually sustain its realization.

Protesting online: Stefania interviewed by the Dutch Tegenlicht

Only a few months ago, we were still able to take to the streets for the Women’s March or the climate march. Now the streets are empty and activists, except for a few, stay at home. How do you demonstrate in the so-called one-and-a-half-meter society?

Stefania has been interviewed for an article by Tegenlicht / BackLight, the Dutch critical public documentary series, about protesting online. In light of COVID-19, what it means to protest is changing – read the full article here (in Dutch).

[BigDataSur] The Challenge of Decolonizing Big Data through Citizen Data Audits [1/3]

Author: Katherine Reilly, Simon Fraser University, School of Communication

A curious thing happened in Europe after the creation of the GDPR. A whole new wave of data audit companies came into existence to service companies that use personal data. This is because, under the GDPR, private companies must audit their personal data management practices. An entire industry emerged around this requirement. If you enter “GDPR data audit” into Google, you’ll discover article after article covering topics like “the 7 habits of highly effective data managers” and “a checklist for personal data audits.”

Corporate data audits are central to the personal data protection frameworks that have emerged in the past few years. But among citizen groups, and in the community, data audits are very little discussed. The word “audit” is just not very sexy. It brings to mind green eyeshades, piles of ledgers, and a judge-y disposition. Also, audits seem like they might be a tool of datafication and domination. If data colonization “encloses the very substance of life” (Halkort), then wouldn’t data auditing play into these processes?

In these three blog posts, I suggest that this is not necessarily the case. In fact, we need to develop the field of citizen data audits precisely because they offer us an indispensable tool for the decolonization of big data. The posts look at how audits contribute to upholding our current data regimes, at an early attempt to realize a citizen data audit in Peru, and at emerging alternative approaches. The blog posts in the series will be published over the coming weeks:

  1. The Current Reality of Personal Data Audits [find below]

  2. A First Attempt at Citizen Data Audits [link]

  3. Data Stewardship through Citizen Centered Data Audits [link]

 

The Current Reality of Personal Data Audits

Before we can talk about citizen data audits, it is helpful to first introduce the idea of auditing in general, and then unpack the current reality of personal data audits. In this post, I’ll explain what audits are, the dominant approach to data audits in the world right now, and finally, the role that audits play in normalizing the current corporate-focused data regime.

The aim of any audit is to check whether people are carrying out practices according to established standards or criteria that ensure proper, efficient and effective management of resources.

By their nature, audits are twice removed from reality. In one sense, this is because auditors look for evidence of tasks rather than engaging directly in them. An auditor shows up after data has been collected, processed, stored or applied, and they study the processes used, as well as their impacts. They ask questions like “How were these tasks completed, and, were they done properly?”

Auditors are removed from reality in a second sense, because they use standards established by other people. An auditor might ask “Were these tasks done according to corporate policy, professional standards, or the law?” Auditors might gain insights into how policies, standards or laws might be changed, but their main job is to report on compliance with standards set by others.

Because auditors are removed from the reality of data work, and because they focus on compliance, their work can come across as distant, prescribed – and therefore somewhat boring. But when you step back and look at the bigger picture, audits raise many important questions. Who do auditors report to and why? Who sets the standards by which personal data audits are carried out? What processes does a personal data audit enforce? How might audits normalize corporate use of personal data?

We can start to answer these questions by digging into the criteria that currently drive corporate audits of personal data. These can be divided into two main aspects: corporate policy and government regulation.

On the corporate side, audits are driven by two main criteria: risk management and profitability. From a corporate point of view, personal data audits are no exception. Companies want to make sure that personal data doesn’t expose them to liabilities, and that use of this resource is contributing effectively and efficiently to the corporate bottom line.

That means that when they audit their use of personal data, they will check to see whether the costs of warehousing and managing data are worth the reward in terms of efficiencies or returns. They will also check to see whether the use of personal data exposes them to risk, given existing legal requirements, social norms or professional practices. For example, poor data management may expose a company to the risk of being sued, or the risk of alienating its clientele. Companies want to ensure that their internal practices limit exposure to risks that may damage their brand, harm their reputation, incur costs, or undermine productivity.

In sum, corporate data audits are driven by, and respond to, corporate policies, and those policies are organized around ensuring the viability and success of the corporation.

Of course, the success of a corporation does not always align with the well-being of the community. We see this clearly in the world of personal data. Corporate hunger for personal data resources has often come at the expense of personal or community rights.

Because of this, governments insist that companies enforce three additional regulatory data audit criteria: informed consent, personal data security, and personal data privacy.

We can see these criteria reflected clearly in the EU’s General Data Protection Regulation. Under the GDPR, companies must ask customers for permission to access their data, and when they do so, they must provide clear information about how they intend to use that data.

They must also account for the personal data they hold, how it was gathered, from whom, to what end, where it is held, and who accesses it for what business processes. The purpose of these rules is to ensure companies develop clear internal data management policies and practices, and this, in turn, is meant to ensure companies are thinking carefully about how to protect personal privacy and data security. The GDPR requires companies to audit their data management practices on the basis of these criteria.

Taking corporate policy and government regulation together, personal data audits are currently informed by 5 criteria – profitability, risk, consent, security and privacy. What does this tell us about the management of data resources in our current data regime?

In a recent Guardian piece Stephanie Hare pointed out that “the GDPR could have … [made] privacy the default and requir[ed] us to opt in if we want to have our data collected. But this would hurt the ability of governments and companies to know about us and predict and manipulate our behaviour.” Instead, in the current regime, governments accept the central audit criteria of businesses, and on top of this, they establish the minimal protections necessary to ensure a steady flow of personal data to those same corporate actors. This means that the current data regime (at least in the West) privileges the idea that data resides with the individual, and also the idea that corporate success requires access to personal data.

Audits work to enforce the collection of personal data by private companies, by ensuring that companies are efficient, effective and risk averse in the collection of personal data. They also normalize corporate collection of personal data by providing a built-in response to security threats and privacy concerns. When the model fails – when there is a security breach or privacy is disrespected – audits can be used to identify the glitch so that the system can continue its forward march.

And this means that audits can, indeed, serve as tools of datafication and domination. But I don’t think this necessarily needs to be the case. In the next post, I’ll explore what we’ve learned from experimenting with citizen data audits, before turning to the question of how they can contribute to the decolonization of big data in the final post.

 

About the author: Dr. Katherine Reilly is Associate Professor in the School of Communication at Simon Fraser University in Vancouver, Canada. She is the recipient of a SSHRC Partnership Grant and an International Development Research Centre grant to explore citizen data audit methodologies alongside Derechos Digitales in Chile, Fundacion Karisma in Colombia, Sula Batsu in Costa Rica, TEDIC in Paraguay, HiperDerecho in Peru, and ObservaTIC in Uruguay.

[BigDataSur] Data journalism without data: challenges from a Brazilian perspective

Author: Peter Füssy

For the last decade, data journalism has attracted attention from scholars, some of whom have provided distinct definitions in order to understand the changes in journalistic practices. Each definition emphasizes a particular aspect of data journalism, from new forms of collaboration to open-source culture (Coddington, 2014). Yet even among clashing definitions, it is possible to say they all agree that there is no data journalism without data. But which data? Relevant data does not generate itself, and it is usually tied to power, economic, and/or political struggles (De Maeyer et al., 2014). While journalists in the Global North mostly benefit from open government mechanisms for public scrutiny, journalists working in countries with less transparency and weaker democratic traditions still face infrastructural issues when putting together data and journalism (Borges-Rey, 2019; Wright, Zamith & Bebawi, 2019).

In the next paragraphs, I draw from academic research, reports, projects, and my own experience to briefly problematize one of the most recurrent challenges to data journalism in Brazil: access to information. Since relevant data is rarely available immediately, a considerable part of data-driven investigative projects in Brazil relies on the Freedom of Information (FOI) law that forces governments to provide data of public interest. Also known as Access to Information or Right to Information, these acts are an essential tool to increase transparency, accountability, citizens’ agency, and trust. Yet implementation of and compliance with the regulation in Brazil are inefficient at all levels of government (Michener, 2018; Abraji, 2019; Fonseca, 2020; Venturini, 2017).

More than just a bureaucratic issue inherited from years of dictatorship and a lack of competences, this inefficiency is also a political act. As Torres argued, taking Mexico as an example, institutional resistance to transparency is carried out through subtle and non-political actions that diminish data activists’ agency and have the effect of producing or reinforcing inequalities (Torres, 2020). In the case of Brazil, however, recent reports imply that institutional resistance to transparency is not necessarily subtle. It may also be a political flag.

Opacity and Freedom of Information

According to Berliner, the first FOI act was passed in Sweden in 1766, but the recent wave follows the example of the United States’ act from 1966. After the US, there is no clear pattern for adoption; for example, Colombia passed a law in 1985, while the United Kingdom did so only in 2000. FOI acts are more likely to pass when there is a highly competitive domestic political environment, rather than pressure from civil society or international institutions (Berliner, 2014).

Sanctioned in 2011, the Brazilian FOI law came into effect only in 2012. In the first six years, 611.3 thousand requests were filed with the federal government alone (excluding state and municipal bodies). The average of 279 requests per day, or 11 per hour, suggests how eager the population was to decentralise information. Although public authorities often give insufficient responses while claiming that the request was granted, it is possible to say the law was starting to “stick”. Of the total requests, 458.4 thousand (75%) resulted in partial or full access to the requested information (Valente, 2018).

At the beginning of 2019, while president Jair Bolsonaro was making his first international appearance as the Brazilian head of state in Davos, vice president general Hamilton Mourão signed a decree to limit access to information by allowing government employees to declare public data confidential up to the top-secret level, which makes documents unavailable for 25 years (Folha de S.Paulo, 2019). Until then, this could be done only by the president and vice president, ministers of state, commanders of the armed forces and heads of diplomatic missions abroad. Facing a backlash from civil society and lacking the support in Congress needed to uphold the measure, Bolsonaro withdrew the decree a few weeks later. Nonetheless, reports show that problems with FOI requests have grown under his presidency.

Data collected from the Brazilian FOI electronic system by Agência Pública revealed that Federal Government’s denials of requests with the justification of “fishing expedition” increased from 8 in 2018 to 45 in the first year of Bolsonaro’s presidency (Fonseca, 2020). The term “fishing expedition” is pejorative and usually related to secret or non-stated purposes, like using an unrelated investigation or questioning to find evidence to be used against an adversary in a different context. However, according to the Brazilian FOI, the reason behind a request must not be taken into account when deciding to provide information or not.

At the same time, journalists’ perception of difficulties in retrieving information via FOI reached its highest level in 2019, when 89% of the interviewed journalists described issues like answers arriving after the legal deadline, missing information, data in closed formats, and denial of information (Abraji, 2019). In 2013, 60% reported difficulties, and the number dropped to 57% in 2015.

For example, after more than one year in office, Bolsonaro’s presidency still refuses to make public the guest list of his inauguration reception. In addition to the guest list, the government keeps secret more than R$15 million in expenses made with corporate cards of the Presidency and the Vice President’s Office. The information remains confidential even after a Supreme Court decision overturned the secrecy in November last year.

More from less

Despite the challenges, Brazilian journalists are following the quantitative turn in the field and creating innovative data-driven projects. As reported by the Brazilian Association of Investigative Journalism (Abraji), at least 1,289 news stories built on data from FOI requests were published from 2012 to 2019. In 2017, the “Ctrl+X” project, which scraped thousands of lawsuits to expose politicians trying to silence journalists in the courts, won a prize at the Global Editors Network’s Data Journalism Awards.

The following year, G1 won the public choice award with a project that tracked every single murder in the country for a week. The results from the “Violence Monitor” showed a total of 1,195 deaths, one every eight minutes. However, this project did not rely on FOI requests but on an unprecedented collaboration of 230 journalists employed by the biggest media group in Brazil, Globo. They gathered the data from scratch at police stations all over the country to tell the stories of the victims. In addition, G1 partnered with the Universidade de São Paulo for the analysis and launched a campaign on TV and social media so that people could help identify some of the victims.

Despite the lack of resources, freedom, and safety, these projects show that data journalism can be a tool to rebuild trust with audiences. However, activism to break down the resistance to transparency is an even more prominent challenge when opacity seems to be encouraged by institutional actors.

 

About the author

Peter is a journalist trying to explore new media in depth, from everyday digital practices to the undesired consequences of a highly connected environment. After more than 10 years of writing and multimedia reporting for some of the most relevant news outlets in Brazil, he is now a second-year Research Master’s student in Media Studies at the University of Amsterdam.

 

References

Berliner, Daniel. “The political origins of transparency.” The journal of Politics 76.2 (2014): 479-491.

Borges-Rey, Eddy. “Data Journalism in Latin America: Community, Development and Contestation.” Data Journalism in the Global South. Palgrave Macmillan, Cham, 2019. 257-283.

Coddington, Mark. “Clarifying journalism’s quantitative turn: A typology for evaluating data journalism, computational journalism, and computer-assisted reporting.” Digital journalism 3.3 (2015): 331-348.

De Maeyer, Juliette, et al. “Waiting for data journalism: A qualitative assessment of the anecdotal take-up of data journalism in French-speaking Belgium.” Digital journalism 3.3 (2015): 432-446.

Fonseca, Bruno. Governo Bolsonaro acusa cidadãos de “pescarem” dados ao negar pedidos de informação pública. Agência Pública. 6 Feb, 2020. 

Michener, Gregory, Evelyn Contreras, and Irene Niskier. “From opacity to transparency? Evaluating access to information in Brazil five years later.” Revista de Administração Pública 52.4 (2018): 610-629.

Michener, Gregory, et al. “Googling the requester: Identity‐questing and discrimination in public service provision.” Governance (2019).

Valente, Jonas. “LAI: governo federal recebeu mais de 600 mil pedidos de informação”. Agência Brasil. May 16, 2018. 

Venturini, Lilian. “Se transparência é regra, por que é preciso mandar divulgar salários de juízes?”. Nexo Jornal. São Paulo, 3 Sept. 2017.

Wright, Kate, Rodrigo Zamith, and Saba Bebawi. “Data Journalism beyond Majority World Countries: Challenges and Opportunities.” Digital Journalism 7.9 (2019): 1295-1302.

[blog] The true cost of human rights witnessing

Author: Alexandra Elliott – Header image: Troll Patrol India, Amnesty Decoders

Witnessing is widely accepted as an established element of enforcing justice, and the recent increase in the accessibility of big data is revolutionizing this process. Data witnessing can now be conducted by remote actors using digital tools to code large amounts of information – a process exemplified, for instance, by Amnesty International’s Amnesty Decoders. Gray presents an account of the Amnesty Decoders initiative and provides examples of their cases, such as “Decode Darfur” (977), in which volunteers successfully identified the destruction of villages during war by comparing before-and-after satellite imagery. A critical, yet under-discussed, consequence of this type of work is the significant mental toll of engaging with this amount of confronting material. The nature of human rights exposés means witnesses are working with disturbing imagery, often depicting violence and devastation, which can lead to secondary trauma and must be managed accordingly.

This blog post should be read as an overview of existing research into the mental health effects of data witnessing and the initiatives that should be put in place to mitigate them. It concludes by highlighting Berkeley’s Investigations Lab as an example of the effective implementation of protective measures in human rights research. The text below presents, however, only the tip of the iceberg of detailed scholarship, and I recommend turning to the Human Rights Resilience Project for a more thorough inventory.

The Human Rights Resilience Project is an “interdisciplinary research initiative […] working to document, awareness-raising, and the development of culturally-sensitive training programs to promote well-being and resilience among human rights workers” (“Human Rights Resilience Project – NYU School Of Law – CHRGJ”). Whilst not undertaking any human rights witnessing itself, it functions as a toolbox for those who do. It provides an excellent example of bringing the issue to the forefront of discourse, advocating for the psychological risks of engaging in human rights witnessing to receive the attention its severity demands, so that both workers and institutions can prepare and manage accordingly.

Data Witnessing and Mental Health

We have reached a point in research in which the correlation between declining mental health and exposure to confronting material in data witnessing work is undeniable. There is a large collection of papers available which evidence the harmful impact on mental wellbeing within the human rights industry.

Dubberley, Griffin and Bal’s research provides a clear overview of “the impact that viewing traumatic eyewitness media has upon the mental health of staff working for news, human rights and humanitarian organisations” (4). They introduce the notion of a “digital frontline” (5), as online data witnessing relocates the confrontation with graphic, disturbing material previously encountered exclusively in the physical field to an office desk far removed from the scene of the crime. 55% of the humanitarian workers and data witnesses observed in the research viewed distressing eyewitness material at least weekly. Carried along with this shift is the psychological impact of engaging with disturbing content. The effects detected included that workers “developed a negative view of the world, feel isolated, experience flashbacks, nightmares and stress-related medical conditions” (5).

Over the past few years, a range of similar research has been undertaken, of which I present merely a selection, all of it confirming a correlation between human rights witnessing and declining mental health. Knuckey, Satterthwaite, and Brown list human rights work practices that contribute to fluctuating mental states: trauma exposure, a sensation of hopelessness, high standards and self-criticism, and inflexibility towards coping mechanisms. Similarly, Reiter and Koenig discuss the impacts of humanitarian research on workers’ mental health. Flores Morales et al. conducted a study of human rights defenders and journalists in Mexico who are consistently exposed to traumatic content in their work, detecting strong levels of secondary traumatic stress symptoms amongst 36.4% of participants. Finally, in one of the earlier investigations into the concern, Joscelyne et al. surveyed international human rights workers to determine the consequences their work had on their psychological wellbeing. The results showed rates of 19.4% for PTSD and 18.8% for subthreshold PTSD, while depression was present amongst 14.7% of the workers surveyed. Shockingly, these proportions are very similar to those observed amongst combat veterans, reiterating the severity of the matter and emphasising the need for action.

A Call to Action

Several strands of the literature on the relationship between data witnessing and mental health focus on the initiatives currently adopted by organisations to identify, prevent and counteract trauma and depression amongst researchers, or propose new, potentially effective strategies.

Satterthwaite et al. is an example of a study that aims to map established techniques for recognizing and reacting to mental health concerns within human rights work. It ultimately concludes that organisations’ current action is weak, and suggests targeted training programmes and further academic discourse. Observations of negligence appear to be a trend, with Dubberley et al. also reporting a lack of protective processes among the majority of organisations studied. In what is dubbed a “tough up or get out” culture (7), humanitarian efforts deny proper recognition of the effects of trauma upon their researchers and thus offer no support or compensation. Additionally, new employees are not told how graphic their daily work material will be and are consequently inadequately prepared.

Acknowledging this gap in current support structures, academics have sought to develop strategies for detecting, preventing and reducing declining mental health amongst data witnesses. For instance, Reiter and Koenig’s “Challenges and Strategies for Researching Trauma” describes protective techniques that aim to strengthen resilience, e.g. explicitly acknowledging the psychological consequences and subsequently fostering a supportive workplace community.

Academics also urge the need for tools for self-care. Distinct from the pampering sessions and beauty treatments commonly associated with the term, here self-care practices are put to use to strengthen mental health. Pyles (2018) promotes self-care within the work of data witnessing for its ability to “cultivate the conditions that might allow them to feel more connected to themselves, their clients, colleagues and communities” (xix). This sense of community and grounding within a greater environment is important to counteract feelings of isolation. Kanter and Sherman also encourage human rights organisations to adopt a “culture of self-care” to mitigate the risk of mental burnout, and Pigni’s book “The Idealist’s Survival Kit” was written to provide human rights researchers and witnesses with an arsenal of 75 self-care techniques.

As mentioned by Satterthwaite et al., it is important to acknowledge that the lack of mitigating practices in place may well be due to a lack of funding rather than an act of negligence. Dependency on external fundraisers introduces a complex network in which responsibility is distributed amongst a range of actors with varying motivations.

Berkeley: Leading by Example

The tendency for human rights organisations to neglect their workers’ mental wellbeing is fortunately not universal. There are instances of hiring counselors and enforcing regular breaks and rotations (Dubberley et al.), and one standout initiative is the University of California, Berkeley’s Human Rights Center Investigations Lab.

Following a similar format to the Amnesty Decoders, workers at the Investigations Lab “use social media and other publicly available, internet-based sources to develop evidence for advocacy and legal accountability” (“HRC Investigations Lab | Human Rights Center”). What sets the Lab apart is its dedication to “resiliency resources” – a programme of training and tools aiming to support the witnesses’ wellbeing. Upon orientation to the lab, workers receive resiliency training with small practical tips to avoid secondary trauma; “use post-its to block out graphic material when viewing a video repeatedly” (“Resiliency Resources | Human Rights Center”), for example. Additionally, they are encouraged to check in regularly with an allocated resiliency manager.

Concluding Thoughts

The material human rights witnesses engage with is horrific, and the protection of their mental health must be prioritized by the institutions for which they work. However, it is also important to remember the necessity of their work in detecting human rights violations and war crimes. The role of data witnessing is admirable and cannot simply be omitted. Therefore, the way forward is for human rights institutions to guarantee a support network of education, tools and community, so that witnesses can continue to strengthen humanitarian action without detrimental personal consequences.

About the author

Alexandra grew up in Sydney, Australia before moving to England to complete her Bachelor’s degree at Warwick University. She is currently undertaking a Research Master’s in Media Studies at the University of Amsterdam. It is through this course that she became involved with the Good Data tutorial and the DATACTIVE project.

References

Dubberley, Sam, Elizabeth Griffin, and Haluk Mert Bal. “Making secondary trauma a primary issue: A study of eyewitness media and vicarious trauma on the digital frontline.” Eyewitness Media Hub (2015).

Flores Morales, Rogelio et al. “Estrés Traumático Secundario (ETS) En Periodistas Mexicanos Y Defensores De Derechos Humanos”. Summa Psicológica, vol 13, no. 1, 2016, pp. 101-111. Summa Psicologica UST, doi:10.18774/448x.2016.13.290.

Gray, Jonathan. “Data Witnessing: Attending To Injustice With Data In Amnesty International’S Decoders Project”. Information, Communication & Society, vol 22, no. 7, 2019, pp. 971-991. Informa UK Limited, doi:10.1080/1369118x.2019.1573915.

“HRC Investigations Lab | Human Rights Center”. Humanrights.Berkeley.Edu, https://humanrights.berkeley.edu/students/hrc-investigations-lab.

“Human Rights Resilience Project – NYU School Of Law – CHRGJ”. Chrgj.Org, https://chrgj.org/focus-areas/human-rights-resilience-project/.

Joscelyne, Amy et al. “Mental Health Functioning In The Human Rights Field: Findings From An International Internet-Based Survey”. PLOS ONE, vol 10, no. 12, 2015, p. e0145188. Public Library Of Science (Plos), doi:10.1371/journal.pone.0145188.

Kanter, Beth, and Aliza Sherman. “Updating The Nonprofit Work Ethic”. Stanford Social Innovation Review, 2016, https://ssir.org/articles/entry/updating_the_nonprofit_work_ethic?utm_source=Enews&utm_medium=Email&utm_campaign=SSIR_Now&utm_content=Title

Knuckey, Sarah, Margaret Satterthwaite, and Adam Brown. “Trauma, depression, and burnout in the human rights field: Identifying barriers and pathways to resilient advocacy.” HRLR Online 2 (2018): 267.

Pigni, Alessandra. The Idealist’s Survival Kit: 75 Simple Ways to Avoid Burnout. Parallax Press, 2016.

Pyles, Loretta. Healing justice: Holistic self-care for change makers. Oxford University Press, 2018.

Reiter, Keramet, and Alexa Koenig. “Reiter And Koenig On Researching Trauma”. Www.Palgrave.Com, 2017, https://www.palgrave.com/gp/blogs/social-sciences/reiter-and-koenig-on-researching-trauma.

“Resiliency Resources | Human Rights Center”. Humanrights.Berkeley.Edu, https://humanrights.berkeley.edu/programs-projects/tech-human-rights-program/investigations-lab/resiliency-resources.

Satterthwaite, Margaret, et al. “From a Culture of Unwellness to Sustainable Advocacy: Organizational Responses to Mental Health Risks in the Human Rights Field.” S. Cal. Rev. L. & Soc. Just. 28 (2019): 443.

Image References

Berkeley. “Human Rights Investigations Lab: Where Facts Matter”. Human Rights Centre, https://humanrights.berkeley.edu/programs-projects/tech/investigations-lab.

Perpetual Media Group. “14 Things Marketers Should Never Do On Twitter”. Perpetual Media Group, https://www.perpetualmediagroup.ca/14-things-marketers-should-never-do-on-twitter/.

[blog] Catching a Glimpse of the Elusive “Feminist Drone”

Author: Erinne Paisley

Introduction

Unmanned Aerial Vehicles (UAVs, or “drones”) are increasingly being used for military, governmental, commercial and personal purposes (Feigenbaum 267; Estrada 100). This rapid increase in drone use raises new questions about how the technology reinforces certain social and political inequalities through its structure, function, and use. Scholars in the growing field of feminist internet studies are dedicated to understanding which of society’s inequities are reproduced in new technologies, and which can be reduced through them. However, a clear picture of what a “feminist drone” can look like remains relatively elusive.

To paint a picture of how this new media form can be used to decrease gendered inequalities, we can look at two previous feminist drone projects: Droncita (“Dronette”) in Mexico and the “Abortion Drone” in Poland. Each of these UAV projects worked in its own way to expose the inequalities that are reinforced through typical drone use, and to counteract those forces by using the technology to pursue feminist agendas. Droncita addressed spatial inequalities, while the “Abortion Drone” aimed to expose and counteract legal inequalities. Together these cases offer a glimpse of the future of feminist drones and of the expanding field of feminist internet studies that supports them.

Mexico’s Droncita (Dronette)

Discrimination against women includes their exclusion from physical spaces, an exclusion that intersects with further forms of discrimination, including racial and economic ones. It ranges from workplaces to specific areas of cities with high risks of sexual assault and other forms of violence (Spain 137). Operating from the skies, drones can use their small aerial cameras to literally offer new vantage points for viewing and recording our political and social world. In this way, they can reimagine some of these spatially exclusionary forms of discrimination – as we can see with Droncita.

Droncita made her debut in Ecatepec, some 20 km from Mexico City and the municipality of the greater metropolitan area with the highest rate of deaths presumed to be murders. In 2016, feminist protestors filled the main square in an attempt to draw attention to the state’s inadequate reaction to the increasing number of female deaths in the country. Working together with white paint, the activists covered the square’s ground. The message they were creating was only viewable by one activist in particular: Droncita.

The drone was created by the Rexiste collective, a project that began in opposition to the election of President Peña Nieto. Above the feminist activists, the drone whirred, recording the emerging message. From Droncita’s point of view, the white paint clearly spells out: “Femicide State”. By recording this message from the sky’s unclaimed public space, Droncita first draws attention, by contrast, to the gendered space of Ecatepec below: the recording highlights that the feminist protestors are still not fully free to create their message safely in that space. At the same time, Droncita reclaims the space alongside the activists below, completing their message and illustrating its take-over of the square.

Femicide is defined as “the killing of a woman or girl, in particular by a man and on account of her gender” (Oxford University Dictionary).

Through these actions, Droncita uses “digital ethnography”, the linking of digital space with actual space, to intervene (Estrada 104). Droncita turns aerial space into public space, making violence against women – and the physical reality in which it occurs – more visible, ultimately holding the Mexican government accountable for its role in creating a space where women feel unsafe and face omissions of justice.

Poland’s “Abortion Drone”

Gendered and intersectional discrimination is also upheld globally through law. One of the most significant and ongoing examples is the legal restriction of women’s access to safe and affordable abortions. Women’s right to make decisions over their own bodies includes decisions about abortion, and yet this form of healthcare is still illegal in many countries. As of 2020, abortion is fully illegal in 27 countries, even when the pregnancy is the result of rape or incest. These legal barriers do not mean that women stop getting abortions; instead, they are forced into expensive and unsafe procedures. According to the World Health Organization, approximately 25 million unsafe abortions occur annually worldwide, and over 7 million women in developing countries are admitted to hospitals as a result of this lack of safe access.

This is where the “Abortion Drone” comes in. In 2015, on the German side of the border facing Słubice, Poland, this drone prepared to make its first trip. On one side of the river, a collection of women’s rights organizations and doctors prepared to fly the “quadcopter” across; on the other side waited pro-life protestors, journalists, and two women ready to swallow the abortion-inducing pills attached to the drone.

Although the journey lasted only 60 seconds, the goal of the “Abortion Drone” was far-reaching. Within Poland, abortion is still illegal unless a woman’s life is categorized as being “in danger” or there is “evidence” of rape, incest or severe fetal abnormalities (O’Neil 2015). Because of these barriers, over 50,000 “underground abortions” are performed each year – often with outdated and dangerous tools, and for thousands of dollars, limiting the option to those who can afford it. Poland’s legal barriers to women’s healthcare are not only a threat to the safety of those within the country; they also stand for the wider legal struggles of millions of women globally.

Women on Waves, the collective of activists and doctors behind the flight, explains: “The medicines used for a medical abortion, mifepristone and misoprostol, have been on the list of essential medicines of the World Health Organization since 2005 and are available in Germany and almost all other European countries.”

As the “Abortion Drone” takes off on its inaugural flight, there is nothing those on the Polish side can do to legally stop its journey. The UAV weighs under 5 kg and is not used for commercial purposes. Because of these features, the new technology is able both to make visible the legal barriers facing women in Poland and to counteract them.

The drone lands safely on the Polish side and the women ceremoniously swallow the pills. Soon after, the activists operating the drone on the German side have their equipment confiscated, but the drone’s work has already succeeded. The “Abortion Drone” has illuminated the legal and sexist inequalities that surround women’s access to healthcare – and temporarily counteracted them.

Feminist Drones in the Future

Droncita and the “Abortion Drone” illustrate the potential of feminist drones to illuminate and counteract the spatial and legal inequalities that women and minorities still face today. That potential extends well beyond these two cases. As this article is published, feminist internet scholars are working to imagine other creative ways this new medium can join the global fight for equality. It is fair to say that this new member of the 21st-century feminist movement is becoming less elusive; in fact, if you look up you might just catch a glimpse of it.

About the author

Erinne Paisley is currently a Research Master’s student in Media Studies at the University of Amsterdam. She completed her BA at the University of Toronto in Peace, Conflict and Justice & Book and Media Studies, and is the author of three books on social media activism for youth with Orca Book Publishing.

Works Cited

Estrada, Marcela Suarez. “Feminist Politics, Drones and the Fight against the ‘Femicide State’ in Mexico.” International Journal of Gender, Science and Technology, vol. 9, no. 2, pp. 99–117.

Feigenbaum, Anna. “From Cyborg Feminism to Drone Feminism: Remembering Women’s Anti-Nuclear Activisms.” Feminist Theory, vol. 16, no. 3, Dec. 2015, pp. 265–88. DOI.org (Crossref), doi:10.1177/1464700115604132.

Feminist Internet. Feminist Internet: About. https://feministinternet.com/about/. Accessed 26 Feb. 2020.

Jones, Sam. “Paint Remover: Mexico Activists Attempt to Drone out Beleaguered President.” The Guardian, 15 Oct. 2015, https://www.theguardian.com/global-development/2015/oct/15/mexico-droncita-rexiste-collective-president-enrique-pena-nieto.

O’Neil, Lauren. “‘Abortion Drone’ Delivers Pregnancy-Terminating Pills to Women in Poland.” CBC News, 29 June 2015, https://www.cbc.ca/news/trending/abortion-drone-delivers-medication-to-women-in-poland-1.3132284.

Oxford University Dictionary. “Femicide.” Lexico, https://www.lexico.com/en/definition/femicide. Accessed 26 Feb. 2020.

Spain, Daphne. “Gendered Spaces and Women’s Status.” Sociological Theory, vol. 11, no. 2, July 1993, pp. 137–51.

Women on Waves. Abortion Drone; First Flight to Poland. https://www.womenonwaves.org/en/page/5636/abortion-drone–first-flight-to-poland. Accessed 26 Feb. 2020.

World Health Organization. Preventing Unsafe Abortion. 26 June 2019, https://www.who.int/news-room/fact-sheets/detail/preventing-unsafe-abortion.

World Population Review. Countries Where Abortion Is Illegal 2020. http://worldpopulationreview.com/countries/countries-where-abortion-is-illegal/. Accessed 26 Feb. 2020.

[blog] Show me the numbers: a case of impact communication in FLOSS

Author: Jeroen de Vos, header image by Ford Foundation

This blog post explores the potential of repurposing impact assessment tools to address funding problems in Free and Libre Open Source Software (FLOSS) by making explicit the role such software plays in crucial public digital infrastructure. Two key concepts frame this exploration. The first is FLOSS itself and the central role it plays in providing a common software infrastructure used by public and private organisations as well as civil society at large. The second is impact assessment as a strategy to understand, account for and communicate the results of one’s efforts beyond merely financial numbers.

‘Money talk is kind of a taboo in the F[L]OSS community’, one respondent told me in an interview I recently conducted at CCC’s 36C3. The talk he had just given outlined some tentative revenue models for making software development activities more sustainable – and it attracted a larger-than-expected audience with interesting follow-up questions. FLOSS development draws heavily on the internal motivation of developers or a developer community, raising recurring questions of sustainability when it relies on volunteered time that could be spent differently. The complexity of this situation should not be underestimated. The 2016 Ford Foundation report Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure (Eghbal) contextualizes some of the common problems in open-source software development – for instance the lack of appreciation of invisible labour, the emotional burden of maintaining a popular project once started, or the constant struggle for motivation while being structurally un- or underfunded.

The report draws on the metaphor of FLOSS as infrastructure: readily available to everyone, but also in need of maintenance. The metaphor has its limitations, but it works well to illustrate the point. Just as physical infrastructure supports flows of ideas, goods and people, FLOSS operates at every level of digital infrastructure – whether we are talking about the NTP protocol that synchronizes clocks across the internet, GnuPG (encryption software enabling secure communication and data sharing) or MySQL (a database system that quickly became a go-to standard for information storage and retrieval). Another commonality: as long as the infrastructure functions, its underlying support systems remain effectively invisible. Up until the point of failure, it goes unseen to what extent private and public goods, services and communication rely on these software packages. Only at failure does the dependency become painfully explicit.

The best-known recent example of such a failure is the so-called Heartbleed bug. The FLOSS OpenSSL library implements the most widely used protocols for encrypting web traffic. Due to a bug that crept into the code in 2011, attackers could extract information from connections that should have been encrypted, which left large parts of the online infrastructure unsafe, including services run by Google, Amazon and many others. The issue drew attention to the OpenSSL developers’ under-capacity: only one person worked on the project full time, for roughly a third of the salary of colleagues at commercial counterparts. This is where impact assessment tools might come into play – rather than relying on controversies to make visible how widely embedded, and depended upon, particular pieces of software are, why not use impact assessment as a way to understand public relevance?
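To make the mechanism concrete, below is a minimal sketch – hypothetical Python, not the actual OpenSSL C code – of the kind of missing bounds check behind Heartbleed: the handler trusts the length the client declares instead of the number of bytes actually received, so a malicious request can read adjacent memory.

```python
# A minimal, hypothetical simulation of a Heartbleed-style bug.
# This is NOT OpenSSL code; it only illustrates the missing bounds check.

# Pretend this bytearray is server memory: the 4-byte heartbeat payload
# sits right next to unrelated secret data.
MEMORY = bytearray(b"PING" + b"SECRET-PRIVATE-KEY-MATERIAL")
PAYLOAD_OFFSET, PAYLOAD_LEN = 0, 4  # what the client actually sent


def heartbeat_vulnerable(declared_len: int) -> bytes:
    # BUG: trusts the client's declared length and never checks it against
    # PAYLOAD_LEN, so adjacent memory is echoed back to the client.
    return bytes(MEMORY[PAYLOAD_OFFSET:PAYLOAD_OFFSET + declared_len])


def heartbeat_fixed(declared_len: int) -> bytes:
    # FIX: discard requests whose declared length exceeds the bytes received.
    if declared_len > PAYLOAD_LEN:
        return b""
    return bytes(MEMORY[PAYLOAD_OFFSET:PAYLOAD_OFFSET + declared_len])


if __name__ == "__main__":
    print(heartbeat_vulnerable(31))  # leaks b'PINGSECRET-PRIVATE-KEY-MATERIAL'
    print(heartbeat_fixed(31))       # returns b''
```

The actual fix merged into OpenSSL was similarly small – discarding heartbeat requests whose declared payload length exceeds the record actually received – which underlines how much infrastructure hinged on a handful of under-resourced maintainers.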

Conducting impact assessments can help communicate the necessity of maintenance by making visible how deeply FLOSS packages are embedded – whether at the level of a language, an operating system or a protocol. To briefly contextualize: impact assessment grew out of changing management needs and has been adopted in the organisation of ‘soft output’, whether in policymaking or social entrepreneurship. It is an interventionist tool for defining qualitative outcomes together with quantitative proxies, so that implementation results can be understood in relation to the desired outcomes described in a theory of change. It helps both to evaluate the social, technological, economic, environmental and political value created, and to make insightful the extent to which obsolescence would disrupt existing public digital infrastructure.
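As a purely hypothetical illustration of what that pairing of qualitative outcomes and quantitative proxies might look like for a FLOSS project, consider the sketch below; every indicator name and number is invented for the example, and a real assessment would draw such figures from package indexes, deployment surveys or issue trackers.

```python
# Hypothetical sketch: pairing qualitative outcomes from a FLOSS project's
# theory of change with quantitative proxies. All names and numbers are
# invented for illustration.
from dataclasses import dataclass


@dataclass
class Indicator:
    outcome: str   # qualitative outcome from the theory of change
    proxy: str     # measurable stand-in for that outcome
    value: float   # observed value
    target: float  # value the project aims for

    def attainment(self) -> float:
        # Share of the target reached (assumes "higher is better" proxies).
        return self.value / self.target if self.target else 0.0


indicators = [
    Indicator("Package is embedded in public digital infrastructure",
              "reverse dependencies in a distribution's package index", 1240, 1000),
    Indicator("Maintenance burden is shared beyond a single volunteer",
              "active maintainers over the last 12 months", 3, 5),
    Indicator("Public bodies rely on the package in production",
              "government or education deployments reported in a survey", 42, 60),
]

for ind in indicators:
    print(f"{ind.outcome}: {ind.proxy} = {ind.value:g} "
          f"({ind.attainment():.0%} of target)")
```

Framed this way, the same numbers a funded project would put into a reporting deliverable can also be published proactively, to argue for the public relevance of a project that is not yet funded.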

Without going into too much detail, it should be mentioned that impact assessment has already made its introduction as part of reporting deliverables to funders, where relevant. Part of this exercise, however, is to instrumentalize impact assessment not only for the (private) reporting of projects that are already funded, but also for the (public) communication of FLOSS impact, especially for projects without the necessary revenue streams in place. Needless to say, this output is only one step in the process of making crucial FLOSS more sustainable, but it is an important one: assessment output might help tap into public or private sponsorship, establish new collaborations with governments, educators and businesses alike, and open the way to other new and exciting funding models.

This piece is meant as a conversation starter: do you already know of existing strategies to help communicate FLOSS impact, or are you involved in creating alternative business models for for-good public data infrastructure? Ideas and comments are welcome. Email: jeroen@data-activism.net

A short disclaimer: I have been working with social enterprises on market research and impact-first business models, and I have been mulling over the crossover between social entrepreneurship and (FLOSS) activism – their common struggle for sustainability, their reliance on informal networks or communities of action, and their attempts to make social change either from within or from the outside. This blog post is an attempt to think social entrepreneurship and data activism together through a use case: impact assessment for FLOSS.

References:

Eghbal, N. (2016). Roads and bridges: The unseen labor behind our digital infrastructure. Ford Foundation.