
Why we won’t be at APC 2018

In October 2018, the Amsterdam Privacy Conference (APC) will be back at the University of Amsterdam. Two DATACTIVE project team members, Stefania (Principal Investigator) and Becky (PhD candidate), enthusiastically supported the conference as coordinators of the ‘Digital Society and Surveillance’ theme. The Data Justice Lab at Cardiff University submitted a panel proposal, which was successfully included. Regretfully, neither group will take part in the conference: DATACTIVE and the Data Justice Lab have decided to withdraw over the participation of the US-based software company Palantir as one of the APC’s Platinum Sponsors.

Our decision to withdraw stems from an active refusal to legitimize companies accused of enabling human rights abuses, and a concern with the lack of transparency surrounding sponsorship.

Palantir is a company specializing in big data analytics, which develops technologies for the military, law enforcement and border control. The deployment of Palantir’s technologies has raised widespread concern among civil liberties and human rights advocates. Reporting shows that, in the United States, Palantir has played an important role in enabling the efforts of ICE (Immigration and Customs Enforcement) to identify, detain, and deport undocumented immigrants, refugees, and asylum seekers. This has resulted in the indefinite detention of thousands of children who have been separated from their parents. This indefensible policy has come under strong criticism from the United Nations and prompted an alliance of technology workers and affected communities to call – so far, unsuccessfully – for Palantir to cancel its contracts with ICE.

We feel that providing Palantir with a platform, as a sponsor of a prominent academic conference on privacy, significantly undermines efforts to resist the deployment of military-grade surveillance against migrants and marginalized communities already affected by abusive policing. 

Because we have organized conferences ourselves, we believe transparency in sponsorship agreements is key. While we praise the APC organizing committee for committing to full transparency, we were not informed of sponsorship agreements until the very last minute. The APC Sponsors page, in addition, was only populated after the participant registration deadline. As conference coordinators and prospective participants, we feel that we were not given the chance to make an informed choice about our contribution.

Sponsorship concerns are not a new issue: the very same controversy, around the involvement of this very same company (as well as others), emerged during the 2015 edition of APC. Though we acknowledge the complexity of corporate sponsorship, we note that other prominent tech policy conferences, such as the Computers, Privacy and Data Protection (CPDP) conference, have recently stopped accepting sponsorship from Palantir. We thus believe this is a good moment for a larger discussion about how conferences should be organized in the future.

Academia, and especially publicly funded universities, needs to consider its role in efforts to neutralize or undermine human rights concerns. Such considerations are particularly pertinent in the context of what has been described as the increasing neoliberalization of higher education, in which there is significant pressure to attract and pursue funding from different sources. As academics and as citizens, we will increasingly be asked to make choices of this kind. Hence, we believe it is time to set out a clear set of principles for sponsorship going forward.

 

Amsterdam and Cardiff, 19 September 2018

Stefania Milan and Becky Kazansky (DATACTIVE) & Lina Dencik, Arne Hintz, Joanna Redden, Fieke Jansen (Data Justice Lab)

Data Colonialism – the first article of the Special Issue on “Big Data from the South” is out

Photo by London School of Economics Library and Political Science

Nick Couldry and Ulises A. Mejias reframe the Data from the South debate within the context of modern-day colonialism: data colonialism, an alarming stage in which human life is “appropriated through data” and, eventually, “capitalized without limit”.

This essay marks the beginning of a series of articles in a special issue on Big Data from the South, edited by Stefania Milan and Emiliano Treré and published in the journal Television & New Media. The article will be freely accessible for the first month, so we encourage you to put it high up on your to-read list.

The special issue promises interesting takes and approaches from renowned scholars and experts in the field, such as Angela Daly and Monique Mann, Payal Arora, Stefania Milan and Emiliano Treré, Jean-Marie Chenou and Carolina Cepeda, Paola Ricaurte Quijano, Jacobo Najera and Jesús Robles Maloof, with a special commentary by Anita Say Chan. Stay tuned for our announcements of these articles as they come out.

Welcome to DATACTIVE’s spinoff ALEX! An interview with fbtrex Lead Developer Claudio Agosti

by Tu Quynh Hoang and Stefania Milan

DATACTIVE is proud to announce that its spin-off project ALEX has been awarded a Proof of Concept grant from the European Research Council. ALEX, which stands for “ALgorithms Exposed (ALEX). Investigating Automated Personalization and Filtering for Research and Activism”, aims at unmasking the functioning of personalization algorithms on social media platforms, initially taking Facebook as a test case. ALEX marks the engagement of DATACTIVE with “data activism in practice”, that is to say, turning data into a point of intervention in society.

To mark the occasion, we publish an interview with Claudio Agosti, DATACTIVE Research Associate and Lead Developer of the facebook.tracking.exposed browser extension (fbtrex), whose open-source code is at the core of the ALEX project. Claudio was interviewed by DATACTIVE Principal Investigator Stefania Milan at the Internet Freedom Festival in Valencia, Spain, in relation to a project on content regulation on/by platforms.

Claudio (also known as vecna) is a self-taught technician in digital security. With the internet gradually becoming a central agent in the political debate, he moved from corporate security services to the defence of human rights in the digital sphere. Currently, he is exploring the influence of algorithms on society. Claudio is the coordinator of the free software projects behind https://tracking.exposed and a Founding Member and Vice-President of the Hermes Center for Transparency and Digital Human Rights.

Stefania: Is the spread of fake news predominantly a technical or social problem?

Claudio: It is a social problem in the sense that the lack of critical judgment in individuals creates the conditions for fake news or misinformation to spread. However, through technology, the dissemination of misinformation is much faster and can scale up. The problem we are facing now is that when the costs of spreading content drop, the possibility for an individual to deliver a successful information operation (or infops, a term I find more accurate than propaganda) is higher. However, it isn’t true that people lack critical judgment in absolute terms. At a personal level, one can only be knowledgeable about a limited range of subjects, but the information we receive is very diverse and, most of the time, outside our domain of expertise. As social media users and information consumers, we should have a way to validate that information. I wonder: would we even know how to validate it on our own? This does not exist in mainstream news media either. It is possible, for example, on Wikipedia, but anywhere else the way information is spread implies that it is true on its own. A news report, a blog post or a status update on social media does not contain any information that helps validation. All in all, I think fake news is simultaneously a technical and a political problem, because those who create and spread information have a responsibility towards user expectations, and this also shapes users’ vulnerability to infops.

Stefania: As a developer, what is your main goal with the facebook.tracking.exposed browser extension?

Claudio: At the moment we don’t have the tools to assess responsibility with respect to infops. If we say that fake news is a social problem because people are gullible, we put the responsibility on users/readers. But it’s also a problem of those publishing the information, who allow themselves to publish incorrect information because they will hardly be held accountable. According to some observers, social media platforms such as Facebook are to be blamed for the spread of misinformation. We have three actors: the user/reader, the publisher, and the platform. With facebook.tracking.exposed, I’m trying to collect actual data that allows us to reflect on where the responsibilities lie. For example, sometimes Facebook is thought to be responsible when in fact it is the responsibility of the content publisher. And sometimes the publishers are to be blamed, but are not legally responsible. We want to collect actual data that can help investigate these assumptions. We do so from an external, neutral position.

Stefania: Based on your studies of the spread of information on social media during the recent elections in Argentina and Italy, can you tell us what the role of platforms is, and of Facebook in particular?

Claudio: In the analyses we did in Argentina and Italy, we realized that there are two accountable actors: the publisher and the platform. Some of the publishers are actually spamming users’ timelines, as they are producing too many posts per day. I find it hard to believe that they are producing quality content that way. They just aim at occupying users’ timelines to exploit a few seconds of their attention. In my opinion, this is to be considered spam. What we also found is that Facebook’s algorithms are completely arbitrary in deciding what a user is or is not going to see. It’s frightening when we consider that a person who displays some kind of deviant behavior, such as reading and sharing only fascist or racist content, will keep being exposed to even less diverse content. From our investigations of social media content during two heated election campaigns, we have evidence that if a person expresses populist or fascist behavior, the platform is designed to show her less diverse information in comparison to other users, and that can only reinforce her position. We can also argue that the information experience of that person is of lower quality, assuming that maximum information exposure is always to be preferred.

Stefania: So what can users do to fix this problem? 

Claudio: I think users should be empowered to run their own algorithms, and they should have better tools at their disposal to select the sources of their information diets. This also has to become a task for information publishers. Although everybody on social media is both a publisher and a consumer, people who publish as their daily job bear even more responsibility. For example, they should create much more metadata to go along with information, so as to permit the system to better filter and categorize content. Users, on the other hand, should have these tools in hand. When we don’t have that set of metadata, and thus the possibility to define our own algorithm, we have to rely on Facebook’s algorithms. But Facebook’s algorithms implicitly promote Facebook’s agenda and its capitalist imperative of maximizing users’ attention and engagement. For users to have the possibility of defining their own algorithms, we should first of all create the need and the interest to do so, by showing how much of the algorithm is the platform’s agenda and how it can really influence our perception of reality. That is what I’m doing now: collecting evidence about these problems and trying to explain them to a broader audience, raising awareness amongst social media users.

Stefania: Do you think we should involve the government in the process? From your perspective as a software developer, do you think we need more regulation?

Claudio: Regulation is really key, because it’s important to keep corporations in check. But I’m afraid that, among other things, there is a misunderstanding in making regulations that seem to have direct benefits for people’s lives but might, for example, end up limiting some of the key features of open source software and its distribution. Therefore I’m quite skeptical. I have to say that high-level regulations like the General Data Protection Regulation do not try to regulate the technology but rather its effects, and in particular data usage. They are quite abstract and distant from the technology itself. If regulators want to tell companies what to do and what not to do, I’m afraid that in the democratic competition of the technical field the probability of making mistakes is higher. On the other hand, if we just regulate what users/consumers produce, we would end up reinforcing the goals of the data corporations even more. So far, regulations have in fact been exploited by the biggest fish in the market. In this game we can distinguish three political entities: users, companies, and governments. In retrospect, we see that there have been cases where companies have helped citizens against governments and, in other cases, governments have helped citizens against companies. I hope we can aggregate users and civil society organizations around our project, because that is the political entity most in need of guidance and support.

Stefania: So the solution is ultimately in users?

Claudio: The problem is complex, thus the solution can’t be found in one of the three entities only. With ALEX we will have the opportunity to re-use our data under policies we determine, and therefore to try to produce features which can, at least, offer a new social imaginary.

First of all, we aim at promoting diversity. Fbtrex will provide users with tools for comparing their social media timelines to those of other users, based on mutual sharing agreements that put the users, rather than the company, in the driver’s seat. The goal is to involve and compare a diverse group of users and their timelines across the globe. In so doing, we empower users to understand what is hidden from them on a given topic. Targeted communication and user-defined grouping, as implemented on most social media, lead to a fragmentation of knowledge. Filtered interactions confirming a user’s position have been complicit in this fragmentation. Our approach doesn’t intend to solve these technocratic subterfuges with further technological fixes, but to let users explore the diversity.

In fact, the fragmentation of information and individuals produced by social media has made it even more difficult for users to relate to problems far removed from their reality. How do you understand the problems of migrants, for example, if you have never been away from home yourself and you don’t spend time in their company? To counter this effect, thanks to the funding of the European Research Council, we will work on an advanced functionality which will… turn the so-called filter bubbles against themselves, so to speak.

Secondly, we want to support delegation and fact-checking, enabling third-party researchers to play a role in the process. The data mined by fbtrex will be anonymized and provided to selected third-party researchers, either individuals or collectives. These will be enabled to contextualize the findings, combine them with other data and complement them with data obtained through other social science research methods, such as focus groups. But, thanks to the innovative data reuse protocols we will devise, at any given moment users, as data producers, will have a say as to whether and how they want to volunteer their data. We will also work to create trusted relationships and networks with researchers and users.

In conclusion, if users want to really be free, they have to be empowered to exercise their freedom. This means: they have to own their own data, run their own algorithms, and understand the political implications behind technological decision-making. To resort to a metaphor, this is exactly the difference between dictatorship and democracy: you can believe or accept that someone will do things for your own good, as in a dictatorship, or you can decide to assume your share of responsibility, taking things into your own hands and trying to do what is best for you while respecting others, which is exactly what democracy teaches us.

***

ALEX is a joint effort by Claudio Agosti, Davide Beraldo, Jeroen de Vos and Stefania Milan.

See more: the news in Dutch, the press release by the ERC, our project featured in the highlights of the call

Stay tuned for details.

The new website https://algorithms.exposed will go live soon!

 

Stefania at AlgoSov Summit in Copenhagen, 8 September

On the 8th of September Stefania will give a talk at the Algorithmic Sovereignty Summit in Copenhagen, in the framework of the TechFestival, a festival “to find human answers to the big questions of technology”.

The Summit is an initiative of Jaromil Rojo from Dyne.org, who also sits on DATACTIVE’s Ethics Advisory Board. The summit kickstarts the European Observatory on Algorithmic Sovereignty.

DATACTIVE welcomes a new member to the team

DATACTIVE welcomes its newest addition, Lara AlMalakeh, who joins the team as Managing Editor of the project’s three blogs.

Lara has just obtained an MA in Comparative Cultural Analysis from the University of Amsterdam. Before that, she obtained a postgraduate qualification in Principles and Practice of Translation from City University London. Lara wears many hats: she initially studied Fine Arts at Damascus University and graduated with a specialization in oil painting. She then built a career in administration spanning over 12 years in Dubai, UAE. During that time, she joined Global Voices as the Arabic language editor and started translating for NGOs specialized in advocacy and digital rights, namely the Electronic Frontier Foundation and First Draft News.

DATACTIVE is glad about the diversity that this new addition brings to its ensemble of academics and activists. The team is looking forward to leveraging the various skills and attributes Lara brings along, whether from her professional background or from her various involvements in the activism sphere.

Lara is a proud mother of two girls under 10. She enjoys discussing politics and debating art over a beer. Her newfound love is philosophy, and she dreads bikes.

[BigDataSur] India’s Aadhaar: The datafication of anti-poverty programmes and its implications

By Silvia Masiero, Loughborough University

The notion of datafication implies rendering existing objects, actions and processes into data. Widely studied in the field of business intelligence, datafication is known to restructure consumer behaviour and the functioning of markets in multiple ways. But a less-widely researched aspect pertains to the datafication of public welfare and social protection programmes, on which the livelihoods of many poor and vulnerable people worldwide are based. The field of information and communication technology for development (ICT4D), which for more than thirty years has focused on the roles of informatics in development processes, is coming to realize the growing importance of datafication in the enactment of social policies.

Datafication acquires a particular meaning when referring to anti-poverty programmes, which are social protection schemes designed specifically for the poor. In such schemes, what is converted into machine-readable data is in the first place the population of entitled users. This leads to restructuring two core functions of anti-poverty schemes: the first is the recognition of beneficiaries, automating the process that distinguishes entitled individuals and households from non-entitled ones. The second is the correct assignation of entitlements, based on the availability of machine-readable data for their determination. While both functions were previously paper-based or only partially digitized, datafication affords the power to automate them, with a view to infusing greater effectiveness and accountability into programme design.

Against this backdrop, my research focuses on two concomitant aspects: the effects of datafication on the architecture of anti-poverty programmes, and its consequences for the entitlements that beneficiaries receive through them. My PhD thesis focused on the digitalization of the Public Distribution System (PDS), which is India’s largest food security programme and centres on distributing primary necessity items (mainly rice, wheat, sugar and kerosene) at subsidized prices to the nation’s poor. The back-end digitalization of the scheme, started at the state level in the early 2000s, is now culminating in the datafication of the programme through the Unique Identity Project (Aadhaar), an identity scheme that constitutes the biggest biometric identification database in the world. Built with the declared purpose of facilitating the socioeconomic inclusion of India’s poor, Aadhaar assigns all enrolees a 12-digit number linked to their biometric details, to make sure, among other things, that each enrolee obtains their social benefits through a simple operation of biometric recognition.

Datafication contributes to deep transformation of anti-poverty programmes, with mixed effects on programme architecture and entitlements of beneficiaries

My data collection on the datafied PDS has taken place in the two southern Indian states of Kerala and Karnataka, and also comprises a review of the state-level cases of Aadhaar-based PDS currently operating in India. Through the years, my research has developed three lines of reflection, which I synoptically illustrate below.

First, datafication is constructed by the Indian central government as a tool for simplifying access and improving users’ capability to obtain their entitlements under existing schemes. The Aadhaar-based PDS is indeed constructed to reduce the inclusion error, meaning access to the programme by non-entitled people, and the exclusion error (Swaminathan 2002), meaning the negation of subsidy to the entitled. In doing so, the biometric system traces sales from PDS ration shops to reduce diversion (the ‘rice mafia’), an illegal network through which foodgrains aimed at the poor are diverted to the market for higher margins. What emerges from my research is a strong governmental narrative portraying Aadhaar as a problem-solver for the PDS: technology is depicted by government officials as a simplifier of the existing system, facilitating a better and more accountable functioning of a leakage-prone anti-poverty scheme that has been in operation for a long time.

Second, recipients’ view of the datafied PDS is mixed: it reveals some positive changes, but also a set of issues that were not in place before the advent of the biometric system. One, by making access conditional on enrolment in the Aadhaar database, the new system subordinates the universal right to food to enrolment in a biometric database, leading the poor to ‘trade’ their data for the food rations needed for their livelihoods. Two, while the programme is designed to combat the inclusion error, new forms of exclusion are caused by system malfunctions leading to failures in user recognition, which in turn result in families having their food rations denied, even for several months in a row. Three, the system is not built to act on back-end diversion (PDS commodities being diverted before they reach the ration shops where users buy them), which is where, according to existing studies of the PDS supply chain, the greatest part of goods is diverted (Khera 2011, Drèze & Khera 2015).

Third, there is a specific restructuring intent behind the creation of an Aadhaar-based PDS. From documents and narratives released by the central government, a clear teleology emerges: Aadhaar is not conceived to simply streamline the PDS, but to substitute it, in the longer run, with a system of cash transfers to the bank accounts of beneficiaries. As government officials declare, this serves the purpose of reducing the distortion caused by subsidies and creating a more effective system in which existing leakages cannot take place. A large majority of beneficiaries, however, is suspicious of cash transfers (Drèze et al. 2017): a prominent argument is that these are more complex to collect and handle compared with the secure materiality of PDS food rations. What is certain, beyond points of view on the appropriateness of cash transfers, is that the teleology behind the Aadhaar-based PDS is not that of streamlining the system, but that of creating a new one in which the logic of buying goods on the market replaces the existing logic of subsidies.

Aadhaar contributes to enabling a shift from in-kind subsidies to cash transfers, with uncertain consequences for poor people’s entitlements

Rooted in field research on datafied anti-poverty systems, these reflections offer two main contributions to extant theorizations of datafication in the Global South. First, they highlight the role of state governments in using datafied systems to construct a positive image of themselves, portraying datafication as a problem-solving tool adopted to tackle the most pressing issues affecting existing programmes. The power of datafication, embodied by large biometric infrastructures such as Aadhaar, is used to project an image of accountability and effectiveness, relied upon in electoral times and in the construction of consensus in public opinion. At the same time, citizens’ perspectives reveal forms of data injustice (Heeks & Renken 2018) which did not exist before datafication, such as the denial of subsidies following failures of user recognition by point-of-sale machines, or the subordination of the right to food to enrolment in a national biometric database.

Second, datafication is often portrayed by governments and public entities as a means to streamline anti-poverty programmes, improving the mechanisms at the basis of their functioning. By contrast, my research suggests a more pervasive role of datafication, capable of transforming the very basis on which existing social protection systems are grounded (Masiero 2015). The Aadhaar case is revealing in this respect: as it is incorporated into extant subsidy systems, Aadhaar does not aim to simply improve their functioning, but to substitute the logic of in-kind subsidies with a market-based architecture of cash transfers. By moving the drivers of governance of anti-poverty systems from the state to the market, datafication is hence implicated in a deep reformative effort, which may have massive consequences for programme architecture and the entitlements of the poor.

Entrenched in the Indian system of social protection, Aadhaar is today the greatest datafier of anti-poverty programmes in the world. Here we have outlined its primary effects, and especially its ability to reshape existing anti-poverty policies at their very basis. Ongoing research across ICT4D, data ethics and development studies pertains to the ways datafication will affect anti-poverty programme entitlements, for the many people whose livelihoods are predicated on them.

 

Silvia Masiero is a lecturer in International Development at the School of Business and Economics, Loughborough University. Her research concerns the role of information and communication technologies (ICTs) in socio-economic development, with a focus on the participation of ICT artefacts in the politics of anti-poverty programmes and emergency management.

 

References:

Drèze, J., & Khera, R. (2015). Understanding leakages in the Public Distribution System. Economic and Political Weekly, 50(7), 39-42.

Drèze, J., Khalid, N., Khera, R., & Somanchi, A. (2017). Aadhaar and food security in Jharkhand. Economic and Political Weekly, 52(50), 50-60.

Heeks, R., & Renken, J. (2018). Data justice for development: What would it mean? Information Development, 34(1), 90-102.

Khera, R. (2011). India’s Public Distribution System: utilisation and impact. Journal of Development Studies, 47(7), 1038-1060.

Masiero, S. (2015). Redesigning the Indian food security system through e-governance: The case of Kerala. World Development, 67, 126-137.

Swaminathan, M. (2002). Excluding the needy: The public provisioning of food in India. Social Scientist, 30(3-4), 34-58.

[BigDataSur] My experience in training women on digital safety

by Cecilia Maundu

I remember it was September 2015 when I was invited for a two-day workshop on digital safety by the Association of Media Women in Kenya. At first I was very curious because I had not heard much about digital security. The two-day workshop was an eye opener. After the workshop I found myself hungry for more information on this issue.

Naturally, I went online to find more information. I must say I was shocked at the statistics I came across on the number of women who have been abused online, and continue to suffer. Women were being subjected to sexist attacks. They were attacked because of their identity as women, not because of their opinions. I asked myself: what can I do? I am well aware that I am just a drop in the ocean, but any little change I can bring will help in some way. That was a light bulb moment for me.

It was in that moment that I knew I wanted to be a digital safety trainer. I wanted to learn how to train people, especially women, on how to stay safe online. The internet is the vehicle of the future. This future is now, and we cannot afford for women to be left behind.

Online violence eventually pushes victims to stay offline. It is censorship hidden behind the veil of freedom of expression.

After this realization, I embarked on the quest to become a digital safety trainer. As fate would have it, my mentor Grace Githaiga came across the SafeSister fellowship opportunity and sent it to me. I applied and got into the program. The training was taking place in Ngaruga lodge, Uganda. The venue of the training was beyond serene. The calm lake waters next to the hotel signified how we want the internet to be for women: a calm place and a safe space where women can express themselves freely without fear of being victimized, or their voices being invalidated.

On arrival we were met by one of the facilitators, Helen, who gave us a warm welcome. The training was conducted by five facilitators, all of whom were women.

The training was student friendly. The topics were broken down in a way that allowed everyone to understand what was being discussed. Each facilitator had her own way and style of delivering the different topics, from using charts to PowerPoint presentations. I must say they did an exceptional job. I got to learn more about online gender violence and how deeply rooted it is in our society, and hence the importance of digital security trainings.

Being a trainer is not only about having digital safety skills; it also requires you to be a well-rounded person. While giving a training you are bound to meet different types of people with different personalities, and it is your duty to make them feel comfortable and make sure the environment around them is safe. It is in this safe space that they will be able to talk and express their fears and desires, and, most importantly, they will be willing to learn. As a digital security trainer, you should first find out more about your participants and how much they know about digital security. This will enable you to package your material according to their learning needs.

Being a trainer requires you to read a lot on digital security, because this keeps you updated and allows you, therefore, to relay accurate information to your trainees. As a trainer, it is also necessary to understand the concept of hands-on training, because it gives the participants the opportunity to put into practice what they have learnt. For example, when you are teaching about privacy settings on Facebook, you don’t just talk about it; you should rather ask the participants to open their Facebook accounts – that is, if they have any – and go through the instructions step by step with them until they are able to achieve the task. As a trainer there is also the possibility of meeting a participant who does not give the rest of the group the opportunity to express their views, as they want to be the one talking throughout. However, the trainer needs to take charge and make sure that each participant is given an equal opportunity to talk.

Before the training we had each been given a topic to present on, and mine had to do with encryption; VeraCrypt to be more specific. At first it sounded like Greek to me, but then I resorted to my friend Google to get more details (which raises the question: how was life before the internet?). By the time I was leaving Kenya for Uganda I had mastered VeraCrypt. We kept discussing our topics with the rest of the group, to the point where they started calling me Vera. To my surprise, my presentation went very well. The week went by so fast. By the time we realized it, it was over and it was time to go back home and start implementing what we had learnt.

We continued receiving very informative material online from the trainers. In September 2017 they opened up a pool of funding to which we could apply to fund a training in our home countries. I got the funding, and chose to hold the training at Multimedia University, where I lecture part time. The reason behind my choice was that this is home to up-and-coming young media women, and we needed to train them on how to stay safe online, especially since media women in Kenya form the majority of victims of gender-based violence. They needed to know what awaits them out there and the mechanisms they need to protect themselves from attacks. The training was a success, and the young ladies walked away happy and strong.

The second, and last, part of SafeSister (I am barely holding back my tears here, because the end did come) took place in Uganda at the end of March 2018. It was such a nice reunion, meeting the other participants and our trainers after a year. This time the training was more relaxed. We were each given a chance to talk about the trainings we conducted, the challenges we encountered, the lessons learnt and what we would have done differently. For me the main challenge was time management: the trainers had prepared quite a lot of informative material, hence the sessions ran over, on top of a 30-minute delayed start.

This was my first training, and one takeaway for me as a digital safety trainer was that not all participants will be enthusiastic about the training, but one shouldn’t be discouraged or feel like they are not doing enough. The trainer just needs to make sure that no participant is left out, and should not just throw questions at the participants or merely ask for their opinion on different issues regarding digital safety. As time progresses, participants gradually get enthusiastic and start feeling more at ease.

One thing I have learnt since I became a digital security trainer is that people are quite ignorant about digital security matters. People go to cybercafés and forget to sign out of their email accounts, or use the same password for more than one account, and then they ask you: “why would someone want to hack into my account or abuse me when I am not famous?” However, such questions should not discourage you; on the contrary, they should motivate you to give more trainings, because people don’t know how vulnerable they are when they are online while their accounts and data are not protected. Also, as a trainer, when you can, and when the need arises, give as many free trainings as you can, since not everyone can afford to pay you. It is through these trainings that you continue to sharpen your skills and become an excellent trainer.

After the training we were each awarded a certificate. It felt so good to know that I am now a certified digital security trainer; nothing can beat that feeling. As they say, all good things must come to an end. Long live Internews, long live DefendDefenders. Asante sana (thank you very much). I will forever be grateful.

 

Cecilia Mwende Maundu is a broadcast journalist in Kenya, and a digital security trainer and consultant with a focus on teaching women how to stay safe online. She is also a user experience (UX) trainer, collecting user feedback and sharing it with developers.

XRDS Summer 2018 issue is out – with contributions from DATACTIVE

The latest issue of XRDS – The ACM Magazine for Students is out. The issue has been co-edited by our research associate Vasilis Ververis and features contributions by Stefania Milan, Niels ten Oever, Davide Beraldo, and Vasilis himself.

  1. Stefania’s piece ‘Autonomous infrastructure for a suckless internet’ explores the role of politically motivated techies in rethinking a human rights respecting internet.
  2. Niels and Davide, in their ‘Routes to rights’, discuss the problems of ossification and commercialization of internet architecture.
  3. Vasilis, together with Gunnar Wolf (also an editor of the issue), has written on ‘Pseudonymity and anonymity as tools for regaining privacy’.

XRDS (Crossroads) is the quarterly magazine of the Association for Computing Machinery. You can reach the full issue here.

DATACTIVE at EASST 2018

Stefania and Guillén will be present this week at EASST 2018: Meetings – Making Science, Technology and Society Together, in Lancaster, UK.

If you are around, drop by our panel “After data activism: reactions to civil society’s engagement with data” on Saturday morning (9:30) at the Elizabeth Livingston Lecture Theatre. We will be focusing on how data governance, data science and social technologies are co-producing asymmetries of power, through five papers dealing with data flows, data sharing, the scoring society, civil society and data practices, and resistance through data.

Apart from that, Stefania will also be presenting, together with Anita Chan, a paper on “Data cultures from the Global South: decentering data universalism”, and will participate in a panel organized by the European Research Council.

Come say hi!


BigBang v0.2.0 ‘Tulip Revolution’ released

DATACTIVE has been collaborating with researchers from New York University and the University of California at Berkeley to release version 0.2.0 of the quantitative mailinglist analysis software BigBang. Mailinglists are among the most widely used communication tools in Internet governance institutions and among software developers. Mailinglists therefore lend themselves really well to analysis of how communities develop, as well as of discussion topics and how they propagate through a community. BigBang, a Python-based tool, is there to facilitate this. You can start analyzing mailinglists with BigBang by following the installation instructions.
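To give a flavour of what quantitative mailinglist analysis involves, here is a minimal sketch in plain Python. It deliberately does not use BigBang’s own API: it relies only on the standard library, and the archive path is a placeholder, so treat it as an illustration of the kind of question BigBang helps answer rather than as BigBang usage.

# Illustration only: NOT BigBang's API. A toy count of messages per sender
# in a locally downloaded mbox archive, using only the Python standard library.
import mailbox
from collections import Counter

ARCHIVE_PATH = "list-archive.mbox"  # placeholder path to a downloaded archive

def messages_per_sender(path):
    """Return a Counter mapping each 'From' address to its number of posts."""
    counts = Counter()
    for message in mailbox.mbox(path):
        counts[message.get("From", "unknown")] += 1
    return counts

if __name__ == "__main__":
    # Print the ten most active participants on the list
    for sender, n in messages_per_sender(ARCHIVE_PATH).most_common(10):
        print(f"{n:5d}  {sender}")

BigBang itself goes much further than such raw counts (for example with the gender participation estimation and the IETF/ICANN ingest mentioned below), but the underlying move is the same: turning an archive of messages into a dataset that can be queried.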

This release, BigBang v0.2.0 Tulip Revolution, marks a new milestone in BigBang development. A few new features:
– Gender participation estimation
– Improved support for IETF and ICANN mailing list ingest
– Extensive gardening and upgrade of the example notebooks
– Upgraded all notebooks to Jupyter 4
– Improved installation process based on user testing

En route to this milestone, the BigBang community made a number of changes to its procedures. These include:

– The adoption of a Governance document for guiding decision-making.
– The adoption of a Code of Conduct establishing norms of respectful behavior within the community.
– The creation of an ombudsteam for handling personal disputes.

For this milestone we have also adopted, by community decision, the GNU Affero General Public License v3.0.

If you have any questions or comments, feel free to join the mailinglist, join us on gitter chat, or file an issue on GitHub.

If you are interested in using BigBang but don’t know where to start, we are happy to help you on your way via videochat or organize a webinar for you and your community. Feel free to get in touch!