
how to contribute to “COVID-19 from the margins”

Thank you for your interest in contributing to the blog COVID-19 from the margins! Here you will find information on how to prepare your submission.
Please read it carefully.


What we publish

  • We invite contributions engaging with the various forms of impact of COVID-19 in the South and across the different Souths, including the infrastructural and redistributional consequences in relation to the datafied society (surveillance, statistics, efforts to build counter-narratives). In particular, we seek to publish blog posts that explore those consequences and the ways in which people and communities across the Souths respond to them.
  • What is our definition of the South(s)? The South is a composite and plural entity which, while it includes a geographical connotation (the “global South”), goes beyond it. In this understanding, the South(s) is a place of (and a proxy for) alterity, resistance, subversion, and creativity (Milan and Treré 2019, p. 235).
  • Authors of accepted posts will receive a standard compensation (€50). The eligible categories are the following: students, unemployed people, or precarious workers, in particular from the so-called Global South. Requests will be evaluated individually.
    Please note that we are currently fundraising and, for now, have secured compensation for the first twenty posts only.

How to prepare your manuscript

  • Length: between 600 and 1,200 words (maximum 1,500), blog style, accessible to a wide audience. Longer posts might be published as a series of linked “episodes”.
  • Please follow the blog stylesheet to prepare your manuscript.
  • Once it is ready, send it to covid19blog@data-activism.net.

how to contribute to “COVID-19 from the margins”

Thanks for your interest in contributing to the blog COVID-19 from the margins! Here you can find information about how to prepare your submission. Please read it carefully.

What we publish

  • We invite contributions engaging with the various forms of impact of COVID-19 on the Souths, including its economic, infrastructural, and redistributional consequences in relation to the datafied society (e.g., surveillance, statistics, grassroots efforts to counter narratives). In particular, we seek to publish blog posts that explore such consequences and the ways people and communities across the Souths respond to them.

To be considered for the blog, your post should
1) explicitly reflect on one or more aspects of the datafied society at the time of the pandemic (e.g., surveillance, data production, data-based narratives, technological solutions or obstacles, data justice, data activism…), and
2) explicitly take a human-centred perspective in exploring the consequences of the pandemic (e.g., how it is affecting people and communities on the ground, its impact on data privacy, redistribution of resources, access to key services, inclusion in/exclusion from service provision…).

  • What is our definition of the South(s)? The South(s) is a composite and plural entity, including but also going beyond the geographical connotation (i.e., “global South”). In this understanding, the South(s) is a place of (and a proxy for) alterity, resistance, subversion, and creativity (Milan and Treré 2019, p. 235).
  • A standard compensation (€50) will be offered to authors of accepted posts in the following categories: students, unemployed or precarious workers, in particular in the so-called Global South. Requests will be evaluated on a case-by-case basis. Please note that we are currently fundraising, and to date we have secured compensation for the first twenty posts only.

How to prepare your manuscript

  • Length: between 600 and 1,200 words (max 1,500), blog style, accessible to a wide audience. Longer posts might be published as a series of linked “episodes”.
  • Please follow the blog stylesheet to prepare your manuscript. Don’t forget to include a title and a teaser, and a picture to accompany your post.
  • When ready, send to covid19blog@data-activism.net.

Magma guide release announcement

January 29, 2020

By Vasilis Ververis, DATACTIVE

We are very pleased to announce that the magma guide has been released.

What is the magma guide?

An open-licensed, collaborative repository that provides the first publicly available research framework for people working to measure information controls and online censorship activities. In it, users can find the resources they need to perform their research more effectively and efficiently.

It is available at the following website: https://magma.lavafeld.org

The content of the guide represents industry best practices, developed in consultation with networking researchers, activists, and technologists. It is evergreen, too: constantly updated with new content, resources, and tutorials. The host website is regularly updated and synced to a version control repository (Git) that members of the network measurement community can use to review, translate, and revise the content of the guide.

If you or someone you know can help improve or extend this content, please get in touch with us or read about how you can contribute to the guide directly.

All content of the magma guide (unless otherwise mentioned) is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).

Many thanks to everyone who helped make the magma guide a reality.

You may use any of the communication channels listed on the contact page to get in touch with us.

Vasilis Ververis is a research associate with DATACTIVE and a practitioner working to ~ undo / rebuild ~ the current centralization model of the internet. Their research deals with internet censorship and the investigation of collateral damage caused by information controls and surveillance. Some recent affiliations: Humboldt-Universität zu Berlin, Germany; Universidade Estadual do Piaui, Brazil; University Institute of Lisbon, Portugal.

Davide in Lugano with paper on algorithms as online discourse (January 30)

Davide Beraldo will be in Lugano, Switzerland, to present a paper on ‘Algorithms as Online Discourse. Exploring topic modeling and network analysis to study algorithmic imaginaries’, co-authored with Massimo Airoldi (Lifestyle Research Center, EMLYON Business School). The paper is a contribution to the ‘Rethinking Digital Myths. Mediation, narratives and mythopoiesis in the digital age’ workshop hosted at the Università della Svizzera Italiana.

[blog] Why Psychologists need New Media Theory

by Salvatore Romano

I’m a graduate student at the University of Padova, Italy. I’m studying Social Psychology, and I spent four months doing an Erasmus internship with the DATACTIVE team in Amsterdam.

It’s not so common to find a psychology student in a Media Studies department, and some of my Italian colleagues asked me the reason for my choice. So I would like to offer four good reasons for a psychology student to get interested in New Media Theory and Digital Humanities. In doing so, I will cite some articles as a starting point for other colleagues who would like to study similar issues.

I participated in the “Digital Methods Summer School,” which was an excellent way to get a general overview of the different topics and methodologies in use in the department. In just two weeks, we discussed many things: from a sociological point of view on the Syrian war to an anthropological understanding of alt-right memes, by way of semantic analysis and data-scraping tools. In the following months, I had the chance to deepen the critical approach and the activist’s point of view by collaborating with the Tracking Exposed project. The main question that drove my engagement throughout the whole period was: “what reflections should we make before using the so-called ‘big data’ made available by digital media?”.

The first important point to note is that research through media should always also be research about media. These data can be used to investigate the human mind, and not only to make claims about the medium itself; however, specific knowledge about the medium remains essential. New Media Theory is interesting not only because it tells you what new media are, but because it is crucial for understanding how to use new media data to answer questions coming from various fields of study. That is why we, as psychologists, can also benefit from the discussion.

The second compelling reason is that you need specific, in-depth knowledge to deal with the technical problems related to digital media and their data. I experienced some of the difficulties you can face while researching social media data: most of the time you need to build your own research tools, because no one had your exact question before you, or at least you need to be able to adapt someone else’s tool to your needs. And this is just the beginning: to keep your (or others’) tools working, you need to update them very often, sometimes while fighting with a company that tries to obstruct independent research as much as possible. In general, the world of digital media changes much faster than traditional media; a new trendy platform can appear every year, staying up to date is a real challenge, and we cannot turn a blind eye to any of this.

Precisely for that reason, the third reflection I made concerns the reliability of the data we use for psychological research. Especially in social psychology, students are used to validating their hypotheses with questionnaires and experiments. With those methodologies, measurement error is largely controlled by the investigator, who creates the sample and ensures that the experimental conditions are respected. Big data, by contrast, give the social sciences the possibility of tracing significant collective dynamics down to single interactions, as long as you can obtain those data and analyze them properly. To make use of this opportunity, we analyze databases that were not recorded by us and that lack an experimental environment (for example, when using the Facebook API). This lack of independence can introduce distortions caused by the standardization operated by social media platforms, distortions the researcher cannot monitor. Moreover, using APIs without general knowledge of what kind of medium recorded those data is genuinely risky, as the chances of misunderstanding the authentic meaning of the communication we analyze are high.
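
To make the point about standardization concrete, consider a small, purely illustrative simulation (the numbers and the thresholding rule are invented for the example; no real platform data or API is involved). It assumes a continuous underlying attitude and shows how a platform’s coarse, standardized record of that attitude can weaken the relationship a researcher later tries to measure.

```python
# Purely illustrative simulation: how a platform's standardized interaction
# format can distort a psychological measure. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical "true" attitude of each user towards a post (continuous).
attitude = rng.normal(loc=0.0, scale=1.0, size=n)

# A second variable the researcher cares about, correlated with attitude.
outcome = 0.6 * attitude + rng.normal(scale=0.8, size=n)

# What the platform actually records: a coarse, standardized signal
# (e.g. "reacted" only when the attitude crosses some threshold).
recorded = (attitude > 1.0).astype(float)

true_corr = np.corrcoef(attitude, outcome)[0, 1]
observed_corr = np.corrcoef(recorded, outcome)[0, 1]

print(f"correlation using the underlying attitude: {true_corr:.2f}")
print(f"correlation using the platform's record:   {observed_corr:.2f}")
# The second number is systematically smaller: the standardization step,
# which the researcher cannot control or observe, has already discarded
# part of the signal before any analysis begins.
```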

Even if we do not administer a test directly to the subjects, and even if we do not draw conclusions from an experimental set-up, we still need the same scientific rigour when analysing big data produced by digital media. It is essential to build our own tools so we can create databases independently; it is necessary to know the medium in order to reduce misunderstandings; and all of this is something that we, as psychologists, can learn from a Media Studies approach.

The fourth point is about how digital media implement psychological theory to optimize their design. Those platforms use psychology to increase engagement (and profits), while psychologists only rarely use the data stored by those same platforms to improve psychological knowledge. Most of the time, omnipotent multinational corporations play with targeted advertising, escalating to psychological manipulation, while many psychologists struggle to grasp the real potential of those data.

Concrete examples of what we could do include analyzing the hidden effects of the Dark Patterns adopted by Facebook to glue you to the screen; using the “Research Personas” method to uncover the affective charge created by apps like Tinder; and graphically representing the personalization process involved in the YouTube algorithm.

In general, I think it is essential for us, as academic psychologists, to test all the possible effects of these new communication platforms, not relying only on the analyses the companies produce about themselves; we need instead to produce independent and public research. The fundamental discussion about how to build our collective communication systems should be driven by these kinds of investigations, and should not just follow uncritically what is “good” for the companies themselves.

Stefania in Tel Aviv for the workshop “Algorithmic Knowledge in Culture and in the Media” (October 23-25)

On October 23-25, Stefania will be in Tel Aviv to take part in the international workshop “Algorithmic Knowledge in Culture and in the Media” at the Open University of Israel. The invitation-only workshop is organized by Eran Fisher, Anat Ben-David and Norma Musih. Stefania will present a paper on the ALEX project, DATACTIVE’s spin-off, as an experiment into algorithmic knowledge.

Unpacking the Effects of Personalization Algorithms: Experimental Methodologies and Their Ethical Challenges

Stefania Milan, University of Amsterdam

With social media platforms playing an ever more prominent role in today’s public sphere, concerns have been raised by multiple parties regarding the role of personalization algorithms in shaping people’s perception of the world around them. Personalization algorithms are accused of promoting the so-called ‘filter bubble’ (Pariser 2011) and suspected of intensifying political polarization. What’s more, these algorithms are shielded behind trade secrets, which contributes to their technical undecipherability (Pasquale 2015). Against this backdrop, the ALgorithms EXposed (ALEX) project has set out to unpack the effects of personalization algorithms, experimenting with methodologies, software development, and collaborations with hackers, nongovernmental organizations, and small enterprises. In this presentation, I will reflect on four aspects of the ALEX project as an experiment into algorithmic knowledge, namely: i) software development, illustrating the workings of the browser extensions facebook.tracking.exposed and youtube.tracking.exposed; ii) experimental collaborations within and beyond academia; iii) methodological challenges, including the use of bots; and iv) ethical challenges, in particular the development of data reuse protocols allowing users to volunteer their data for scientific research while safeguarding individual data sovereignty.

YouTube Algorithm Exposed: DMI Summer School project week 1

DATACTIVE participated in the first week of the Digital Methods Initiative Summer School 2019 with a data sprint related to its side project ALEX. DATACTIVE insiders Davide and Jeroen, together with research associate and ALEX software developer Claudio Agosti, pitched a project aimed at exploring the logic of YouTube’s recommendation algorithm using the ALEX-related browser extension youtube.tracking.exposed. ytTREX lets you produce copies of the sets of recommended videos, with the main purpose of investigating the logic of personalization and tracking behind the algorithm. During the week, together with a number of highly motivated students and researchers, we engaged in collective reflection, experiments, and analysis, fueled by Brexit talks, Gangnam Style beats, and the secret life of octopuses. Our main findings (previewed below, and detailed later in a wiki report) concern which factors (language settings, browsing behavior, previous views, domain of videos, etc.) help trigger the highest level of personalization in the recommended results.
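
To give a flavour of the kind of comparison such a data sprint revolves around, here is a minimal, hypothetical sketch. It assumes a CSV export with `profile`, `watched_video`, and `recommended_video` columns (an assumption for the example, not the extension’s actual export format) and measures how much two profiles’ recommendation sets for the same video overlap; consistently low overlap would suggest stronger personalization.

```python
# Illustrative sketch only: the CSV layout and file/profile names below are
# assumptions for the example, not the real ytTREX export format.
import csv
from collections import defaultdict

def load_recommendations(path):
    """Map (profile, watched_video) -> set of recommended video ids."""
    recs = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            recs[(row["profile"], row["watched_video"])].add(row["recommended_video"])
    return recs

def jaccard(a, b):
    """Overlap between two recommendation sets: 1 = identical, 0 = disjoint."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

if __name__ == "__main__":
    recs = load_recommendations("recommendations.csv")  # hypothetical file
    watched = {video for (_, video) in recs}
    for video in sorted(watched):
        a = recs.get(("profile_a", video), set())  # hypothetical profile labels
        b = recs.get(("profile_b", video), set())
        print(f"{video}: overlap = {jaccard(a, b):.2f}")
    # Low overlap across many videos means the recommendations depend heavily
    # on who is watching (language, history, tracking) rather than on the
    # watched video alone, i.e. a higher degree of personalization.
```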

 

Algorithms Exposed: investigating YouTube – slides

Stefania at Science Foo

On July 12-14 Stefania will be at X in Mountain View, in Silicon Valley, as one of the invitees to Sci Foo. Science Foo is a series of interdisciplinary conferences organized by O’Reilly Media, Digital Science, Nature Publishing Group and Google. It is an “unconference focused on emerging technology, and is designed to encourage collaboration between scientists who would not typically work together”. Stefania plans to propose a session on ‘decolonizing data’.