
[BigDataSur] Cuba and Its Network Ecosystem after the Revolution

By: Yery Menéndez García and Jessica Domínguez.

In Cuba, information, communication and data are “strategic resources of the state” [1] and “a matter of national security” [2]. In practice, but also in most of the country’s normative documents, state ownership over the nation’s symbolic capital is firmly established.

Added to this are levels of access to, and availability of, telematic network platforms that rank among the lowest on the planet; significant international restrictions on access to infrastructure, financing, telecommunications circuits and connectivity; and the existence of programs that use ICTs to openly attempt to destabilize the Cuban government.

In this context, and given the high cost of connectivity, citizen groups have developed practices for circulating information that are adapted to a hybrid (offline-online) environment. These initiatives are autonomous, delocalized and self-managed, and they attempt to meet everyday needs outside the mechanisms of the state. Some of the most relevant of the past ten years are:

  1. New alternative media outlets

A group of young journalists trained at Cuban universities, together with other professionals, are using a set of socio-technical resources to generate alternative information matrices.

These new platforms for public-interest information fill gaps left by the official media, the only outlets allowed to exist. Some act as umbrella projects or repositories, hosting other citizen information initiatives.

For ten years, facing a lack of network access, these initiatives have developed creative and innovative forms of management, in line with the most recent global trends, to address problems of infrastructure, capacity building and access to sources.

Even so, the main source of financing for these projects continues to be donations and grants from international organizations. This is still the main line of attack that government representatives use to discredit them.

Among the most relevant and recognized are:

  • On Cuba, a platform in English and Spanish aimed above all at the Cuban émigré community.
  • El Toque, a general-interest outlet focused mainly on young people and run by young people, which tells stories of citizenship. El Toque belongs to a larger group of “communication ventures” gathered under the Colectivo +Voces, which also includes a digital radio station called “El Enjambre” and a graphic-humor supplement, Xel2.
  • Periodismo de Barrio, a magazine devoted to environmental issues and social vulnerabilities.
  • El Estornudo, an outlet specializing in literary journalism.
  • Joven Cuba and La Tizza, both collaborative blogs promoting political debate.

All of these outlets use their online portals as their main form of circulation. But since the distribution of print formats is prohibited by the Cuban penal code and online access is expensive, these outlets have had to innovate in how they interact with their communities. Their main solution has been the creation of a database that is downloaded once a week. The downloaded database updates the sites’ mobile app, after which all of the content can be accessed offline.
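This weekly offline update cycle can be sketched in a few lines of Python. The sketch is purely illustrative and not any outlet’s actual implementation; every name in it (apply_weekly_bundle, the JSON bundle format, the article slug) is a hypothetical assumption.

```python
import json
import tempfile
from pathlib import Path


def apply_weekly_bundle(bundle_path, local_store):
    """Merge a downloaded weekly content bundle into the app's local store.

    The bundle is modelled as a JSON file mapping article slugs to article
    records; once merged, every article can be read with no connection at all.
    """
    with open(bundle_path, encoding="utf-8") as f:
        bundle = json.load(f)
    local_store.update(bundle)
    return len(bundle)


def read_offline(local_store, slug):
    """Return an article from the local store without touching the network."""
    return local_store.get(slug)


# Simulate one weekly cycle: the bundle is downloaded once, then read offline.
bundle = {
    "tornado-relief": {
        "title": "Citizens organize after the tornado",
        "body": "...",
    },
}
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "weekly_bundle.json"
    path.write_text(json.dumps(bundle), encoding="utf-8")

    store = {}  # the app's persistent offline store
    added = apply_weekly_bundle(path, store)
    article = read_offline(store, "tornado-relief")

print(added)             # number of articles in this week's bundle
print(article["title"])
```

The design point is simply that all network traffic happens in one weekly step; every read afterwards is local, which is what makes the model viable where connectivity is scarce and expensive.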

There is a clear difference between these outlets and the media openly opposed to the island’s government. The former focus on producing information outside the aegis of the ideological department of the Communist Party of Cuba, the structure charged with regulating all of the country’s symbolic production, while the latter subordinate the information they produce to their political activism.

  2. The weekly package (El paquete semanal)

El paquete is a product-service that capitalizes on already-developed social networks and extends them. Although the ultimate goal of this socio-technical expression is profit rather than the practice of citizenship, it is still worth understanding how these data networks interact with social networks and how they are socially produced.

Each week, around 1 terabyte of pirated content is compiled into el paquete. This content is downloaded from the internet at different nodes or hubs that everyone knows about but that remain hidden, like open secrets. Once downloaded, the content is handed to a group of people who in turn distribute it on removable drives to other citizens, and so on, for modest prices.

In this way, in a kind of snowball, Cubans gain access to an offline internet and stay up to date on everything that happens. The contents of el paquete range from films to advertising not allowed on official Cuban channels, and from music to databases from platforms of all kinds. El paquete semanal is the main distribution channel for the outlets and magazines mentioned above, and for many others (religious, humorous and political) that have no other space in which to position themselves.

The best description of el paquete is as a hybrid phenomenon of data socialization that mediates social interactions without depending on algorithms. For Cuba’s semi-connected reality, el paquete semanal is today the most popular and affordable distribution resource. And although it is not legal, its reticular character, its node-by-node and hand-to-hand distribution, and the quality of the curation and ranking of its contents make it impossible for the authorities to stop it completely.

  3. The Street Network

SNET (Street Network), or Red de la calle, was another popular experience of content distribution and community building that, unlike el paquete, was not for profit. In this network, connected by cables and Wi-Fi, “members” began grouping into nodes across Havana in order to play games online. Over time, SNET grew and refined its structure and organization, reaching other provinces of the country. Its primary purpose shifted from being the space of the Cuban gamer community to becoming a scheme for generating connected, software-mediated practices of citizenship.

Despite being an illegal fabric, SNET developed a complex hierarchical system, along with well-established operating principles and ethics, deploying a level of network infrastructure never before seen outside the margins of the state.

Having become a genuine data-activism movement, in 2019 the government tried to institutionalize it within the Jóvenes Clubs de Computación y Electrónica. This attempt to co-opt the initiative generated protests and public demonstrations that led the government, for the first time, to hold talks and reach a consensus with the representatives of SNET’s nodes. Despite the agreements between the two sides, the network is today nearly extinct.

  4. Citizen articulations on social media

In January 2019 a tornado struck Havana, devastating the already decrepit housing stock of the Cuban capital. After this natural disaster, a wave of organized citizens brought together Cubans at home and abroad to help those in need. Convened mainly through Facebook, they created collaborative directories with the contacts of those willing to help, open databases with the names and demographic data of the neediest, and mapping initiatives to locate the places where the damage was greatest.

This initiative was driven mostly by young professionals and artists. The level of mobilization it demonstrated exceeded the capacities of the state, which once again tried to institutionalize the aid. In this case, the movement kept operating in parallel with state efforts and wound down only once most of those affected had received basic support kits.

  5. Commercial platforms

There is also an extensive network of collaborative commercial repositories, such as Revolico.com, that attempt to generate a dynamic alternative to the under-supplied official market. In these repositories, information about goods and services (which are acquired in exchange for other goods and services) is created, managed, ranked, retrieved and socialized, moderated by rules that the entire community using the platform must follow.

These communities of interpretation, creation and resistance to state-controlled information coexist in a situation of a-legality. In the face of a centralizing state, these new social relations of production, whether mediated by algorithms or not, aim to fill gaps of meaning that cannot be filled in any other way. Today they represent increasingly articulated, popular and endogenous alternatives, and their survival depends entirely on that.

[1] Lineamientos de la política social del Estado [Guidelines for the State’s social policy] (PCC, 2011, updated in 2016)

[2] Decreto Ley 370 [Decree-Law 370] of the Ministerio de Información y Comunicaciones

Biography

Yery Menéndez García is a journalist and professor at the Faculty of Communication of the University of Havana. She holds an MA in Media Practice for Development and Social Change from the University of Sussex in the United Kingdom, and is Audience Manager at the independent Cuban outlet El Toque.

[blog] Catching a Glimpse of the Elusive “Feminist Drone”

Author: Erinne Paisley

Introduction

Unmanned Aerial Vehicles (UAVs, or “drones”) are increasingly being used for military, governmental, commercial and personal purposes (Feigenbaum 267; Estrada 100). This rapid increase in drone use raises new questions about how the technology reinforces certain social and political inequalities within its own structure, function, and use. Researchers in the growing academic field of feminist internet studies are dedicated to understanding which of society’s inequities are present in new technologies and which can be decreased through these media. However, a clear picture of what a “feminist drone” could look like remains relatively elusive.

To paint a picture of how this new media form can be used to decrease gendered inequalities, we can look at two previous feminist drone projects: Droncita (Dronette) in Mexico and the “Abortion Drone” in Poland. Each of these UAV projects worked in its own way to expose the inequalities that typical drone use strengthens, and to counteract those forces by using the technology to fulfill feminist agendas. Droncita addressed spatial inequalities, while the “Abortion Drone” aimed to expose and counteract legal inequalities. Together, these cases offer a glimpse of the future of feminist drones and of the expanding field of feminist internet studies that supports them.

Mexico’s Droncita (Dronette)

Discrimination against women includes the exclusion of women from physical spaces. Women are also discriminated against in additional, intersecting ways, including racially and economically. This exclusion ranges from workplaces to specific areas of cities that carry high risks of sexual assault and other forms of violence (Spain 137). Operating from the skies, drones can use their small aerial cameras to offer new opportunities for viewing and recording our political and social world. In this way, they can reimagine some of these spatially exclusionary forms of discrimination, as we can see with Droncita.

Droncita made her debut in Ecatepec, some 20 km from Mexico City and the metropolitan area’s municipality with the highest rate of deaths presumed to be murder. In 2016, feminist protestors filled the main square in an attempt to draw attention to the state’s inadequate reaction to the increasing number of female deaths in the country. Working together with white paint, the activists covered the square’s ground. The message they were creating was only viewable by one activist in particular: Droncita.

The drone was created by the Rexiste collective, a project that began in opposition to the presidential election of Peña Nieto. Above the feminist activists, the drone whirred, recording the emerging message. From Droncita’s point of view, the white paint clearly states: “Femicide State”. By recording this message from the sky’s unclaimed public space, Droncita first draws attention, by contrast, to the gendered space of Ecatepec below: the drone’s recording highlights that the feminist protestors are still not fully free to create their message safely in this space. At the same time, Droncita reclaims the space, alongside the activists below, by completing their message and illustrating its takeover of the square.

Femicide is “the killing of a woman or girl, in particular by a man and on account of her gender” (Oxford University Dictionary).

Through its actions, Droncita uses “digital ethnography”, the linking of digital space with physical space, to intervene (Estrada 104). Droncita turns aerial space into public space, making violence against women, and the physical reality in which it occurs, more visible, ultimately holding the Mexican government accountable for its role in creating a space where women feel unsafe and face omissions of justice.

Poland’s “Abortion Drone”

Gendered and intersectional discrimination is also upheld globally through law. One of the most significant and ongoing examples is the legal restriction of women’s access to safe and affordable abortions. Women’s rights to make decisions about their own bodies include decisions regarding abortion, and yet this form of healthcare is still illegal in many countries. As of 2020, abortion is fully illegal in 27 countries (even if the pregnancy is due to rape or incest). This legal boundary does not mean that women stop getting abortions, but rather that they are forced to seek expensive and unsafe medical attention. According to the World Health Organization, approximately 25 million unsafe abortions occur annually worldwide, and over 7 million women are admitted to hospitals in developing countries as a result of this lack of safe access.

This is where the “Abortion Drone” comes in. In 2015, on the German side of the border facing Słubice, Poland, this drone prepared to make its first trip. On one bank of the river, a collection of women’s rights organizations and doctors prepared to fly the “quadcopter” across. On the other side, pro-life protestors, journalists, and two women waited; the women were there to swallow the abortion-inducing pills attached to the drone.

Although the journey lasted only 60 seconds, the goal of the “Abortion Drone” was far-reaching. Within Poland, abortion is still illegal unless a woman’s life is categorized as being “in danger” or there is “evidence” of rape, incest or severe fetal abnormalities (O’Neil 2015). Because of these barriers, over 50,000 “underground abortions” are conducted each year, often using outdated and dangerous tools and costing thousands of dollars (limiting the resource to those who can afford it). Poland’s legal barriers to women’s access to healthcare are not only a threat to the safety of those within the country; they also represent the wider legal struggles of millions of women globally.

The collection of activists and doctors called Women on Waves explains: “The medicines used for a medical abortion, mifepristone and misoprostol, have been on the list of essential medicines of the World Health Organization since 2005 and are available in Germany and almost all other European countries.”

As the “Abortion Drone” takes off on its inaugural flight, there is nothing those on the Polish side can do to legally stop its journey: the UAV weighs under 5 kg and is not used for commercial purposes, so its flight requires no authorization. Because of these features, the new technology is able both to make visible the legal barriers facing women in Poland and to counteract them.

The drone lands safely on the Polish side and the women ceremoniously swallow the pills. Soon after, the activists operating the drone on the German side have their equipment confiscated, but the drone’s work has already been done. The “Abortion Drone” has illuminated the legal and sexist inequalities that exist with regard to women’s access to healthcare, and temporarily counteracted them.

Feminist Drones in the Future

Droncita and the “Abortion Drone” illustrate the potential of feminist drones to illuminate and counteract the spatial and legal inequalities that still exist for women and minorities today. The potential of feminist drones extends well beyond these two cases. As this article is published, feminist internet scholars are working to imagine other creative ways this new medium can join the global fight for equality. It is fair to say this new member of the 21st-century feminist movement is becoming less elusive; in fact, if you look up you might just catch a glimpse of it.

About the author

Erinne Paisley is currently a Research Master’s student in Media Studies at the University of Amsterdam and completed her BA at the University of Toronto in Peace, Conflict and Justice & Book and Media Studies. She is the author of three books on social media activism for youth with Orca Book Publishing.

Works Cited

Estrada, Marcela Suarez. “Feminist Politics, Drones and the Fight against the ‘Femicide State’ in Mexico.” International Journal of Gender, Science and Technology, vol. 9, no. 2, pp. 99–117.

Feigenbaum, Anna. “From Cyborg Feminism to Drone Feminism: Remembering Women’s Anti-Nuclear Activisms.” Feminist Theory, vol. 16, no. 3, Dec. 2015, pp. 265–88. DOI.org (Crossref), doi:10.1177/1464700115604132.

Feminist Internet. Feminist Internet: About. https://feministinternet.com/about/. Accessed 26 Feb. 2020.

Jones, Sam. “Paint Remover: Mexico Activists Attempt to Drone out Beleaguered President.” The Guardian, 15 Oct. 2015, https://www.theguardian.com/global-development/2015/oct/15/mexico-droncita-rexiste-collective-president-enrique-pena-nieto.

O’Neil, Lauren. “‘Abortion Drone’ Delivers Pregnancy-Terminating Pills to Women in Poland.” CBC News, 29 June 2015, https://www.cbc.ca/news/trending/abortion-drone-delivers-medication-to-women-in-poland-1.3132284.

Oxford University Dictionary. “Femicide.” Lexico, https://www.lexico.com/en/definition/femicide. Accessed 26 Feb. 2020.

Spain, Daphne. “Gendered Spaces and Women’s Status.” Sociological Theory, vol. 11, no. 2, July 1993, pp. 137–51.

Women on Waves. Abortion Drone; First Flight to Poland. https://www.womenonwaves.org/en/page/5636/abortion-drone–first-flight-to-poland. Accessed 26 Feb. 2020.

World Health Organization. Preventing Unsafe Abortion. 26 June 2019, https://www.who.int/news-room/fact-sheets/detail/preventing-unsafe-abortion.

World Population Review. Countries Where Abortion Is Illegal 2020. http://worldpopulationreview.com/countries/countries-where-abortion-is-illegal/. Accessed 26 Feb. 2020.

[blog] Show me the numbers: a case of impact communication in FLOSS

Author: Jeroen de Vos, header image by Ford Foundation

This blog post explores the potential of repurposing impact assessment tools to address funding problems in Free and Libre Open Source Software by making explicit the role such software plays in crucial public digital infrastructure. Two key concepts frame this exploration. The first is Free and Libre Open Source Software (FLOSS) itself and the central role it plays in providing a common software infrastructure used by public and private organisations as well as civil society at large. The second is impact assessment, a strategy for understanding, accounting for and communicating the results of one’s efforts beyond merely financial numbers.

‘Money talk is kind of a taboo in the F[L]OSS community’, one respondent replied in an interview I recently conducted at 36C3, the Chaos Communication Congress. The talk he had just given outlined some tentative revenue models for making software development activities more sustainable; it attracted a larger-than-expected audience with interesting follow-up questions. FLOSS development draws heavily on the internal motivation of developers or a developer community, with recurring questions of sustainability when it relies on volunteered time that could be spent differently. The complexity of this situation cannot be overstated. The 2016 Ford Foundation report Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure (Eghbal) contextualizes some of the common problems in open-source software development: think, for instance, of the lack of appreciation for invisible labour, the emotional burden of maintaining a popular project once started, or the constant struggle for motivation while being structurally un- or underfunded.

The report draws on the metaphor of FLOSS as infrastructure: readily available to everyone, but also in need of maintenance. The metaphor has its limitations, but it illustrates the point well. Just as physical infrastructure supports the flows of ideas, goods and people, FLOSS operates at every level of digital infrastructure, whether we are talking about the NTP protocol synchronizing clocks across the internet, GnuPG (an encryption suite allowing secure communication and data sharing) or MySQL (a database system that quickly became a go-to standard for information storage and retrieval). Another commonality: as long as the infrastructure functions, its underlying support systems remain invisible. Until the point of failure, it goes unseen to what extent goods, services and communication, both public and private, rely on these software packages. Only at failure does this dependence become painfully explicit.

The best-known recent example of such an escalation is the so-called Heartbleed bug. The FLOSS OpenSSL package implements the most widely used protocol for encrypting web traffic. Due to a bug that crept into the code sometime in 2011, attackers could intercept information from connections that should have been encrypted, which rendered large parts of online infrastructure unsafe, including services like Google, Amazon and many others. The issue drew attention to the OpenSSL developers’ under-capacity: only one person worked on the project full time, for a salary around a third of that of colleagues at commercial counterparts. This is where impact assessment tools might come into play: rather than relying on controversies to make visible how widely particular pieces of software are embedded and depended upon, why not use impact assessment as a way to understand public relevance?

Conducting impact assessments can help communicate the necessity of maintenance by making visible the embeddedness of FLOSS packages, whether at the level of language, operating system or protocol. To briefly contextualize: impact assessment grew out of changing management needs and has been adopted by organisations producing ‘soft output’, whether in policymaking or social entrepreneurship. It is an interventionist tool for defining qualitative outputs with subsequent quantitative proxies, which help relate implementation results to the desired output as described in a theory of change. It helps both to evaluate the social, technological, economic, environmental and political value created and, subsequently, to make visible the extent to which obsolescence would disrupt existing public digital infrastructure.
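To make the pairing of qualitative outputs with quantitative proxies concrete, here is a hedged sketch in Python. The indicators, targets and figures are invented for illustration and are not drawn from any real FLOSS project or any standard impact-assessment framework.

```python
from dataclasses import dataclass


@dataclass
class Indicator:
    """A qualitative outcome paired with a quantitative proxy."""
    outcome: str   # desired qualitative output from the theory of change
    proxy: str     # measurable stand-in for that outcome
    target: float  # value the project aims for
    actual: float  # value observed in this reporting period

    def achievement(self):
        """Share of the target reached, capped at 100%."""
        return min(self.actual / self.target, 1.0)


# Invented example: proxies a small FLOSS library might report publicly.
indicators = [
    Indicator("Widely embedded in public digital infrastructure",
              "downstream packages depending on the library", 500, 430),
    Indicator("Sustainable maintenance",
              "paid maintainer hours per month", 160, 40),
    Indicator("Responsive to security issues",
              "share of CVEs patched within 30 days", 1.0, 0.9),
]

for ind in indicators:
    print(f"{ind.proxy}: {ind.achievement():.0%} of target")

# A naive aggregate: the unweighted mean of the achievement ratios.
overall = sum(ind.achievement() for ind in indicators) / len(indicators)
print(f"overall impact score: {overall:.0%}")
```

Even a toy report like this makes the argument of the post visible in numbers: a package can score high on embeddedness while maintenance capacity lags far behind, which is exactly the gap an assessment would communicate to potential funders.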

Without going into too much detail, it should be mentioned that impact assessment has already been introduced as part of reporting deliverables to funders where relevant. Part of this exercise, however, is to instrumentalize impact assessment not only for the (private) reporting of projects that are already funded, but for the (public) communication of FLOSS impact, especially for projects without the necessary revenue streams in place. Needless to say, this output is only one step in the process of making crucial FLOSS more sustainable, but an important one: assessment output might help tap into public or private sponsorship, establish new collaborations with governments, educators and businesses alike, and open up other new and exciting funding models.

This piece is meant as a conversation starter. Do you already know of existing strategies that help communicate FLOSS output? Are you involved in creating alternative business models for for-good public data infrastructure? Ideas and comments are welcome. Email: jeroen@data-activism.net

As a short disclaimer: I have been working with social enterprises developing market research and impact-first business models, and I have been mulling over the crossover between social entrepreneurship and (FLOSS) activism, given their common struggle for sustainability, their reliance on informal networks or communities of action, and their attempts to make social change either from within or from the outside. This blog post is an attempt to think social entrepreneurship and data activism together through a use case: impact assessment for FLOSS.

References:

Eghbal, N. (2016). Roads and bridges: The unseen labor behind our digital infrastructure. Ford Foundation.

[BigDataSur] The Global South Could Nationalise Its Data

By Ulises Alí Mejías

(An English version of this article appeared in Al Jazeera in December 2019)

Introduction

Big tech companies are extracting data from their users around the world without paying them for it. It is time to change this situation.

Abstract

Big tech corporations are extracting data from users across the world without paying for it. This process can be called “data colonialism”: a new resource-grab whereby human life itself has become a direct input into economic production. Instead of solutions that seek to solve the problem by paying individuals for their data, it makes much more sense for countries to take advantage of their scale and take the bold step to declare data a national resource, nationalise it, and demand that companies like Facebook and Google pay for using this resource so its exploitation primarily benefits the citizens of that country.

Data nationalisation

The recent coup d’état in Bolivia reminds us that countries that are poor but rich in natural resources continue to be plagued by the legacy of colonialism. Any initiative that threatens to obstruct foreign companies’ ability to extract resources cheaply risks being swiftly eliminated.

Today, apart from the minerals and oil that abound in some corners of the continent, companies are pursuing another, perhaps more valuable, kind of resource: personal data. Like natural resources, personal data has become the target of extractive exercises carried out by the technology sector.

As the sociologist Nick Couldry and I have argued in our book The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism (Stanford University Press), a new type of colonialism is emerging in today’s world: data colonialism. With this term we want to suggest that we are observing a new wave of resource appropriation in which human life itself, expressed in the data extracted from users, becomes a direct input into economic production.

We recognize that this concept can be controversial given the extreme physical violence and the still-present structures of historical colonial racism. We are not saying that data colonialism is identical to historical colonialism. Rather, the essential function of colonialism is exactly the same: that function was, and remains, the extraction, exploitation and appropriation of our resources.

Like classic colonialism, data colonialism violently transforms social relations into elements of economic production. Elements such as land, water and other natural resources were valued by first peoples in the precolonial era, but not in the way that colonisers, and later capitalists, came to value them: as private property. In the same way, we now live in a situation in which things that used to lie outside the economic sphere, such as private interactions with our friends and family or our medical records, have been privatised and turned into part of the economic cycle of data extraction, a cycle that clearly benefits, above all, a handful of large companies.

But what can the countries of this “Global South” do to avoid the exploitation of data colonialism?

Solutions for the Global South

One clear option for these countries would be to adopt proposals like those of the writer Jaron Lanier and the US presidential candidate Andrew Yang, who have suggested that each of us should be remunerated for the data we produce through some compensation mechanism. But these neoliberal proposals, which seek to solve the problem at the individual level, can at the same time dilute the value of the aggregated resource. If we approach the problem this way, payments to users will be difficult to calculate, and probably very small.

Instead, it makes much more sense for the countries of the Global South to take advantage of their size and their position on the international stage and take the bold step of declaring the data generated by their citizens a national resource, demanding that companies like Facebook or Google pay to use it. That way, the main beneficiaries of the use of personal data would be precisely the citizens who produce it.

Let us do some maths, using Mexico as an example. Facebook has 54.6 million users in the country, and on average each user worldwide generates $25 a year in revenue for Facebook, which means the company pockets around $1.4 billion a year thanks to Mexicans. Suppose, then, that Mexico nationalised its data and demanded to keep a substantial share of that sum. And suppose, while we are at it, that similar arrangements were applied at the same time to companies like Google, Amazon, TikTok, and so on.
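The back-of-the-envelope estimate above is easy to check. The user count and the average revenue per user come from the article; the retained share is a hypothetical parameter added purely for illustration.

```python
# Figures from the article: Facebook's Mexican user base and its
# average global revenue per user per year.
users_mx = 54.6e6   # Facebook users in Mexico
arpu_usd = 25       # average revenue per user, per year (USD)

# Estimated annual revenue Facebook derives from Mexican users.
annual_revenue = users_mx * arpu_usd
print(f"${annual_revenue / 1e9:.2f} billion per year")

# A hypothetical nationalisation scheme keeping, say, half of that sum.
share_kept = 0.5
recovered = annual_revenue * share_kept
print(f"${recovered / 1e9:.2f} billion recovered")
```

The product comes out at roughly $1.37 billion per year, which matches the article’s rounded figure of $1.4 billion; any retained share is, of course, a policy choice, not a prediction.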

With billions of dollars recovered through data nationalisation, the Mexican government could invest in areas such as health, education, or the migration crisis the country is currently going through.

One thing, however, is certain: any attempt by the countries of the Global South to nationalise data would meet intense opposition. Mexico nationalised its oil in 1938, through an action by President Lázaro Cárdenas, today considered a national hero, that infuriated foreign companies. The result was an immediate boycott by the United States, the United Kingdom, the Netherlands and other countries. Mexico only escaped this situation thanks to the eventual outbreak of the Second World War.

There is also the example of Chile. In the 1970s, Salvador Allende threatened to nationalise the telephone sector (at that moment controlled by the US company International Telephone &amp; Telegraph), as well as other industries. Before he could do so, the CIA organised the 1973 coup that ended with Allende’s death and a dictatorship that lasted until 1990.

And Evo Morales, who experimented with soft forms of nationalisation that benefited Bolivia’s poorest sectors while keeping foreign investors moderately satisfied, has now been forced out of his country. It did not help his cause that Morales, in a controversial move, amended the constitution so that he could run for the presidency again after serving the two terms already permitted by Bolivian law.

In any case, the right wing in Bolivia and in the United States is now celebrating what some see as an interesting development in the struggle for control over minerals such as lithium and indium, which are essential to the production of electronic devices.

Even if the countries that decided to nationalize their data survived the expected retaliation, data nationalization would not address the root of the problem: the normalization and legitimation of the data extraction already under way.

The future of data nationalization

Data nationalization will not necessarily stop the colonization the region is experiencing. It is therefore a measure that should be conceived and understood as a limited response to a larger problem. This is why data nationalization must have as its ultimate goal decoupling the economies of the Global South from this new kind of colonialism.

The recovered wealth could also be used to develop public infrastructures offering less invasive and less exploitative versions of the services provided by the big technology companies of China and the United States. Some of these alternatives may be hard to imagine today, but models already exist that the Global South could adopt to develop services that respect individual privacy and do not abuse the human desire to socialize.

To prevent corruption and mismanagement, civil society must be directly involved in decisions about the future of this wealth, including the power to block abusive applications and uses of citizen-generated data by foreign companies. It is, after all, their data, and it is the public that should have a seat at the table when deciding how those resources may be used.

The proposal to nationalize data, however unattainable and impractical it may seem, at least forces us to question the data extraction that continues unchallenged, sometimes under the pretext that it is a kind of progress that benefits us all.

 

[BigDataSur] Artificial intelligence and digital sovereignty

By Lucía Benítez Eyzaguirre


Abstract

The autonomy that the algorithms are achieving and, especially, artificial intelligence forces us to rethink the risks of lack of quality in data, the fact that in general they are not disaggregated, and the biases and hidden aspects of the algorithms. Security and ethical issues are at the center of the decisions to be taken in Europe related to these issues. It looks like a big challenge, considering that we have not yet achieved even digital sovereignty.

AI and digital sovereignty

Algorithms organize and format our lives. Like a social and cultural software, they adapt to human behaviour and advance toward an autonomous existence. Yet we live oblivious to their capacity for control over inequality and over the surveillance of our lives, and at the margins of the imminent development of the Internet of Things and of artificial intelligence (AI), as if we could afford to ignore how they grow ever more independent of human decisions. Now, for example, the question has been raised for the first time of whether patent criteria will have to change, after an attempt to register as intellectual property inventions and designs produced by an artificial intelligence. For the moment, neither the European Union (EU) nor the United Kingdom has been willing to accept such an initiative without a debate on the role of AI and on the scenario of uncertainty this situation opens up.

It is in this context that a plurality of voices has begun to call for regulation of the technologies associated with AI, a brake on a future of autonomous, insecure development. Some of the GAFAM corporations (the group comprising the five largest technology companies in the world), such as Microsoft and Google, have already called for such regulation. Indeed, these tech giants even seem to be moving toward self-regulation on ethical and social-responsibility questions, given the reputational damage that failing to do so can cause. For the EU, the question means assessing and acknowledging the risks of AI spiralling out of control, above all in areas such as health and surveillance. Hence facial recognition in public places seems likely to be curbed in some Western countries in the coming years, to avert the risks observed in China.

To combat the risks of AI, we must begin by ensuring the quality of data and algorithms, investigating the biases they produce, and establishing responsibility for errors and criteria. AI is in many cases trained on datasets that are not disaggregated and are often already biased, which leads to distorted algorithms that poorly represent the population, and to partial, low-quality developments with dubious results. Despite the ever-growing volume of work done with massive data, there are hardly any technical studies of its human and social impact. For that reason, work such as Professor Matthew Fuller's is a classic resource for grasping the importance of transparency about how algorithms operate. Fuller proposes systems that guarantee the truthfulness of results, improve the model through a greater number of connections, and operate in ways that reveal social connections or expose how the capacity of the very systems analyzed with algorithms is often exceeded.
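The point about biased, non-representative training data can be illustrated with a toy example. The data below are invented purely for illustration: a naive model that simply reproduces historical approval rates per group inherits the skew of its training set.

```python
# Toy illustration (invented data): a model trained on skewed historical
# decisions reproduces that skew instead of correcting it.
from collections import Counter

# Hypothetical historical loan decisions, with group_b under-represented
# and disfavoured in the training data.
training_data = (
    [("group_a", "approved")] * 80 + [("group_a", "denied")] * 20 +
    [("group_b", "approved")] * 5  + [("group_b", "denied")] * 15
)

# A naive "historical rate per group" model simply replays past bias.
rates = {}
for group in ("group_a", "group_b"):
    outcomes = Counter(o for g, o in training_data if g == group)
    rates[group] = outcomes["approved"] / sum(outcomes.values())

print(rates)  # group_a is approved at 0.80, group_b at only 0.25
```

Auditing disaggregated rates like these is one concrete form of the transparency the paragraph above calls for.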

If we want to address the risks of AI, we must begin by achieving "algorithmic governability". This concept implies preventing the abuse and the control with which algorithms regulate our lives, or with which programming governs our routines and everyday activity. Such governance is a guarantee of transparency, with collective oversight of results by users and companies, and accountability for the use of information. Algorithms must guarantee the transparency and quality of data (what is known as open data), offer their own source code openly so that users can audit it, and be able to respond to complaints arising from citizen oversight. But it is also essential that an algorithm be loyal and fair, that is, that it avoid the discrimination suffered by women, minorities, and any other disadvantaged group. And in the case of an online algorithm, public APIs (Application Programming Interfaces) must also be taken into account, because they condition both data collection and the way commercial techniques are applied, concealing how information is appropriated.

This spirit is also reflected in the 2019 Zaragoza Declaration, which emerged from a debate among professionals and academics about adverse effects and potential risks. The declaration also sets out recommendations for the use of AI and publicizes its impacts and its evolution in society. It does so through five points covering the human and social dimensions, the transdisciplinary approach with which AI should be addressed, responsibility, and respect for rights, on the basis of its own code of ethics.

The Declaration stresses the need for developments serving public-interest policies and sustainability, but always on the basis of traceable, auditable systems, with a commitment to users to evaluate whether objectives are being met and to identify defects or deviations. On ethical questions, the Declaration proposes that programmers be trained not only technically but also ethically, socially, and humanistically, since software development must take these dimensions into account, along with different sources of knowledge and experience.

The Zaragoza Declaration also includes a "right to explanation" of algorithmic decisions whenever they come into play with people's fundamental rights. Although the European Union's General Data Protection Regulation has advanced digital rights, we are still very far from technological sovereignty in the French style. Since 2016, France has been governed by the "Digital Republic Law", which promotes auditable algorithms, net neutrality, open data, privacy protection and platform loyalty regarding consumers' information, the right to fibre and to an Internet connection, the right to be forgotten, digital inheritance, the obligation to report detected security breaches, and fines in data-protection matters.

 

Magma guide release announcement

January 29, 2020

By Vasilis Ververis, DATACTIVE

We are very pleased to announce that the magma guide has been released.

What is the magma guide?

An open-licensed, collaborative repository that provides the first publicly available research framework for people working to measure information controls and online censorship activities. In it, users can find the resources they need to perform their research more effectively and efficiently.

It is available at the following website: https://magma.lavafeld.org

The content of the guide represents industry best practices, developed in consultation with networking researchers, activists, and technologists. And it's evergreen, too: constantly updated with new content, resources, and tutorials. The host website is regularly updated and synced to a version control (Git) repository that members of the network measurements community can use to review, translate, and revise the content of the guide.

If you or someone you know is able to contribute content, please get in touch with us or read about how you can directly contribute to the guide.

All content of the magma guide (unless otherwise mentioned) is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).

Many thanks to everyone who helped make the magma guide a reality.

You may use any of the communication channels (listed on the contact page) to get in touch with us.

 

Vasilis Ververis is a research associate with DATACTIVE and a practitioner of the principles ~ undo / rebuild ~ the current centralization model of the internet. Their research deals with internet censorship and investigation of collateral damage via information controls and surveillance. Some recent affiliations: Humboldt-Universität zu Berlin, Germany; Universidade Estadual do Piaui, Brazil; University Institute of Lisbon, Portugal.

[BigDataSur] How Chilean activists used citizen-generated data to fight disinformation

by Tomás Dodds

Introduction
For over 80 days now, and with no end in sight, Chile has been in the grip of waves of social protests and cultural manifestations with tens of thousands of demonstrators taking to the streets across the country. For many, the upsurge of this social outburst has its roots in a civil society rebelling against an uncaring economic and political elite that has ruled the country since its return to democracy in 1990. Mass protests were soon followed by a muddle of misinformation, both online and in the traditional press. In this blog post, I provide insights into how Chilean activists, including journalists, filmmakers, and demonstrators themselves, have started using citizen-generated data to fight media disinformation and the government’s attempts to conceal cases of human rights violations from the public.

Background
The evening of October 18th, 2019 saw Chileans start to demand the end of a neoliberal economic system, perceived among citizens as the main cause of the social inequalities and political injustices of recent decades. However, demonstrations were met with brutal police repression and several corroborated cases of human rights violations, including sexual torture. To this day, information gathered by national and international non-governmental organizations shows that at least 26 people have died and more than 2,200 have been injured during the rallies.

Although I was raised in Chile, today I live in Amsterdam, so I could only follow the news like any other Chilean abroad: online. I set up a screen in my room streaming, on a loop, the YouTube channels of the prime-time and late-night news of major media outlets. During the day I constantly checked social media platforms like Facebook and Twitter, and from time to time I would get news and tips over WhatsApp or Signal from friends and fellow journalists in the field. Information started flooding every available digital space: a video posted on social media in the morning would have several different interpretations by that evening, and dissimilar explanations would be offered by experts across the entire media spectrum by night.

And this was only the start. Amidst the growing body of online videos and pictures showing evidence of excessive military force against demonstrators, Chilean President Sebastián Piñera sat down for a televised interview on CNN's Oppenheimer Presenta, where he claimed that many recordings circulating on social platforms like Facebook, Instagram, and Twitter had been either "misrepresenting events, or filmed outside of Chile." The President effectively argued that many of these videos were clearly "fake news" disseminated by foreign governments seeking to destabilize the country, like those of Venezuela and Cuba. Although Piñera later backed down from his claims, substantial doubts were already planted in Chileans' minds. How could the public be sure that the videos they were watching on their social networks were indeed real, contemporary, and locally filmed? How could someone prove that the images of soldiers shooting rubber bullets at unarmed civilians were not the result of a Castro-Chavista conspiracy, orchestrated by Venezuelan President Nicolás Maduro, as some tweets and posts seemed to claim with a bewildering lack of doubt? How could these stories be corroborated when most of them were absent from the traditional media outlets' agendas?

As a recent study suggests, unlike their parents or grandparents, the generation born in Chile after 1990 is less likely to self-censor its political opinions and shows a higher willingness to participate in public discussion. After all, they were born in democracy and do not carry grim memories of the dictatorship. This is also the generation of activists who, using digital methods, have taken it upon themselves to build the digital infrastructure that makes relevant information visible and, at the same time, accessible to an eager audience that cannot find in the traditional media the horror tales and stories that match those told by their friends and neighbors. Thus, different digital projects have started to gather and report data collected by a network of independent journalists, non-governmental organizations, and the protestors themselves, in order to engage politically with the reality of the events occurring on the streets. Of these new digital projects, I present here two that stand out in particular, and which I argue help to alleviate, or at least did for me, the uncertainty of news consumption in times of social unrest.


(Image courtesy of Osvaldo Pereira) 

From singular stories to collective data
Only four days after the beginning of the protests, journalists Miguel Paz and Nicolás Ríos started ChileRegistra.info (or Chile-Records in English), a repository of audio-visual material and information about the ongoing protests. ChileRegistra stores and distributes videos previously shared by volunteers and social network users who attended the rallies. According to these journalists, traditional media could not show videos of human rights violations shared on social networks because they were unable to verify them, and therefore would only broadcast images of riots and barricades, which in turn produced higher levels of mistrust between the demonstrators and the press.

As a response to this problem, the project has two main purposes. First, to create a "super database" of photos and videos of the protests and of military and police abuses. Second, to identify the creators of videos and photos already posted and shared on social networks, in order to make these users available as news sources or witnesses for both traditional media and prosecutors. The national newspaper La Tercera and Publimetro, among other national and international media outlets, have already used this platform to publish or broadcast data collected in the repository. Using this project, users were able to easily discredit Piñera's claims that many of these videos were being recorded abroad.
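Both purposes hinge on keeping provenance metadata alongside each piece of footage, so that a clip can later serve as verifiable evidence. A minimal sketch of what one such record might contain follows; the fields are hypothetical illustrations, not ChileRegistra's actual schema:

```python
# Hypothetical record for one piece of citizen footage in a verification
# database. Field names are illustrative, not ChileRegistra's real schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FootageRecord:
    url: str                                  # where the video was originally posted
    recorded_at: str                          # ISO 8601 timestamp of the recording
    location: str                             # city or coordinates where it was filmed
    uploader_contact: Optional[str] = None    # witness reachable by press or prosecutors
    verified: bool = False                    # has provenance been independently confirmed?
    tags: list = field(default_factory=list)  # e.g. ["police", "rubber-bullets"]

# Example entry: a verified clip whose timestamp and location rebut the
# claim that it was "filmed outside of Chile".
record = FootageRecord(
    url="https://example.org/video123",
    recorded_at="2019-10-22T18:30:00-03:00",
    location="Santiago, Plaza Italia",
    verified=True,
    tags=["protest"],
)
print(record.verified)  # True
```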

The second project I would like to draw attention to is Proyecto AMA (the Audio-visual Memory Archive Project in English). AMA is a collective of journalists, photographers, and filmmakers who have been interviewing victims of human rights violations during the protests. Using the Knight Lab's StoryMap tools, AMA's users can also track where and when these violations took place, and read the personal stories behind videos they most probably already saw online. According to their website, members of this project "feel the urgent need to generate a memory file with the images shared on social networks, and give voice and face to the stories of victims of police, military and civil violence in Chile."

The two projects certainly take different approaches to generating content. While ChileRegistra relies on collecting data from social media and on citizen journalists uploading audio-visual material, Proyecto AMA's members interview and collect testimonies from victims of repression and brutality. Although the physical and technological boundaries of each media platform are still present, these projects complement each other in a cross-media effort that plays precisely to the strengths of each platform used to inform the activists' work.

New sources for informed-activism
These projects sit at the intersection of technology and social justice, between the ideation and application of a new digital-oriented, computer-assisted reporting. Moreover, the creation and continuous updating of these "bottom-up" data sets detailing serious human rights violations have not only been used to further the social movements; they also point to the need digital activists have to gather, organize, classify and, perhaps more importantly, corroborate information in times of social unrest.

As long as Chileans keep taking to the streets, this civil revolution presents the opportunity to observe new ways of activism, including the use of independently-gathered data by non-traditional media and the collection of evidence and testimonies from victims of police and military brutality in the streets, hospitals, and prisons.

What can we, only relying on our remote gaze, learn from looking at the situation going on today in Chile? This movement has shown us how the public engagement of a fear-free generation and the development of a strong digital infrastructure are helping to shape collaborative data-based projects with deep democratic roots.

Lastly, let's hope that these projects, among others, also shed some light on how social movements can be empowered and engaged by new forms of activism that actively create their own data infrastructure in order to challenge existing power relations that seem resistant to fading into history.

 

[blog] Why Psychologists need New Media Theory

by Salvatore Romano

 

I’m a graduate student at the University of Padova, Italy. I’m studying Social Psychology, and I spent four months doing an Erasmus Internship with the DATACTIVE team in Amsterdam.

 

It's not so common to find a psychology student in a Media Studies department, and some of my Italian colleagues asked me the reasons for my choice. So I would like to offer four good reasons for a psychology student to take an interest in New Media Theory and Digital Humanities. In doing so, I will cite some articles as a starting point for other colleagues who would like to study similar issues.

I participated in the "Digital Methods Summer School," which was an excellent way to get a general overview of the different topics and methodologies in use in the department. In just two weeks we discussed many things: from a sociological point of view on the Syrian war to an anthropological reading of alt-right memes, by way of semantic analysis and data-scraping tools. In the following months I had the chance to deepen the critical approach and the activist's point of view by collaborating with the Tracking Exposed project. The main question that drove my engagement throughout the period was: "what reflections should we make before using the so-called 'big data' made available by digital media?"

The first important point to note is: research through media should always also be research about media. It is possible to use these data to investigate the human mind, and not just to make claims about the medium itself; however, specific knowledge about the medium remains essential. New Media theory is interesting not only because it tells you what new media are, but because it is crucial for understanding how to use new media data to answer questions from various fields of study. That is why we, also as psychologists, can benefit from the discussion.

The second compelling reason is that you need specific, in-depth knowledge to deal with the technical problems related to digital media and its data. I experienced some of the difficulties you can face while researching social media data: most of the time you need to build your own research tools, because no one had your exact question before you, or at least you need to be able to adapt someone else's tool to your needs. And this is just the beginning: to keep your (or others') tool working, you need to update it very often, sometimes while fighting a company that tries to obstruct independent research as much as possible. In general, the world of digital media changes much faster than traditional media; a new trendy platform can appear every year. Staying up to date is a real challenge, and we cannot turn a blind eye to any of this.

Precisely for that reason, my third reflection concerns the reliability of the data we use for psychological research. Especially in social psychology, students are used to validating their hypotheses with questionnaires and experiments. With those methodologies, measurement error is largely controlled by the investigator, who creates the sample and ensures that the experimental conditions are respected. Big data, by contrast, offers social science the possibility of tracing significant collective dynamics down to single interactions, as long as you can obtain those data and analyze them properly. To seize this opportunity, we analyze databases that were not recorded by us and that lack an experimental environment (for example, when using the Facebook API). This lack of independence can introduce distortions attributable to the standardization operated by social media platforms, which the researcher cannot monitor. Moreover, using APIs without general knowledge of the medium that recorded those data is really dangerous, because the chances of misunderstanding the authentic meaning of the communication we analyze are high.

Even if we don't administer a test directly to subjects, and even if we don't draw conclusions from an experimental set-up, we still need scientific rigor when analyzing big data produced by digital media. It is essential to build our own tools so that we can create databases independently, and it is necessary to know the medium in order to reduce misunderstandings. All of this is something we, also as psychologists, can learn from a Media Studies approach.

The fourth point concerns how digital media implement psychological theory to optimize their design. These platforms use psychology to increase engagement (and profits), while psychologists very rarely use the data stored by those same platforms to improve psychological knowledge. Most of the time, powerful multinational corporations play with targeted advertising, escalating to psychological manipulation, while many psychologists struggle to grasp the real potential of those data.

Concrete examples of what we could do include analyzing the hidden effects of the Dark Patterns Facebook adopts to glue you to the screen; the "Research Personas" method for uncovering the affective charge created by apps like Tinder; and the graphical representation of the personalization process at work in the YouTube algorithm.

 

In general, I think it is essential for us, as academic psychologists, to test all the possible effects of these new communication platforms, not relying solely on the analyses each company makes of itself; we need to produce independent, public research instead. The fundamental discussion about how to build our collective communications system should be driven by these kinds of investigations, and should not just uncritically follow what is "good" for the companies themselves.

 

Off the Beaten Path: Human rights advocacy to change the Internet infrastructure

Report on Public Interest Internet Infrastructure workshop held at Harvard University in September 2019

by Corinne Cath-Speth and Niels ten Oever

Introduction

Surveillance-based business model[s] force people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse.

Choice words from the latest report published by Amnesty International, in which it considers the human rights implications of Big Tech's extractive business model. Its conclusions are bleak: the terms of service under which we engage in social media and search are diametrically opposed to human rights. This, however, comes as no surprise to the academics and activists who have been highlighting the Internet's negative ramifications over the past decade. In this blog, we present some thoughts on the promises and perils of human rights advocacy aimed at changing computer, rather than legal, code. It draws on insights shared during a two-day workshop on public interest advocacy and design in Internet governance processes, with a particular focus on Internet standards. The workshop, entitled "Future Paths to a Public Interest Internet Infrastructure", took place in the fall of 2019 at the Harvard Kennedy School in Cambridge, Massachusetts. It brought together 26 academics, activists, technologists, civil servants, and private sector representatives from 12 countries.

Concerns at the intersection of Internet governance and society span well beyond (or rather, below) those touching on social media, search engines, or e-commerce. They also include technologies, like Internet standards and protocols, that most of us have never seen but rely on for our day-to-day use of the Internet. The development and governance of these technologies is increasingly subject to the scrutiny of public interest advocates. This is not that surprising, given the history of struggles over power, norms, and values that colour the development of global communications infrastructures, like the phone, the telegraph, and Internet standards.

The advocates currently participating in governance and standards bodies are legion: they range from the American Civil Liberties Union (ACLU) to various Centres for Internet and Society to the C-suites of tech companies. Their theories of change are rooted in the idea that digital technologies shape communication in ways that can impede or enable the exercise of rights. Their tactics focus on direct engagement with companies, often through the technical working groups of the key Internet governance organizations. Little, however, is known about these advocacy efforts. Like the standards they focus on, they are largely invisible. The ferocity of public debate about the Internet's negative impact on society, as well as growing condemnation of industry-led tech ethics efforts, calls for them to be brought to light.

Documenting Workshop Discussions

The discussion at the workshop took us from the very top of the Internet’s stack, where our social media and search applications live, to its depths where sharks chew on Internet cables. We discussed expanding, collapsing, horizontally and vertically integrating the Internet’s stack, and even doing away with the concept all together. Likewise, we discussed what it means to do public interest advocacy aimed at changing the Internet’s infrastructure, what “public interest” entails as a concept, how different stakeholders can be effective advocates of it, and what it takes to study it. We do not aim to provide definitive answers. Rather, we will highlight three discussions that show where participants diverged and converged on their respective path(s) towards including public interest considerations in the Internet’s infrastructure.

  • Pragmatism and its politics: How and when public interest advocates should team up with colleagues in the private sector or government was a crucial discussion during the workshop. It revealed that cross-industry cooperation often puts public interest advocates between a rock and a hard place: how do you know when cooperation turns into co-optation? Many took a "pragmatist" position, acknowledging that their concerns around tech development often stemmed from core business decisions, which they considered beyond their influence. However, they argued, this was insufficient reason to write off strategic cooperation to move the technical needle, even if it meant much of their work focused on treating symptoms rather than causes. The turn to pragmatism highlighted an underlying concern: as with most social values, "public interest" means different things to different people. This in turn implies that public interest representatives contend with difficult choices about strategic collaboration not only across sectors but also within them. This tension is both irresolvable and interesting, for the debate and careful articulation of advocacy positions it requires. As one participant optimistically quipped: that is helpful, because now at least I know where you are going and what it takes for us to get there.
  • Shrinking space for civil society: Civil society organisations trying to raise public interest considerations in Internet governance are fighting on multiple fronts. Within Internet governance organisations they contend with inherent hurdles: the power differentials between corporate and non-commercial participants; the lack of civil society funding for work seen as technically opaque and difficult to explain to funders; the technical learning curve; the lack of consensus among allied organisations; and the confrontational culture of Internet standardisation bodies. At the same time, they operate in the broader context of a shrinking space for civil society. In many countries, the regulatory environment makes it nearly impossible to be an effective civil society organisation. The question then becomes how to grow and sustain civil society participation in the development of the Internet's infrastructure in the face of the internal and external pressures that limit it.
  • What is the endgame? For some, getting the tech right was their main concern. Others argued that this was too narrow an endgame for public interest representation in Internet governance: focusing on the tech is necessary but insufficient. Code, the participants agreed, is not the pinnacle of societal change. In order for these interventions to have ramifications beyond their direct context, they need to connect to existing work done outside of a limited number of Internet standardisation bodies. Many of the participants were actively creating these necessary connections to other technical communities, by talking to Internet Service Providers (ISPs) and other Internet governance stakeholders. Yet many agreed that ensuring the Internet’s infrastructure reflects particular articulations of “the public interest” requires policy as much as protocol intervention.

These three discussions only scratch the surface of the conversation during the workshop. If you are interested in learning more, please see here for the full workshop report. The social movements bringing a range of public interest considerations (from civil liberties, to social justice, to human rights) to the Internet infrastructure and its governance processes will keep evolving, like the Internet’s infrastructure itself. This blog should thus, as is good practice in academia, engineering, and activism alike, be seen as documentation of known issues and efforts at the current moment, rather than a singular path forward. It provides a departure point to further develop this conversation to include a broader range of stakeholders, network-engaged scholars, and practitioners.

The workshop was organised by:

  • Niels ten Oever, DATACTIVE, University of Amsterdam
  • Corinne Cath-Speth, Oxford Internet Institute, Digital Ethics Lab, University of Oxford
  • Beatrice Martini, Digital HKS, Harvard Kennedy School

We would like to thank the Harvard Kennedy School, ARTICLE19, Ford Foundation, MacArthur Foundation, Open Technology Fund, European Research Council, DATACTIVE, and the Amsterdam School for Globalisation Studies for their generous support that made this workshop possible.


Internet governance, standards, and infrastructure