

[blog] Why Psychologists need New Media Theory

by Salvatore Romano

 

I’m a graduate student at the University of Padova, Italy. I’m studying Social Psychology, and I spent four months doing an Erasmus Internship with the DATACTIVE team in Amsterdam.

 

It’s not so common to find a psychology student in a Media Studies department, and some of my Italian colleagues asked me why I made this choice. So I would like to explain four good reasons for a student of psychology to get interested in New Media Theory and Digital Humanities. In doing so, I will cite some articles as starting points for other colleagues who would like to study similar issues.

I participated in the “Digital Methods Summer School,” which was an excellent way to get a general overview of the different topics and methodologies in use in the department. In just two weeks, we discussed many things: from a sociological point of view on the Syrian war to an anthropological reading of alt-right memes, by way of semantic analysis and data-scraping tools. In the following months, I had the chance to deepen the critical approach and the activist’s point of view by collaborating with the Tracking Exposed project. The main question that drove my engagement for the whole period was: “what reflections should we make before using the so-called ‘big data’ made available by digital media?”.

The first important point to note is: research through media should always also be research about media. It is possible to use these data to investigate the human mind, and not just to make claims about the medium itself; however, doing so still requires specific knowledge about the medium. New Media theory is interesting not only because it tells you what new media are, but because it is crucial for understanding how to use new media data to answer questions coming from various fields of study. That is why we, as psychologists, can also benefit from the discussion.

The second compelling reason is that you need specific, in-depth knowledge to deal with the technical problems related to digital media and their data. I experienced some of the difficulties you can face while researching social media data: most of the time you need to build your own research tools, because no one has asked your exact question before you, or at the very least you need to be able to adapt someone else’s tool to your needs. And this is just the beginning: to keep your (or others’) tools working, you need to update them frequently, sometimes while fighting a company that tries to obstruct independent research as much as possible. In general, the world of digital media changes much faster than traditional media; a new trendy platform can appear every year, staying up to date is a real challenge, and we cannot turn a blind eye to any of this.

Precisely for that reason, the third reflection I made concerns the reliability of the data we use for psychological research. Especially in social psychology, students are familiar with using questionnaires and experiments to validate their hypotheses. With those methodologies, measurement error is mostly controlled by the investigator, who creates the sample and ensures that the experimental conditions are respected. Big data, by contrast, offers the social sciences the possibility of tracing significant collective dynamics down to single interactions, as long as you can obtain those data and analyze them properly. To seize this opportunity, we analyze databases that were not recorded by us and that lack an experimental environment (for example, when using the Facebook API). This lack of independence can introduce distortions attributable to the standardization operated by social media platforms, which the researcher cannot monitor. Moreover, using APIs without general knowledge of the medium that recorded those data is genuinely dangerous, because the chances of misunderstanding the authentic meaning of the communication we analyze are high.

Even if we do not administer tests directly to subjects, or draw conclusions from an experimental set-up, we still need to reproduce scientific rigor when analyzing the big data produced by digital media. It is essential to build our own tools so that we can create databases independently; it is necessary to know the medium in order to reduce misunderstandings; and all of this is something we can learn from a Media Studies approach, even as psychologists.

The fourth point is about how digital media implement psychological theory to optimize their design. These platforms use psychology to increase engagement (and profits), while psychologists very rarely use the data stored by those same platforms to improve psychological knowledge. Most of the time, all-powerful multinational corporations play with targeted advertising, escalating into psychological manipulation, while many psychologists struggle to grasp the real potential of those data.

Concrete examples of what we could do include analyzing the hidden effects of the dark patterns Facebook adopts to glue you to the screen; the “Research Personas” method for uncovering the affective charge created by apps like Tinder; and the graphical representation of the personalization process at work in the YouTube algorithm.

 

In general, I think it is essential for us, as academic psychologists, to test all the possible effects of these new communication platforms, relying not just on the analyses a company makes about itself: we need instead to produce independent, public research. The fundamental discussion about how to build our collective communication systems should be driven by these kinds of investigations, and should not uncritically follow whatever is “good” for the companies themselves.

 


Stefania in Tel Aviv for the workshop “Algorithmic Knowledge in Culture and in the Media” (October 23-25)

On October 23-25, Stefania will be in Tel Aviv to take part in the international workshop “Algorithmic Knowledge in Culture and in the Media” at the Open University of Israel. The invitation-only workshop is organized by Eran Fisher, Anat Ben-David and Norma Musih. Stefania will present a paper on the ALEX project, DATACTIVE’s spin-off, as an experiment into algorithmic knowledge.

Unpacking the Effects of Personalization Algorithms: Experimental Methodologies and Their Ethical Challenges

Stefania Milan, University of Amsterdam

With social media platforms playing an ever more prominent role in today’s public sphere, concerns have been raised by multiple parties regarding the role of personalization algorithms in shaping people’s perception of the world around them. Personalization algorithms are accused of promoting the so-called ‘filter bubble’ (Pariser 2011) and suspected of intensifying political polarization. What’s more, said algorithms are shielded behind trade secrets, which contributes to their technical undecipherability (Pasquale 2015). Against this backdrop, the ALgorithms EXposed (ALEX) project has set out to unpack the effects of personalization algorithms, experimenting with methodologies, software development, and collaborations with hackers, nongovernmental organizations, and small enterprises. In this presentation, I will reflect on four aspects of the ALEX project as an experiment into algorithmic knowledge, namely: i) software development, illustrating the workings of the browser extensions facebook.tracking.exposed and youtube.tracking.exposed; ii) experimental collaborations within and beyond academia; iii) methodological challenges, including the use of bots; and iv) ethical challenges, in particular the development of data reuse protocols allowing users to volunteer their data for scientific research while safeguarding individual data sovereignty.


YouTube Algorithm Exposed: DMI Summer School project week 1

DATACTIVE participated in the first week of the Digital Methods Initiative summer school 2019 with a data sprint related to the side project ALEX. DATACTIVE’s insiders Davide and Jeroen, together with research associate and ALEX’s software developer Claudio Agosti, pitched a project aimed at exploring the logic of YouTube’s recommendation algorithm, using the ALEX-related browser extension youtube.tracking.exposed. ytTREX allows you to produce copies of the set of recommended videos, with the main purpose of investigating the logic of personalization and tracking behind the algorithm. During the week, together with a number of highly motivated students and researchers, we engaged in collective reflection, experiments, and analysis, fueled by Brexit talks, Gangnam Style beats, and the secret life of octopuses. Our main findings (previewed below, and detailed later in a wiki report) concern which factors (language settings, browsing behavior, previous views, domain of videos, etc.) help trigger the highest level of personalization in the recommended results.
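The core comparison behind such experiments can be sketched in a few lines. The following is a minimal illustration, not the actual ytTREX pipeline or its data format: it uses hypothetical video IDs and quantifies personalization as the (lack of) overlap between the recommendation sets that two differently configured profiles collected after watching the same video.

```python
def jaccard_similarity(a: set, b: set) -> float:
    """Overlap between two recommendation sets: 1.0 = identical, 0.0 = disjoint."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical recommendation sets collected by two profiles for the same video.
profile_clean = {"vidA", "vidB", "vidC", "vidD"}    # fresh browser, no history
profile_logged = {"vidA", "vidE", "vidF", "vidG"}   # profile with viewing history

overlap = jaccard_similarity(profile_clean, profile_logged)
personalization = 1 - overlap  # higher = results diverge more between profiles
print(f"overlap={overlap:.2f}, personalization={personalization:.2f}")
```

Repeating this comparison while varying one factor at a time (language settings, browsing history, logged-in state, etc.) is one way to estimate which factor contributes most to the divergence in recommended results.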

 

Algorithms Exposed: investigating YouTube – slides



Stefania at Science Foo

On July 12-14 Stefania will be at X in Mountain View, in Silicon Valley, as one of the invitees to Sci Foo. Science Foo is a series of interdisciplinary conferences organized by O’Reilly Media, Digital Science, Nature Publishing Group and Google. It is an “unconference focused on emerging technology, and is designed to encourage collaboration between scientists who would not typically work together”. Stefania plans to propose a session on ‘decolonizing data’.


Stefania at the Summer School on Methods for the Study of Political Participation and Mobilization, Florence

On June 4, Stefania gives a lecture on ethical issues in social movement and political participation research at the Summer School on Methods for the Study of Political Participation and Mobilization, in Florence, Italy.

The school is organised by the ECPR Standing Group on Participation and Mobilization and the Dipartimento di Scienze Politico-Sociali at the Scuola Normale Superiore.

 


Stefania at the Deutsche Physikalische Gesellschaft, Berlin

On April 9, Stefania was in Berlin to give a talk at the Magnus-Haus, the headquarters of the Deutsche Physikalische Gesellschaft (German Physical Society), as part of the Physik und Gesellschaft series.
The talk was entitled “Error 404: Social Life Not Found – How to bring politics back into the datafied society”, and was moderated by Prof. Dr. Wolfgang Eberhardt.

Abstract
Datafication – or the process of rendering into data aspects of social life that have never been quantified before – has altered the way we experience ourselves and exercise our citizenship today. Blanket surveillance and privacy infringements, however, are making citizens grow aware of the critical role of information as the new fabric of social life. As the advent of datafication and the automation turn threaten social life as we know it, how can we re-invent citizenship? How can we bring progressive politics back, to inform, among others, technological development and public policies? In this talk I will reflect on how politics and citizen agency are re-designed in light of the challenges and possibilities of big data and machine learning.