Urbano Reviglio, Ph.D. candidate at the University of Bologna, in collaboration with Claudio Agosti, the brain behind tracking.exposed, just published a new academic article on Algorithmic Sovereignty in Social Media + Society (SAGE). Find an extended abstract below, and the full paper here.
Every day, algorithms update a profile of “who you are” based on your past preferences, activities, networks and behaviours in order to make future-oriented predictions and suggest news (e.g. Facebook and Twitter), videos (e.g. YouTube), movies (e.g. Netflix), songs (e.g. Spotify), products (e.g. Amazon) and, of course, ads. These algorithms define the boundaries of your Internet experience, affecting, steering and nudging your information consumption, your preferences, and even your personal relations.
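To make this concrete, here is a minimal sketch of what such an implicit profile update could look like. The topic names, the learning rate and the scoring are our own illustrative assumptions, not any platform’s actual code.

```python
# Toy sketch of implicit personalization: a profile of "who you are" is
# nudged by behavioural signals alone (here, dwell time), never by an
# explicitly stated preference. All names and numbers are assumptions.

def update_profile(profile: dict, item_topics: dict, dwell_seconds: float,
                   learning_rate: float = 0.1) -> None:
    """Shift the profile toward whatever you just lingered on."""
    strength = min(dwell_seconds / 30.0, 1.0)  # cap the behavioural signal
    for topic, weight in item_topics.items():
        old = profile.get(topic, 0.0)
        profile[topic] = old + learning_rate * strength * (weight - old)

def predict_interest(profile: dict, item_topics: dict) -> float:
    """Future-oriented prediction: how likely you are to engage next."""
    return sum(profile.get(t, 0.0) * w for t, w in item_topics.items())

profile = {"politics": 0.2, "sports": 0.5, "music": 0.3}
update_profile(profile, {"politics": 1.0}, dwell_seconds=45)  # you lingered
print(predict_interest(profile, {"politics": 0.8, "music": 0.2}))
```

Note how nothing in this loop ever asks what you want: the prediction feeds only on what you did.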
Two paradigmatic (and likely most influential) examples clarify the importance of this process. On Facebook, you encounter on average 350 posts, prioritized out of about 1,500. You are thus exposed to only about 25% of the available information, while roughly 75% remains hidden. It is Facebook’s newsfeed algorithm that chooses for you, and it is rather good at it. Think also of YouTube: its recommendations already drive more than 70% of the time you spend on the platform, meaning you are mostly “choosing” from a pre-determined set of possibilities. In fact, 90% of the ‘related content’ on the right side of the website is already personalized for you. Yet this process occurs largely beyond your control, and it is mostly based on implicit personalization (behavioural data collected from subconscious activity, such as clicks and time spent) rather than on deliberate, expressed preferences. Worryingly, implicit personalization might become the default, essentially because you may be satisfied enough never to question the process. Do you really think the personalization that recommends what you read and watch is indeed the best you could experience?
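A similarly hedged sketch shows the filtering step itself: score every candidate post by predicted engagement and keep only the top slots. The signal names, weights and counts below are assumptions for illustration; Facebook’s actual ranking is proprietary and far more complex.

```python
# Illustrative feed ranking: ~1,500 candidates in, ~350 posts out.
# Signals and weights are invented; the top-k filtering idea is the point.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    past_clicks: float      # how often you clicked similar content
    dwell_minutes: float    # time you spent on similar content
    author_affinity: float  # how much you interact with this author

def predicted_engagement(post: Post) -> float:
    """Score a post from subconscious behavioural signals alone."""
    return (0.5 * post.past_clicks
            + 0.3 * post.dwell_minutes
            + 0.2 * post.author_affinity)

def build_feed(candidates: list, slots: int = 350) -> list:
    """Rank all candidates and silently hide everything below the cut,
    i.e. roughly 75% of what your network produced."""
    ranked = sorted(candidates, key=predicted_engagement, reverse=True)
    return ranked[:slots]
```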
Personalization is not what mainstream social media platforms narrate. There are a number of fundamental assumptions nowadays shared by most researchers, and these need clarification. Profiling technologies that enable personalization create a kind of knowledge about you that is inherently probabilistic. Personalization, however, is not exactly ‘personal’. Profiling is indeed a matter of pattern recognition, which is comparable to categorization, generalization and stereotyping (see the sketch below). Algorithms cannot capture the complexity of your self. They can, however, influence your sense of self. As such, profiling algorithms can trivialize your preferences and, at the same time, steer you to conform to the status quo of actions chosen by your ‘past selves’, narrowing your “aspirational self.” They can limit the diversity of information you are exposed to, and they can ultimately perpetuate existing inequalities. In other words, they can limit your informational self-determination. So, how can you fully trust proprietary algorithms that are designed for ‘engagement optimization’ (to keep you hooked to the screen as long as possible) and not explicitly designed for your personal growth and society’s cohesion?
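As a hedged illustration of profiling-as-stereotyping, consider a nearest-centroid classifier: the probabilistic ‘you’ is simply whichever pre-computed category your behaviour vector happens to sit closest to. The segment names and numbers here are invented.

```python
# Toy nearest-centroid profiling: pattern recognition as categorization.
# Segments and behaviour dimensions are invented for illustration.
import math

SEGMENTS = {  # centroids over (news clicks, gossip clicks, night usage)
    "engaged citizen": (0.8, 0.1, 0.2),
    "gossip lover":    (0.1, 0.9, 0.5),
    "night scroller":  (0.3, 0.4, 0.9),
}

def assign_stereotype(behaviour: tuple) -> str:
    """Reduce a person to the nearest pre-existing category."""
    return min(SEGMENTS, key=lambda s: math.dist(SEGMENTS[s], behaviour))

print(assign_stereotype((0.2, 0.7, 0.6)))  # -> "gossip lover"
```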
One of the most concerning problems is that personalization algorithms are increasingly ‘addictive by design’. Human behavior can indeed be easily manipulated by priming and conditioning, using rewards and punishments. Algorithms can autonomously explore manipulative strategies that are detrimental to you. For example, they can use techniques such as A/B testing to experiment with various messages until they find the versions that best exploit your vulnerabilities. Compulsion loops are already found in a wide range of social media. Research suggests that such loops work via variable-rate reinforcement, in which rewards are delivered unpredictably: after a variable number of actions a reward is given, as in slot machines. This unpredictability affects the brain’s dopamine pathways in ways that magnify rewards. You think you liked that post… but you may have been manipulated into liking it after several boring posts, with perfect timing. Consider that just dozens of Facebook Likes can reveal highly accurate correlations; hundreds of likes can predict your personality better than your mother could, research suggests. This can easily be exploited, for example if you are vulnerable to moral outrage: researchers have found that each word of moral outrage added to a tweet raises its retweet rate by 17%. Algorithms know that, and could feed you the “right” content at the right time.
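A minimal epsilon-greedy bandit sketch shows how such automated A/B testing can work: serve random message variants occasionally, otherwise exploit whichever version has drawn the most engagement so far. The variants and parameters are invented; this is the generic technique, not any platform’s code.

```python
# Epsilon-greedy A/B testing sketch: the loop converges on whatever variant
# best exploits users' reactions, good for them or not. All values invented.
import random

variants = ["neutral headline", "mild outrage headline", "strong outrage headline"]
shows = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

def pick_variant(epsilon: float = 0.1) -> str:
    """Explore a random variant with probability epsilon,
    otherwise exploit the best click-through rate seen so far."""
    if random.random() < epsilon or all(n == 0 for n in shows.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / max(shows[v], 1))

def record_outcome(variant: str, clicked: bool) -> None:
    shows[variant] += 1
    clicks[variant] += int(clicked)
```

Run against millions of users, such a loop needs no understanding of outrage to end up amplifying it; it only needs the clicks.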
As a matter of fact, personalization systems deeply affect public opinion, often negatively. For a growing number of academics, activists, policy-makers and citizens, the concern is that social media more generally are degrading our attention spans, our common base of facts, and our capacity for complexity and nuanced critical thinking, hindering our ability to construct the shared agendas needed to solve the epochal challenges we all face. This supposedly degraded and degrading capacity for collective action arguably represents “the climate change of culture.” Yet research on the risks posed by social media, and more specifically by their personalization systems, is still contradictory; these risks are very hard to prove and, ultimately, to mitigate. In light of the fast-changing media landscape, many studies rapidly become outdated, and this contributes to the broader crisis in the study of algorithms: they are “black-boxed,” meaning their functioning is opaque and their interpretability may not be clear even to their own engineers. Moreover, there are no easy social media alternatives one can join to meet friends and share information. Such alternatives might one day spread, but until then billions of people worldwide have to rely on opaque personalization systems that may ultimately impoverish them. These systems are an essential and increasingly valuable public instrument for mediating information and relations. And considering that they introduce a new form of power, mass behavioral prediction and modification, that is nowadays concentrated in very few tech companies, there is a clear need to radically tackle these risks and concerns now. But how?
By analyzing the challenges, governance and regulation of personalization, we argue in this paper that we as a society need to frame, discuss and ultimately grant all users sovereignty over personalization algorithms. More generally, by ‘algorithmic sovereignty’ in social media we mean the regulation of information filtering and personalization design choices according to democratic principles, to set their scope for private purposes and to harness their power for the public good. In other words, to open the black-boxed personalization algorithms of (mainstream) social media to citizens and to independent and public institutions. In doing so, we also explore specific experiences, projects and policies that aim to increase users’ agency. Ultimately, we offer a preliminary outline of the basic legal, theoretical, technical and social preconditions needed to attain what we define as algorithmic sovereignty. To regain trust between users and platforms, personalization algorithms need to be seen not as a form of legitimate hedonistic subjugation, but as an opportunity for new forms of individual liberation and social awareness. And this can only occur if citizens as well as democratic institutions have the right and the capacity to make self-determined choices about these legally private (but essentially public) personalization systems. As we argue throughout the paper, we believe that such an endeavor is within reach, and that public institutions and civil society could and should eventually sustain its realization.