
Lectio Praecursoria: Who is the algorithm? Interfacing the social, emotional, and algorithmic

Even a perfectly transparent algorithm – with all its laws and rules fully visible – might not tell us enough about the social impacts and entanglements of technology, suggests Laura Savolainen in her doctoral thesis “Who is the algorithm? Interfacing the social, emotional, and algorithmic”, which was examined at the University of Helsinki on September 1, 2023. This post is the lectio praecursoria presented at the public examination.

Sociology has always been interested in the ways in which social worlds are patterned: how they hold together, order themselves, and change over time. Studying the pervasiveness of formal rules and informal norms, social conventions, and the rituals of everyday life reveals that despite our tendency to regard ourselves as unique individuals, we behave quite programmatically – in the sense of following shared, often unspoken rules and reproducing collective practices.

Sociologists’ interest in social order has led us to study, among other things, rituals, forms of organization such as social hierarchies, and bureaucracies. Today, there is a powerful new set of rules on the sociological radar: algorithms. Given that big social media platforms dominate digital ecosystems and form central nodes in communication networks, the algorithmic systems of services like Instagram, TikTok, Facebook, and YouTube matter for the well-being, economic situation, and digital rights of individual users, as well as for us collectively – for instance, with regard to consumption or political behavior.

At the simplest level, an algorithm can be defined as a set of rules or instructions that process input to arrive at a desired output. This definition points to the datafication that underlies algorithmic media. Datafication refers to how everyday activities and interactions, and other previously qualitative aspects of life, are increasingly transformed into quantitative, digital information. This data can then be used in various ways by companies and other organizations, for instance, to aid decision-making or as material for new products. On social media platforms, machine learning algorithms are used to find patterns in behavioral data, select what is worthy of attention and for whom, and enact content policies and community guidelines. Platforms are invested in artificial intelligence to the extent that many leading machine learning scholars are employed by platform companies like Meta and Google, and these companies have their own richly resourced departments for AI research and development.
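
To make this definition concrete, the following is a deliberately simplified Python sketch of the “input – rules – output” logic described above. It is purely illustrative: the signals, weights, and function names are all invented, and no actual platform ranks content this simply.

```python
# A toy illustration of "an algorithm as a set of rules that process input
# to arrive at a desired output": datafied behavioral signals go in,
# a ranked feed comes out. All names and weights here are invented.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int          # datafied interactions
    shares: int
    watch_seconds: float

def engagement_score(post: Post) -> float:
    """Toy rule: weight the quantified signals and sum them."""
    return 1.0 * post.likes + 3.0 * post.shares + 0.1 * post.watch_seconds

def rank_feed(posts: list[Post]) -> list[Post]:
    """Output: posts ordered by how 'worthy of attention' the rules deem them."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("a", likes=120, shares=4, watch_seconds=300.0),
    Post("b", likes=15, shares=40, watch_seconds=900.0),
])
print([p.post_id for p in feed])  # ['b', 'a'] under these invented weights
```

Real systems replace such hand-set weights with machine-learned models, which is exactly what makes their decision principles harder to inspect.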

In Programmed Visions, Wendy Chun [1] discusses software as a kind of metaphor for the invisible causal forces whose effects we can observe in our daily lives, yet which we can never encounter directly or in their totality. Discussions of algorithms make similar claims, suggesting that with more transparency into the underlying laws of code, we could perhaps understand how social behavior is patterned online, and why things appear as they do.


Pervasive datafication and algorithms are often associated with surveillance and control. Indeed, the histories of quantification, formal rule-based decision-making, and statistical techniques are intimately related to the emergence of centralized government. Yet proponents of datafication and artificial intelligence maintain that data-driven techniques can in fact remove biased gatekeepers and allow for novel discoveries beyond human preconceptions and judgments.

The key research problem of this dissertation concerns precisely this paradoxical nature of algorithmic media. On the one hand, powerful algorithms are seen as an external force that monitors and manipulates social media users for the benefit of corporations, forcing people to act or consume in certain ways – for instance: “the TikTok algorithm promotes content by attractive creators”. On the other hand, machine learning algorithms are assumed to simply reflect human behavior and consumer demand. Thus, equally often, we hear that algorithms do not represent the platform’s will but empower consumers: if a content creator isn’t “doing well in the algorithm”, it is because their content “simply isn’t good enough, and doesn’t interest people”. From this latter perspective, when we encounter algorithmic outputs, we are engaging with ourselves and our society: machines simply reflect ourselves back to us, though sometimes in unpredictable ways. This ambivalence is also where the name of the dissertation stems from: Who is the algorithm?


The dissertation consists of four original publications and employs various research methods. I study algorithms in relation to broader socio-technical systems and the contexts in which they are used. Overall, I approach platform infrastructure and algorithmic logics through user experiences and interfaces, and through the effects of algorithms in practice. The four original papers contribute to ongoing discussions within social media research on topics such as disinformation, digital activism, algorithmic content moderation, and human agency in relation to algorithmic decision-making. The dissertation does not aim to analyze and draw general conclusions from a single empirical case. Instead, it illustrates, through several distinct studies, how algorithmic platforms and their use mutually shape one another. This relationship, however, is uneven: while users do influence what becomes visible, their agency is limited, since they have minimal control over how their actions are recorded as data and what goals the algorithmic system prioritizes.

To contextualize these uneven relationships, I identify three modes in which social ordering by algorithmic platforms is understood in public debate and the social scientific literature: the market mode, the ruler mode, and the game mode. I show how, in trying to understand how algorithms structure and steer social cooperation, previous work on algorithmic power has searched for analogies in social forms seemingly rife with formal laws and rules.

  1. From the point of view of the market, the algorithmic system connects supply and demand for content. Algorithms match user groups with complementary needs, and their interactions generate signals and incentives that drive further cooperation and competition. Algorithms, at bottom, merely help the laws of the market to work. Platforms indeed represent their algorithmic systems as enablers of market democracy in digital media.
  2. In contrast, the ruler mode draws attention to platform companies as autocrats in the digital realm. According to this perspective, it is they – and not some invisible hand of the (algorithmic) market – who design the rules and set the incentives. They make and enforce policy regarding appropriate and desirable behavior above and beyond what is legally expected of them: what counts as nudity or distasteful content, what is recommendable or non-recommendable, and what the punishment should be for not following the rules.
  3. The perspectives on algorithmic platforms as markets and as rulers pay little attention to people: that users act according to market signals or follow policies is simply assumed. The notion of algorithms as games, meanwhile, brings human agency back in. This perspective is more empirical in focus, showing how people in fact constitute a persistent design problem, making it difficult to engineer social order and cooperation. Users probe how recommendation and moderation algorithms work in order to side-step and exploit algorithmic rules in their own favor. The game metaphor depicts the relationship between platforms and users as a continuous game of cat and mouse.

The conceptual perspectives of the market, ruler, and game illustrate important aspects of how algorithms are employed to create social order, and how they subjectify users. However, I also problematize these three notions, illustrating some of their limits:

  1. First, the way platforms talk about their algorithms as creating markets for content makes it sound as if they merely facilitated and intermediated rational, pre-existing consumer choices and needs. Yet this story seems dubious, given the highly persuasive techniques involved in their design. If anything, what is being created is a market populism based on aggregated, largely unconscious behavioral signals, and rife with predictions that become self-fulfilling prophecies. Again, it needs to be asked: who is really deciding?
  2. Second, machine learning systems follow manually coded rules only in the sense that they are instructed to learn the actual decision-making principles from extremely multidimensional data, which puts considerable pressure on the idea of a centralized power in charge. They begin to appear as rulers in their own right, responsible not only for executing but also for setting and interpreting rules. Who is really in control?
  3. Third, the game metaphor can be challenged as a highly situational notion. People do not always feel like they are playing with or against algorithms when using social media, because digital platforms are designed to create a seamless user experience – one that people also enjoy and seek out.

The perspectives of the market, ruler, and game help us see an algorithmic system in a new analytical light by describing it in terms of some other system. Yet these provocations suggest that the three modes of algorithmic power identified by previous research fall short by overlooking the specificity of algorithmic mediation, making it feel as though we were dealing with nothing new at all: just another market, just another bureaucracy, just another strategic situation. If platforms were like efficient markets, they would be mere intermediaries without any consequential technicity of their own. If they are seen as all-powerful rulers, technology appears as a mere epiphenomenon of corporate power – a non-issue, again without any consequential role of its own. And if recommendation algorithms are like games we try to win or “beat”, why do we so often engage with them mindlessly, want them to decide for us, or end up feeling like we are the ones being played?


In terms of my own research, rather than seeking to conceptually pin down the “algorithmic”, I have been interested precisely in how it can be defined in contradictory ways simultaneously, how it entangles agencies, and how it is shaped by different contexts of use. What I suggest, then, is that we may need understandings that do not begin with a clear separation of causality between the social and the technological, but rather focus on their empirical interdependencies. The influential French philosopher and sociologist of science Bruno Latour argued that one pitfall of those whom he called “modernists” was thinking that scientific and technological “development” would lead to a greater level of human mastery, a separation of values from facts, and emancipation from constraints [2]. Instead, he argued, an entirely different process is at stake: namely, “a continuous movement toward a greater and greater level of attachments of things and people at an ever-expanding scale and at an ever-increasing degree of intimacy” (Latour, 2007: 107). To me, this notion of intimacy is very suggestive, even if it seems unintuitive at first, given that technology is often understood as cold or instrumental. I take intimacy, in this context, to refer to a relationship that involves mutual shaping and growing closeness over time. I ask: if we take this enmeshing – or, as I call it in the dissertation, interfacing – of the social and technological as a starting point, in what kind of light do the challenges of improving platform governance and enhancing user agency appear?

This perspective draws attention to unfolding socio-technical interdependencies: our societies – cultural practices and expectations, forms of socialization, professions, institutions, and media ecosystems – have already co-evolved with platforms, datafication, and algorithms. This intimate co-evolution is an ongoing process, and we tend to notice it only when something breaks down: there is a data breach, disinformation or other problematic content spreads in a flash, or we hear stories about social media radicalization. Suddenly, we wake up to the fact that our old ways of conceptualizing or regulating have become ineffective or outdated; think, for instance, of the current debate around how we should understand speech rights in the age of algorithmic amplification and non-recommendation.

This also means that there is no sense in putting our faith in some perfectly ethical or moral algorithm. It is, of course, absolutely crucial to correct for the discriminatory and marginalizing effects of algorithmic decision-making. Yet decisions over what should be made visible will always be biased – in other words, partisan, favoring certain values and causes over others. Importantly, there can be no perfectly ethical algorithm, because the societal and ethical impacts of algorithmic systems often relate to the ways in which technologies are used and appropriated, and to the contextual effects they have.

Indeed, it is not ‘the algorithm’ that, for instance, identifies and promotes divisive content. There is no such single source of agency. Rather, what is at stake is how platform algorithms and decision-making pipelines interact with social behaviors, tastes, and identities; the strategies of content creators; and market pressures. We need to learn to manage and re-balance this socio-technical whole. From this perspective, it seems much more important to create healthier incentives than to deal exclusively with the symptoms (for instance, by using machine learning models to delete harmful content).


So, it is often said that algorithms are too difficult to understand. But what if they are not? Maybe the real opacity and uncertainty concern precisely these unfolding socio-technical interdependencies. It is not enough to know the abstract principles of machine learning. Even a fully transparent algorithm – with all its laws and rules fully visible – might actually tell us quite little about its social impacts and entanglements, because the social world is by definition multistable, messy, and emergent. In addition to gaining visibility into the hidden rules of algorithms, we need real-world data about how algorithms work in practice: for instance, what they have promoted, demoted, and deleted, and with what real-world consequences in the lives of people and organizations. Understanding such processes requires a fundamentally multidisciplinary, mixed-methods research agenda.
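
As a hypothetical illustration of what such real-world data might look like, here is a sketch of an audit record for algorithmic decisions. This is not any platform’s actual logging schema; every field name is an assumption, meant only to show the kind of information that promotion, demotion, and deletion decisions would need to expose for research.

```python
# A hypothetical audit record for algorithmic decisions; the schema is invented.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    content_id: str
    action: str          # e.g. "promoted", "demoted", "deleted"
    component: str       # which part of the pipeline decided
    stated_reason: str   # policy or signal cited for the decision
    timestamp: datetime

log = [
    DecisionRecord("post-123", "demoted", "ranking-model-v2",
                   "low predicted engagement", datetime.now(timezone.utc)),
]
# Aggregated over time, such records would let researchers trace real-world
# consequences, rather than only inspecting the abstract rules of a model.
```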

All in all, based on my research and reflection, social media algorithms appear as distributed socio-technical systems. These systems are a mix of different interacting parts, and they are characterized both by strict rules set from the top and by emergent self-organization. It is true that complex dynamics and feedback loops exist in other spheres, too. Yet on social media, the circulation of information happens instantly, responsively, and largely without human supervision, which increases the level of uncertainty. A key challenge, then, is how to respond quickly to emerging concerns while also ensuring public deliberation, oversight, and legitimacy.
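
As a toy illustration of such a feedback loop, the following simulation – built on entirely invented assumptions – shows how exposure-proportional recommendation lets small random differences snowball into large visibility gaps: a minimal “rich-get-richer” dynamic, not a model of any real platform.

```python
# Toy feedback loop: items that have been recommended gain exposure,
# which makes them more likely to be recommended again.
import random

def simulate(n_items: int = 5, steps: int = 1000, seed: int = 42) -> list[int]:
    rng = random.Random(seed)
    exposure = [1] * n_items  # every item starts with one unit of attention
    for _ in range(steps):
        # Recommendation probability is proportional to past exposure:
        # the algorithmic rule and the emergent popularity feed each other.
        pick = rng.choices(range(n_items), weights=exposure)[0]
        exposure[pick] += 1
    return exposure

print(simulate())  # typically one or two items end up dominating
```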

To return to the beginning, sociology has a lot to offer in this remaking and rebuilding. This is because sociology is interested not only in how action is structured or determined by rules, but also in its inherent openness. Without agency and learning, there would be no social change; we would be forever repeating ourselves, in a kind of feedback loop. Indeed, everything that is built or organized by humans can be done in another way: artificiality is actually potential. Such a perspective allows us to imagine computation that would see and build the world differently. We tend to associate machine learning algorithms and statistical techniques with merely repeating what has happened in the past, and with reproducing the probable and the average, but they can equally guide us toward outliers. Rather than marrying computation with the logic of the neoliberal market, or with categorization based on social identities, could we try to think of other guiding principles for algorithmic pipelines and processes? Algorithmic systems could redistribute visibility more equitably and fairly; they could be designed to slow down rather than speed up viral, cascading phenomena; they could involve more human-machine collaboration; or they could be more pedagogical, allowing people to gain more meaningful knowledge about machine learning algorithms as they use them. In reviving our imagination about the various and always open-ended forms that social organization and cooperation may take in the digital realm, social science is a significant and valuable resource.
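
As a purely speculative sketch of one such alternative guiding principle – slowing down rather than speeding up cascades – the toy ranking rule below adds friction to rapidly spreading content. The function names and the damping rule are invented for illustration only.

```python
# Hypothetical "friction" ranking: the faster something cascades,
# the more its score is damped before it is shown again.
import math

def damped_score(base_relevance: float, shares_last_hour: int) -> float:
    friction = math.log1p(shares_last_hour)  # grows slowly but monotonically
    return base_relevance / (1.0 + friction)

# A viral post loses ranking weight relative to a slower-moving one,
# buying time for human review and public deliberation.
print(damped_score(0.9, shares_last_hour=5000))  # heavily damped
print(damped_score(0.7, shares_last_hour=3))     # barely damped
```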


The full dissertation is available at: http://hdl.handle.net/10138/564001

  1. Chun W (2011) Programmed Visions: Software and Memory. Cambridge, MA: MIT Press.
  2. Latour B (2008) “It’s Development, Stupid!” or: How to Modernize Modernization. Available at: http://www.bruno-latour.fr/sites/default/files/107-NORDHAUS%26SHELLENBERGER.pdf (accessed 20 August 2023).
