The first meeting in our NOS-HS workshop series Nordic Perspectives on Algorithmic Systems: Concepts, Methods, and Interventions took place on May 22 and 23 in Stockholm. The goal of the workshop series is to develop a Nordic approach to critical algorithm studies, and this first workshop focused on metaphors and concepts that could push debates about algorithmic systems forward. In addition, the workshop series aims to establish a Nordic network for those interested in algorithm studies.
We think it is safe to say that the first workshop took successful steps towards these goals. We had two intense days of brainstorming and exchanging ideas with 16 participants from Helsinki, Tampere, Stockholm, and Copenhagen, representing fields ranging from Human-Computer Interaction and Software Development to Sociology and Philosophy of Science.
On the first day, we heard short presentations from each participant, along with introductions to specific concepts or metaphors they consider relevant for approaching and thinking about algorithmic systems. On the basis of these discussions, we collated conceptual maps of the various ways to conceive of algorithms. On the second day, we discussed, in pairs, articles that each participant had brought along as examples of inspiring work. We also discussed optimistic/constructive and pessimistic/critical approaches to technology.
Here are selected takeaways from our discussions, covering both conceptual approaches to algorithmic systems and thoughts on how to approach the debate more generally:
Control, care, and empowerment
One central issue in thinking about algorithmic systems concerns the motivations and justifications for the use of algorithms. In this respect, the distinction between control, care, and empowerment was brought up as a useful way of elucidating the different logics of using algorithms. While the logic of control relates to algorithmic surveillance and the aim of governing or managing behavior, the logic of care aims at supporting certain forms of behavior rather than preventing others. For instance, we discussed the work of content moderators on discussion forums, where moderators wish that automated methods could free them to foster and guide discussion instead of focusing, as they currently must, on deciding what content to delete and what to allow. The important point here is that these different aims carry divergent justifications for the use of algorithms: while surveillance as control is often justified in terms of necessity and protection, the legitimacy of algorithmic care rests on the thriving and well-being of its subjects. In contrast to both, the logic of empowerment justifies the use of algorithms in terms of performance increases and efficiency, and is grounded in ideals such as progress and development. Although empowerment aims at providing people with increased capabilities for action, its underlying principle of optimization can also become self-serving, with people turning into raw material in the quest to optimize shallow and cheap quantitative metrics.
Optimization and resilience
The concept of optimization was discussed in particular by reference to the work of Halpern and colleagues [1] on smart cities. In developing algorithms with the aim of optimization, improving the system’s performance in terms of quantified metrics can become an end in itself, superseding conscious planning and deliberation in organizing action. Optimal performance carries a rhetorical force that can work to legitimate algorithmic management of ever more aspects of life. As Halpern and colleagues note, optimization serves to justify the use of notions such as “smartness” in relation to systems that seek optimal solutions to predefined problems. From the perspective of optimization, then, the crucial question to ask about algorithmic systems might not concern their performance in the technical sense, but rather the choices made when defining the system’s goals and its means of finding optimal solutions.
Another notion connected to the idea of autonomously operating “smart” systems was that of resilience, which denotes a system’s capacity to change in order to survive external perturbations [1]. While the stability or robustness of algorithmic systems is their ability to maintain fixed functioning under external influences, resilience concerns the temporal dimension and lifespan of these systems: their ability to evolve and adapt their behavior so as to “live” through changing environmental conditions. The resilience of an algorithmic system then depends not only on its ability to find optimal solutions to problems, but also on the ability to keep the system in operation. In this work, human efforts in repair and maintenance are likely to be crucial.
Repair, temporality, and decay of software
A recurrent theme concerned algorithms as implemented in software, and the temporal dimension in the life-course of software systems. The temporality of software becomes central through the gradual decay of legacy technologies and the care required to keep them operational. Repair work is also part of the lifetime of algorithms: hardware and software systems are implemented in evolving programming languages and within divergent organizational settings, and consequently require constant maintenance and monitoring [cf. 2]. The notion of repair connects with multiple themes discussed during the workshop, for instance the optimization of algorithmic processes and the role of human agency in algorithmic systems. As such, human-algorithm interactions can be thought of as involving not only a continuous process of interpretation, but also the work of correcting and explaining errors and idiosyncrasies in results, devising workarounds to adapt tools to diverging goals, and keeping software and hardware implementations operational.
Human agency and gaming in algorithmic systems
One of the topics brought up was the question of human agency in relation to algorithmic systems. We discussed how systems could be rehumanized by bringing the human work that goes into them back into the spotlight, making visible the labor required to maintain, train, and develop different kinds of systems. On the other hand, algorithms are used both to enable and to limit certain forms of action, raising questions about what people can do with technology to expand their possibilities, and how technology can also be used to limit human potential. One suggested way to approach the relationship between algorithms and humans was to focus on the interaction between them: we might learn a lot from observing empirically what happens when humans encounter algorithmic systems.
Further, we discussed human agency towards these systems through concepts of games and algorithmic resistance. These approaches highlight the human potential to find vulnerabilities or spaces of intervention, and to act against or otherwise manipulate systems, be it for personal gain or with an activist aim of creating a more just world. Whatever the reason for acting against the system is, a question arises: What does it mean to win against an algorithm? So-called victories against these systems may be short-lived, as games or resistance do not happen in a vacuum: It is possible to win a battle but lose the war. This discussion highlighted how algorithmic systems, just like human beings, are situated in the wider society and its networks of relationships.
Power, objectivity, and bureaucracy
The notions of power and objectivity of algorithmic systems came up on several occasions during the workshop. These concepts are often discussed in the critical data and algorithm studies literature as well, with critics arguing against utopian hopes of unbiased knowledge production [e.g. 3] and pointing to the far-reaching societal consequences of algorithmic data processing and classification [e.g. 4]. During the workshop, however, the questions of the objectivity and power of algorithms were themselves questioned. For instance, debates about the power of algorithms would benefit from increased clarity, which could potentially be achieved by connecting the literature with extant accounts of power in political science, such as Stephen Lukes’ [5] theory of the three faces of power.
Similarly, the issue of algorithmic objectivity can take on several different meanings depending on whether the discussion focuses on hidden biases in data production or, for instance, on the mechanical objectivity [6] of algorithmic procedures. One particularly interesting metaphor for thinking about issues of objectivity in algorithmic systems is that of bureaucracy [e.g. 7], and the sense of objectivity conferred on action and decision-making through the establishment of rigid, explicit, and seemingly impartial rules [8]. The quest for such procedural objectivity [9] is likely to be present in efforts to automate decision-making in algorithmic systems as well. Comparing the effects of explicit rules on power relations within bureaucracies with algorithmic procedures in organizations could be one way to get a grasp on how power works within algorithmic systems.
Optimism, pessimism, and the notion of algorithm
Given the multitude of approaches present during the workshop, the question arose whether the notion of “algorithm” is itself useful for thinking about the technological and social phenomena we are interested in. While algorithms and algorithmic systems were the backdrop for our discussion, it became evident that the phenomena we were discussing are at once broader and more multifaceted than the term suggests. We started with algorithmic systems, but ended up discussing themes such as collaboration and the preconditions of human work, motivations and justifications for action, the maintenance and design of technology, temporality, and the discontinuities between interpretive and formal processes. This is likely as it should be, given that the aim of the workshop was to think about metaphors for discussing algorithms. Still, the variety and scope of these perspectives testify to the fuzziness of the notion of algorithm, and call attention to the need to delineate and clarify the central concepts that figure in discussions about algorithmic systems, and their connections to more longstanding discussions in various disciplines.
Related to these observations, our discussion on the second day about critical/pessimistic and constructive/optimistic attitudes towards algorithms called attention to the various ways in which understandings of technology can be oversimplified. In particular, we questioned the “naive” optimism and technological solutionism often attributed to the developers of technology in critical treatments. While critical approaches are important, self-contained discussions about the limitations and problems of technology risk oversimplifying the understanding of the “other side”. Such simplifications are unlikely to foster fruitful engagement with the communities developing new technologies. For us, this emphasizes the importance of reflexive thinking that takes seriously the risk of “naive” criticism in critical accounts of technology, and that does not situate social scientists outside the troubles of algorithmic systems.
By: Juho Pääkkönen, Jesse Haapoja & Airi Lampinen
The next workshop in the series will take place in the autumn in Copenhagen, with a focus on approaches and methods.
– –
[1] Halpern, O. et al. (2017). The smartness mandate: Notes towards a critique. Grey Room 68.
[2] Jackson, S. (2014). Rethinking repair. In T. Gillespie, P. Boczkowski and K. Foot (eds.), Media Technologies: Essays on Communication, Materiality, and Society. MIT Press.
[3] Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski and K. Foot (eds.), Media Technologies: Essays on Communication, Materiality, and Society. MIT Press.
[4] Ananny, M. (2016). Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology & Human Values 41(1).
[5] Lukes, S. (1974). Power: A radical view. London and New York: Macmillan.
[6] Daston, L. and Galison, P. (1992). The Image of Objectivity. Representations 40.
[7] Crozier, M. (1963). The bureaucratic phenomenon. Chicago: University of Chicago Press.
[8] Porter, T. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton University Press.
[9] Douglas, H. (2004). The Irreducible Complexity of Objectivity. Synthese 138.