Nordic Perspectives on Algorithmic Systems: Notes from a Workshop on Metaphors and Concepts

The first meeting in our NOS-HS workshop series Nordic Perspectives on Algorithmic Systems: Concepts, Methods, and Interventions was organized on May 22 and 23 in Stockholm. The goal of the workshop series is to develop a Nordic approach to critical algorithm studies, with the first workshop focusing on developing metaphors and concepts useful for pushing debates about algorithmic systems forward. In addition, the workshop series aims to establish a network in the Nordics for those interested in algorithm studies.

We think it is safe to say that the first workshop took successful steps towards achieving these goals. We had an intense two days of brainstorming and exchanging ideas with 16 participants from Helsinki, Tampere, Stockholm and Copenhagen, representing fields ranging from Human-Computer Interaction and Software Development to Sociology and Philosophy of Science.

On the first day, we heard short presentations from each participant, along with introductions to specific concepts/metaphors they consider relevant for approaching and thinking about algorithmic systems. On the basis of these discussions, we collated conceptual maps of various ways to conceive of algorithms. On the second day, we discussed in pairs articles which each participant had brought along as examples of inspiring work. Further, we had a discussion about optimistic/constructive and pessimistic/critical approaches to discussing technology.

Here are selected takeaways from our discussions, covering both conceptual approaches to algorithmic systems and thoughts on how to approach the debate more generally:

Control, care, and empowerment

One central issue in thinking about algorithmic systems concerns the motivations and justifications for the use of algorithms. In this respect, the distinctions between control, care and empowerment were brought into the discussion as useful notions for elucidating the different logics of using algorithms. While the logic of control relates to algorithmic surveillance and the aim of governing or managing behavior, the aim inherent in the logic of care is that of supporting certain forms of behavior rather than preventing others. For instance, we discussed the work of content moderators on discussion forums, where moderators wish that automated methods could liberate them to work on fostering and guiding discussion instead of the current focus of deciding what content to delete and what to allow. The important point here is that these different aims encompass divergent justifications for the use of algorithms: while surveillance as control is often justified in terms of necessity and protection, the legitimacy of algorithmic care rests on the thriving and well-being of its subjects. Contrasting with these aims, the logic of empowerment justifies the use of algorithms in terms of performance increases and efficiency. The aim of empowerment is then grounded in ideals such as progress and development. Although empowerment aims at providing people with increased capabilities for action, its underlying principle of optimization can also become self-serving, with people turning into material in the quest for optimizing shallow and cheap quantitative metrics.

Optimization and resilience

The concept of optimization was discussed in particular by reference to the work of Halpern and others [1] on smart cities. In developing algorithms with the aim of optimization, improving the system’s performance in terms of quantified metrics can become an end in itself, which supersedes conscious planning and deliberation in organizing action. Optimal performance carries with it a rhetorical force, which can work to legitimate algorithmic management of increasingly many aspects of life. As such, as Orit Halpern and others note, optimization serves to justify the use of notions such as “smartness” in relation to systems which seek to find optimal solutions to predefined problems. Thus, from the perspective of optimization, the crucial question to ask about algorithmic systems might not concern their performance in the technical sense, but rather the choices made when defining the system’s goals and means of finding optimal solutions.

Another notion connected to the idea of autonomously operating “smart” systems was that of resilience, which denotes a system’s capacity to change in order to survive external perturbations [1]. While the stability or robustness of algorithmic systems is their ability to maintain fixed functioning under external influences, resilience concerns the temporal dimension and lifespan of these systems: their ability to evolve and adapt their behavior so as to “live” through changing environmental conditions. The resilience of an algorithmic system then depends not only on the ability to find optimal solutions to problems, but also on the ability to maintain the system’s operation. In this work, human efforts in repair and maintenance are likely to be crucial.

Repair, temporality, and decay of software

A recurrent theme concerned algorithms as implemented in software, and the temporal dimension in the life-course of software systems. The issue of the temporality of software becomes central through the gradual decay of legacy technologies and the care required to keep them operational. Repair work is also involved as part of the lifetime of algorithms, with hardware and software systems implemented in evolving programming languages and as part of divergent organizational settings, consequently requiring constant maintenance and monitoring [cf. 2]. The notion of repair connects with multiple themes discussed during the workshop, for instance optimizing algorithmic processes and the role of human agency in algorithmic systems. As such, human-algorithm interactions can be thought of as involving not only a continuous process of interpretation, but also work in correcting and explaining errors and idiosyncrasies in results, coming up with workaround solutions to adapt tools to diverging goals, and keeping software and hardware implementations operational.

Human agency and gaming in algorithmic systems

One of the topics brought up was the question of human agency in relation to algorithmic systems. We discussed how systems could be rehumanized by bringing the human work that goes into them back into the spotlight, making visible the labor that is required to maintain, train and develop different kinds of systems. On the other hand, algorithms both limit and enable certain forms of action, raising questions about what people can do with technology to expand their possibilities, and how technology can also be used to limit the potential of humans. One suggested way to approach the relationship that algorithms have with humans was to focus on the interaction between them – we might learn a lot from observing empirically what happens when humans encounter algorithmic systems.

Further, we discussed human agency towards these systems through concepts of games and algorithmic resistance. These approaches highlight the human potential to find vulnerabilities or spaces of intervention, and to act against or otherwise manipulate systems, be it for personal gain or with an activist aim of creating a more just world. Whatever the reason for acting against the system is, a question arises: What does it mean to win against an algorithm? So-called victories against these systems may be short-lived, as games or resistance do not happen in a vacuum: It is possible to win a battle but lose the war. This discussion highlighted how algorithmic systems, just like human beings, are situated in the wider society and its networks of relationships.

Power, objectivity, and bureaucracy

The notions of power and objectivity of algorithmic systems were discussed on multiple occasions during the workshop. These concepts are often brought up in the critical data and algorithm studies literature as well, with critics arguing against utopian hopes of unbiased knowledge production [e.g. 3], and pointing to the far-reaching societal consequences of algorithmic data processing and classification [e.g. 4]. However, during the workshop, the questions of objectivity and power of algorithms were themselves questioned. For instance, debates about the power of algorithms would benefit from increased clarity, which could potentially be achieved by connecting the literature with extant accounts of power in political science, such as Stephen Lukes’ [5] theory of the three faces of power.

Similarly, the issue of algorithmic objectivity can take on several different meanings depending on whether the discussion focuses on hidden biases in data production, or for instance the mechanical objectivity [6] of algorithmic procedures. One particularly interesting metaphor for thinking about issues of objectivity in algorithmic systems is that of bureaucracy [e.g. 7], and the sense of objectivity conferred on action and decision-making through the establishment of rigid, explicit, and seemingly impartial rules [8]. The quest for such procedural objectivity [9] is likely to be present in efforts to automate decision-making in algorithmic systems as well. Comparing the effects of explicit rules on power relations within bureaucracies with algorithmic procedures in organizations could be one way to get a grasp on how power works within algorithmic systems.

Optimism, pessimism, and the notion of algorithm

Given the multitude of different approaches present during the workshop, the question arose of the usefulness of the notion of “algorithm” itself in thinking about the technological and social phenomena we are interested in. While algorithms and algorithmic systems were the backdrop for our discussion, it became evident that the phenomena we were discussing are at once broader and more multifaceted. While we started with algorithmic systems, we ended up discussing themes such as collaboration and preconditions of human work, motivations and justifications for action, maintenance and design of technology, temporality, and discontinuities between interpretive and formal processes. This is likely as it should be, given that the aim of the workshop was to think about metaphors for discussing algorithms. However, the variety and scope of the perspectives testifies to the fuzziness of the notion of algorithm, and calls attention to the need for delineating and clarifying the central concepts which figure in discussions about algorithmic systems and their connections to more longstanding discussions in various disciplines.

Related to these observations, our discussion on the second day about critical/pessimistic and constructive/optimistic attitudes for approaching algorithms called attention to the various ways in which understandings of technology can be oversimplifying. In particular, the issue of “naive” optimism and technological solutionism, often attributed to the developers of technology in critical treatments, was called into question. While critical approaches are indeed important, self-sustained discussions about the limitations and problems of technology hold the danger of oversimplifying the understanding of the “other side”. Such simplifications are unlikely to foster fruitful engagement with communities engaged in developing new technologies. For us, this emphasizes the importance of reflexive thinking that takes seriously the risk of “naive” criticism in critical accounts of technology and does not situate social scientists outside the troubles of algorithmic systems.

By: Juho Pääkkönen, Jesse Haapoja & Airi Lampinen

The next workshop in the series will take place in the autumn in Copenhagen, with a focus on approaches and methods.

– –

[1] Halpern, O. et al. (2017). The smartness mandate: Notes towards a critique. Grey Room 68.

[2] Jackson, S. (2014). Rethinking repair. In T. Gillespie, P. Boczkowski and K. Foot (eds.), Media Technologies: Essays on Communication, Materiality, and Society. MIT Press.

[3] Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski and K. Foot (eds.), Media Technologies: Essays on Communication, Materiality, and Society. MIT Press.

[4] Ananny, M. (2016). Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology & Human Values 41(1).

[5] Lukes, S. (1974). Power: A radical view. London and New York: Macmillan.

[6] Daston, L. and Galison, P. (1992). The Image of Objectivity. Representations 40.

[7] Crozier, M. (1963). The bureaucratic phenomenon. Chicago: University of Chicago Press.

[8] Porter, T. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton University Press.

[9] Douglas, H. (2004). The Irreducible Complexity of Objectivity. Synthese 138.

Digital technologies, data analytics and social inequality

We were recently involved in organizing a working group on what might be called “digital inequalities” at the Annual Finnish Sociology Conference. Based on the working group, we reflect on the relationship between digital technologies and social inequalities, and on the role of critical scholarship in addressing the issue.

To paraphrase Kranzberg’s (1986) well-known first law of technology: while digital technologies and their capability to produce data are not a force for good or ill, they are not neutral either. With the increasing use of data analytics and new digital technologies, as well as the ever-intensifying hype over them, it is extremely important to examine the connection between technological and social divides.

A rich body of research on “digital divides” has focused on issues of unequal access to technology and differences in its usage (e.g. van Dijk, 2013). Aiming to expand the view beyond access and usage, Halford and Savage (2010) have proposed the concept of “digital social inequality”, emphasizing the interlinking of social disadvantages and digital technologies. This means that the development, use and effects of digital technologies are often related to social categories such as gender, race/ethnicity, age and social class.

Examining the divisions connected to the use of data, Andrejevic (2014) points out “the big data divide”, a concept with which he refers to the asymmetric relationship between those who are able to produce and use large quantities of data, and those who are the targets of data collection. This divide concerns not only access to data and the means of making use of it, but also differential access to ways of thinking about and using data. D’Ignazio and Klein (2019) further discuss the power structures inherent in the collection and usage of data, pointing out that these structures are often made invisible and thus taken as an objective viewpoint in which “the numbers speak for themselves”. Through many empirical examples, D’Ignazio and Klein demonstrate that even the choices of which topics data is collected on, analyzed and communicated rest on power relations in terms of whose voices and interests are represented and whose are marginalized.

Partly inspired by the above-mentioned research, we recently organized a working group at the Annual Finnish Sociology Conference, The Shifting Divides of Our Digital Lives, to discuss old and new forms of inequalities, the reactions they provoke, and their societal consequences. To guide our presenters, we posed some additional questions: What hinders or facilitates equal participation in the digital society? How are social institutions adapting to digital change? What forms of civic engagement and activism arise given digital society’s asymmetries?

Here we summarize selected findings of presentations that provided insights into how digital technologies and the use of data analytics shape our differential opportunities for social participation even when we, as citizens, might not be fully aware of it.

In her presentation Contested technology: Behavior-based insurance in critical data studies, Maiju Tanninen (University of Tampere) pointed out the many concerns that data studies literature has identified in connection to the use of self-tracking technologies in personalized insurance. These include the possibility of data-based discrimination, heightened surveillance, and control of clients’ behavior. However, Tanninen argued that while these critiques paint a rather dystopian picture of the field, they are largely focused on the US context, they fail to differentiate between insurance types, and are often lacking in empirical engagement. In practice, the use of self-tracking devices for the development of personalized insurance often looks doubtful, among other reasons due to the poor quality of data. Tanninen pointed out that for critical research on the topic to be constructive, and to better understand the benefits of these technologies and offer new insights, we need empirically grounded research in the European and more specifically Finnish contexts.

In his presentation Ageing migrants’ use of digitalised public services: Ethnographic study, Nuriiar Safarov (University of Helsinki) emphasized the need for an intersectional perspective in studying the access to and use of e-services among different groups of migrants. In his doctoral project, Safarov examines the impact of the digitalization of public services in Finland on older Russian-speaking migrants who permanently live in Finland. Safarov pointed out that this specific group of migrants may face particular barriers to accessing e-services not only because of their age, but also because of a lack of language skills and social networks. Empirical work on such groups can, in turn, offer insight into the interplay of digital-specific and more ‘traditional’ social divides.

In her presentation Facebook Groups interaction affecting access to nature, Annamari Martinviita (University of Oulu) compared a popular Finnish Facebook group on the topic of national parks with the official information website of Metsähallitus. Martinviita demonstrated that while both platforms may aim to be inclusive when they advertise access to and exploration of nature, in practice they may produce various divides by presenting and constructing ‘correct’ ways of visiting national parks.

In their presentation Political orientation, political values and digital divides – How does political orientation associate with the political use of social media?, Ilkka Koiranen and colleagues (University of Turku) demonstrated that while social media provides new ways for political participation, there are significant differences between political parties in how their supporters use social media for political purposes. The research was based on a nationally representative survey dataset. The results showed that newer political movements with younger and more educated supporters representing post-material values are more successful on social media, echoing previous findings in digital divides research.

In his presentation How data activism allies with firms to seek equal participation in the digital society, Tuukka Lehtiniemi (University of Helsinki) discussed the case of MyData, a data activism initiative aiming to enhance citizens’ agency by providing them with the means to control the use of their personal data, in an attempt to address injustices related to equal societal participation. Various interest parties are involved in MyData, including technology-producing firms that seek market and policy support for their products. Lehtiniemi argued that particular ways of framing MyData’s objectives are employed to support this involvement. While it is important to develop alternative imaginaries for the data economy, a central question remains to be resolved: how to move from abstract concepts such as citizen centricity and data agency to actual alternatives that challenge dominant imaginaries of data’s value.

These presentations highlight that the promises of equal participation so often associated with digital technologies and the use of data analytics are often challenging to realize in practice. If approached without care, these technologies may reproduce and extend existing patterns of bias, injustice or discrimination.

Thus, it is important to keep in mind that as digital technologies and data analytics are forged by humans in specific societal settings and power relations, these technologies carry traces of the societal conditions in which they are conceived and manufactured. Consequently, it is salient to explore what kinds of potentially biased assumptions are embedded in the technologies used so extensively in today’s society. This is why we think it is urgent to advance critical approaches and support collective citizen actions to create and implement technologies and data analytics that improve opportunities for all.

At the same time, as some of the presentations in the working group also indicated, criticism by itself may not lead to constructive input into the development and usage of digital technologies. We should therefore not only point out the ways in which digital technologies and data analytics, their current usage, and their potential future trajectories can bring up or exacerbate societal problems. In addition, we should engage in conceptual and empirical research that can help identify preferable alternatives and steer technological developments toward societally more desirable and sustainable ones.

By: Marta Choroszewicz, Marja Alastalo and Tuukka Lehtiniemi

Choroszewicz is a Postdoc Researcher at University of Eastern Finland, Alastalo is a University Lecturer at University of Eastern Finland, and Lehtiniemi is a Doctoral Candidate at University of Helsinki.

– –


Andrejevic, M (2014) The big data divide. International Journal of Communication, 8: 1673–1689.

D’Ignazio, C and Klein, L (2019) Data Feminism. MIT Press Open. Available at:

Halford, S and Savage, M (2010) Reconceptualising digital social inequality. Information, Communication and Society 13(7): 937–955.

Kranzberg, M (1986) Technology and history: “Kranzberg’s laws”. Technology and Culture, 27(3): 544–560.

Van Dijk, JAGM (2013) A theory of the digital divide. In: Ragnedda, M and Muschert, GW (eds.) The digital divide: The Internet and social inequality in international perspective. Routledge, 36–51.

A critical researcher’s uncritical manifesto: We should fall in love with the Internet again

Video art by Taru N Hohtonen, presented at the club.

This is a blog post version of Salla-Maaria Laaksonen’s festive speech at the WorldWideWeb 30-year anniversary party at Lavaklubi, Helsinki, March 12th 2019.

Dear friends of the World Wide Web,

Like all of us here today, I definitely would not be here if it wasn’t for the world wide web.

I am a social media researcher, and I am here representing Rajapinta, a researcher collective that focuses on Internet research. However, my own story with the Internet goes far beyond my research career. It is a love story that started over 20 years ago, somewhere between IRC and Java-based online chats. The world wide web and I grew up together, from IRC to KissFM chats, from Jaiku to Twitter.

I believe this is a common story for many. A couple of years ago I attended a professional workshop on influencing. The consultant asked us to write down the name of the biggest influencer of our life. Four out of seven participants, independently, wrote down “the Internet”. I bet many of you would as well.

Indeed, the Internet has influenced our lives in many ways. It has changed the way we communicate, how we shop and how we read news. It has even changed the way we die, or at least how we memorialize those who have passed.

Yet, if you follow the current discussions of the Internet or read the news that concern social media, it becomes difficult to find these narratives of the technology that so profoundly changed our lives.

Instead, we talk about hate speech and cyberbullying, we talk about influencing elections, about misusing personal data, and about technology addiction. We hear politicians talk about ‘nettiväki’, the ‘social media folk’, to refer to online users who emotionally herd from one topic to another, who need to be civilized and controlled. Behind these claims there is often the idea that the technology is somehow making us humans do these things.

But I think there is so much more to the web than these alarmist notions. The web is also a marvelous place, where many forms of culture and communication live side by side.

And it is precisely this that makes the web interesting for a researcher.

For a researcher the Internet is a bird-watching tower to climb into and see what is happening in the world, or sometimes a small campfire for storytelling.

In my own studies, I have climbed that bird-watching tower and sat by that campfire to study political discussions, online protests, social media influencers and social media stirs.

In all of these, I see genuine conversations, I see learning, I see peer support, and I see real political debates.

This brings me to my title and my manifesto:

For a researcher, the world wide web is a sociotechnical system, constituted by both humans and technology.

This means the web is a technology that affords and limits what its users can do, but it is also constantly shaped by us users; it has to adapt to the practices we invent on top of it. So it’s not that the web dictates what we do with it: we have the power to use it for our own purposes.

That is why we can also shape the web and make it something we want it to be.

We can keep alive the anonymous peer support from the 90s forums.

We can support the flat communication arenas, where a citizen can go and talk to a politician.

We can build the tower of Babel, where people speaking different languages around the globe can exchange ideas.

What I’m describing here sounds like the lost Internet imaginary of the 90s, but it is still alive somewhere out there. I see it on my researcher’s table, and I want to bring it back into public communication as well.

So, I will end with a call for us all: to celebrate these 30 years, let’s cherish the best parts of online communication and make sure our actions build and rebuild the web that exists in those early utopias. It is up to us to shape the web.

Happy birthday, dear WWW! We will take care of you!

How Facebook predicts your politics: Aesthetics and glossy die cut prints

Facebook discloses to advertisers some information about the various groups they may target, including those groups’ most distinctive Facebook page likes. In this essay, I compare seven Finnish political parties based on the affinities of their supporters. These collective caricatures reveal something about Finnish politics and society, but also about how Facebook’s profiling operations function. They point to the significance of aesthetics, and to a surprising legacy of glossy die cut prints from the 19th century.

Under the title “Fashion Models and Cyber Warfare”, Cambridge Analytica’s whistleblower Christopher Wylie gave a presentation in November 2018 at BoF Voices, a fashion industry gathering in Oxfordshire. In describing the methods he had used to support Trump’s election in 2016, Wylie had to strike a delicate balance between condemning his previous employers and celebrating the power the AI methods he wielded could supposedly have. His message to an audience of fashion devotees was particularly flattering: you are not just working in any industry, but in one that has changed the course of history.

Christopher Wylie demonstrating the links between the Big Five personality traits and fashion brands. Source: Christopher Wylie | Fashion Models and Cyber Warfare | #BoFVOICES 2018, YouTube.

The company, which supported American conservative causes (in addition to elections in India, Kenya and Mexico), infamously built personality profiles of people based on the likes and other details of their Facebook profiles. Though Cambridge Analytica once proudly claimed that it had thousands of data points for every American voter, Wylie emphasised in his talk that the psychological profiling derived its power from a focus on personal style or aesthetic: “When you look at personality traits, music and fashion are the most informative for predicting someone’s personality.” An affinity with particular clothing labels was a signal used to identify individuals susceptible to Trump’s brand of populism, in what Wylie called the “weaponisation of fashion brands”.

Perhaps unsurprisingly for someone who had dropped out of a fashion research PhD, and who would soon after the talk be hired to do AI for the Swedish fast fashion label H&M, Wylie was keen to draw out the links between fashion and politics. He argued that fashion has a constitutive function for movements: Maoists and skinheads alike created their own distinctive aesthetics before they could become groups that people identified with. That is also why the likes of Cambridge Analytica did not just focus on spreading propaganda but created styles. The company used stolen data to design new aesthetics that its target groups were susceptible to. Fashion choices are a means of distinguishing and identifying individual people, but also a modus operandi for exerting influence over them.

Wylie stopped short of saying exactly which fashion choices characterised the potential Trump supporters they targeted, though he gave the example of Wrangler fans scoring low on both openness and conscientiousness.

The aesthetics typical of the European counterparts of Trump’s supporters are quite different, judging by the information that Facebook gives out about its own profiling operations. For this blog post, I studied specifically the groups that Facebook has categorised as supporters of various Finnish parties.

One Finnish Facebook page, KuruKukka (Valley Flower), was created by a self-published author living in Ranua, Finnish Lapland, a municipality above the Arctic Circle with fewer than 4,000 people. The page supplies a steady flow of images, several a day, with a consistent visual pattern: floral arrangements and butterflies, in montages with angels and children. The images are superimposed with text carrying a message specific to that day: Have a happy Tuesday! I hope your weekend is starting well! The 33,000 followers of KuruKukka give a typical image on the page a couple of hundred shares.

One of the images produced by KuruKukka in 2019 (Source: Kurukukka, copyright theirs).

Kurukukka is one of the pages on Facebook that, according to the platform, is most strongly associated with the anti-immigration conservative Finns Party. At election time in Finland, millions of euros are spent on political advertising, and measures of party interest are the most typical way in which these advertisements are targeted. The clicks that land on the images arranged by the Lappish artist play a small and largely unknown part in directing the flow of millions of political messages. They are part of the process in which Facebook abstracts socially meaningful categories from ordinary human interaction (interests, hopes, political positions) and leverages them for competitive advantage in its advertising products.

The list that Facebook provides of the party’s supporters’ affiliations includes a wide range of things, from consumer brands to Facebook pages sharing funny pictures. The list of The Finns’ affiliations includes another Facebook page with a similar visual style, “happines” (sic), resembling KuruKukka except for its focus on animals and small children.

Image from the happines group in 2019 (Source: happines, copyright theirs).

If there is anything like an aesthetic particular to supporters of The Finns, it could be this folksy style, a world of innocence where everything glimmers. Thinking of the cultural associations of this style from a Finnish point of view, one is reminded of the visual genre of "kiiltokuvat" (literally "shining pictures"). Such images were among the first colour prints in the 19th century and were often die cut along the figures' outlines. Scrapbooks called "muistokirja" were passed around to friends and family, who were encouraged to attach both the glossy cutout pictures and little passages of text. The "memory book" was like a social networking technology of its own time, and the "shining picture" its aesthetic. A similar style can still be seen in postcards.

This association of aesthetic and political attitudes was not discovered by a data scientist, but was rather part of the routine functioning of Facebook's profiling system. Every user profile on the site has a list of interests attached to it, attributed automatically based on a set of factors that are largely unknown, yet likely include at the very least one's Facebook "likes" as well as browsing behaviour. Cambridge Analytica might have been in the limelight, but profiling is of course the bread and butter of the advertising industry. Especially when it comes to politics, the categorisation of people and the selling of their attention enables what Turkish sociologist Zeynep Tufekci calls "computational politics", turning "political communication into an increasingly personalized, private transaction and thus fundamentally reshap[ing] the public sphere".

Facebook users are encouraged to view the way the platform classifies them, or at least the accessible parts of their profiles. These individual categorisations can be scrutinised, though with no certainty whether the same biases or omissions apply to other users. So far, little attention seems to have been given to the way in which Facebook defines groups and the representations it creates of collectives of its users. What do the supporters of various political positions look like, according to Facebook? What kind of knowledge flows into this profiling? And what is the power of these representations of groups? Some answers, at least speculative ones, can be attempted with the information that Facebook discloses to advertisers.

Comparing supporters of Finnish political parties

To understand how Facebook profiles and predicts people's political positions, I compared the people that Facebook classifies as having an interest in different Finnish political parties. Since Finland holds both general and European parliament elections in 2019, this comparison is topical.


A list of pages that are most distinctive for Facebook users with an interest in 7 Finnish political parties. Click to browse the full list.

Using some freely available and simple tools, I made lists of the most distinctive connections that Facebook reports for supporters of each party. In the embedding below, I display these lists of pages next to each other so that they can be compared across parties. This presentation makes it possible for readers to browse through a large amount of information in an interactive manner, especially if they are reading this on a desktop.

There are 10 parties currently represented in the Finnish parliament, of which I could represent 7 in this list. Every column represents the affinities of one party's supporters, with the most distinctive ones listed on top. The parties are ordered from left to right following the classical representation of the political spectrum.

What exactly is the content of these lists? They are derived from a tool that Facebook provides to advertisers called Audience Insights. It allows advertisers to define potential audiences, such as people of a certain age who live in a certain city and have an interest in a particular topic. Interests comprise over 200,000 different labels (including most of the world's political parties) that Facebook attaches to individual profiles.

For every audience, Facebook reveals demographic and economic details. The platform also discloses the Facebook page likes that are most distinctive for a particular audience, i.e. the likes of various brands, organisations or Facebook communities that are much more common among the defined group than among the public more widely. The magnitude of this difference, between a particular group and other Facebook users, is called an Affinity score (also displayed in Figure 1, the list of pages displayed above).
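Facebook does not publish the exact formula behind these scores, but the description above amounts to a simple ratio. As a rough illustration (my own sketch with made-up numbers, not Facebook's actual computation), such an affinity-style score could be derived from plain counts:

```python
# Sketch of an affinity-style score: how much more common a page like is
# within a target audience than among the platform's users overall.
# Illustrative reconstruction only, not Facebook's actual formula.

def affinity_score(likes_in_audience: int, audience_size: int,
                   likes_overall: int, population_size: int) -> float:
    """Ratio of a page's like-rate in the audience to its overall like-rate."""
    rate_in_audience = likes_in_audience / audience_size
    rate_overall = likes_overall / population_size
    return rate_in_audience / rate_overall

# Hypothetical numbers: a page liked by 5% of a party's audience but only
# 0.1% of users overall scores 50, i.e. fifty times more common than usual.
score = affinity_score(likes_in_audience=500, audience_size=10_000,
                       likes_overall=2_000, population_size=2_000_000)
print(round(score))  # → 50
```

Ranking pages by such a ratio would surface exactly the kind of "most distinctive" likes shown in the lists above.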

The information about page likes and demographics is intended to help companies create ads that are relevant to the various audiences they are targeting. As such, it reveals something about the characteristics of the groups in question. But it is possible to read the information in another way too: these are the characteristics Facebook uses to profile and categorise people. Even if someone has not expressed a direct interest in, for instance, a political party, a user profile with similar kinds of affinities would suggest a potential supporter of the party, a ripe target for advertisements or organic recommendations.

Differences in the political causes connected to parties

When comparing party supporters in terms of their Facebook likes, some connections felt accurate and insightful and others quite arbitrary. The first thing one might notice is the similarity between the Green League and the Left Alliance. The former is a left-of-centre party with an ecological focus, while the Left Alliance was formed through the merger of SKDL (effectively the Finnish communist party) and other left groups. Both parties are known to have similar supporters: young, predominantly female, university-educated urbanites. The Left Alliance also has strong links with trade unions and industrial workers, but this demographic didn't seem to be represented in Facebook's profiling of party supporters, perhaps simply because of less frequent Facebook use.


A list of lifestyle-related pages that are most distinctive for Facebook users with an interest in 7 Finnish political parties. Click to browse the full list.

More than other parties, the Greens and the Left Alliance are defined through connections to what could be called causes, i.e. online campaigns or non-profits working on social issues. Both sets of supporters are fans, for instance, of the League of Finnish Feminists and the umbrella organisation for development aid, Fingo.

Systematic differences between these two parties do seem to exist, especially if you take into account the relative intensity of the links. In terms of policy, the two parties differ most clearly on economic questions, and a similar difference can be read in the description of their supporters too. The Greens are particularly strongly connected to causes related to veganism and companies that specialise in vegan products. NGOs and movements related to LGBTQ issues also stand out. The Left Alliance is connected to causes related to corporate power, such as opposition to the TTIP trade agreement, some of which don't register with the Greens. The connection to anti-racist groups also seems relatively stronger, including the page Loldiers of Odin, which parodies the growing anti-immigration street patrol activities under the name of Soldiers of Odin.

The Finns, the conservative party opposing both immigration and the European Union, are unsurprisingly connected to the Suomi Ensin ("Finland First") page, a channel for ethnonationalist propaganda. A number of other causes do show up on their list, including a movement advocating the removal of restrictions on the sale of alcohol and a page on the topic of protecting welfare benefits. Keskusta, the formerly agrarian Centre party, is predictably connected with farmers' and entrepreneurs' interest groups, but also war veterans' and reservists' associations. The other parties are more connected to causes that could be described as purely charitable, such as Unicef and charities working in health care or social work.

Links between lifestyles and political parties

Differentiating between political causes associated with the left or the right might not yet be particularly insightful. The algorithmically generated profiles of party supporters, however, also heavily feature page likes that are more about lifestyle than politics. I have displayed another list of page likes below, focusing specifically on lifestyle-related pages (such as food, music and consumer products).

Supporters of the Finnish Social Democratic Party (SDP), in particular, are identified almost exclusively through a pattern of connections to brands. This might in small part be the result of a glitch: the most important affiliation for the SDP is Halonen, a Finnish textile business. Incidentally, this company has a namesake in the first female Finnish president, Tarja Halonen, whose legacy has likely been a frequent topic of discussion in the party's Facebook conversations. One can only speculate, but this might have been enough to confuse the platform's algorithms.

Figure 1. A list of lifestyle-related pages and affinity scores most distinctive for Facebook users with an interest in 7 Finnish political parties. Source: Facebook Audience Insights, March 2019

In other respects, Finnish social democrats seem distinctively keen on various family-run clothing businesses and domestic items, such as cleaning products and medicine. With prominent links to publishers, the party of ageing female voters appears to consist of avid readers.

The Finns party attempts to position itself as the party of motorists, possibly because it knows its supporters. The Finns' supporters frequently like the Facebook pages of gas stations, in addition to domestic meat producers and pages dedicated to discussing the Finnish lottery system. Supporters of the pro-business conservative party Kokoomus also seem to have a keen interest in America, as one of the most strongly affiliated pages is a blog that gives travel tips for New York.

Music does not feature heavily in the page likes but provides some interesting details. For the agrarian Keskusta, a pop musician dedicated to the Finnish version of the schlager genre (iskelmä) stands out, while for The Finns it's the band of ageing punks, Klamydia. The woke hip-hop artist Paleface is unsurprisingly associated with the left, while the biggest pop music favourites of the baby boomer generation perhaps unexpectedly connect with the conservative Kokoomus. The Centre party has particularly strong links to professional athletes, particularly skiers.

Many pages that make the list really only exist in the world of Facebook and are dedicated to the sharing of digital content. The Finns in particular are widely connected to sites dedicated to sharing jokes (in my estimation, of a laddish or raunchy variety).

The glimmering images of Kurukukka and happines that I have already described also feature on the lists. I will return to them at the end of the text, after some thoughts on the nature of this information.

Statistical knowledge and "the space of life-styles"

Popular discourse about the power of social media platforms and "big data" analysis often describes something like a revolution in the amount of information that is collected. This change in scale is enabled by attending to the traces left by human behaviour on digital systems, as opposed to information collected specifically for research, for instance through surveys. Making use of the data collected by platforms such as Facebook, however, implies not only a change in the quantity of information we can go through, but also in the types of knowledge we can produce, a change that is also partly an impoverishment of our understanding.

"The space of social positions", from Pierre Bourdieu's 1979 book Distinction, placing lifestyles on coordinates of economic and cultural capital

One helpful point of reference for charting this change is Pierre Bourdieu, a canonical figure in sociology, in particular on the theme of aesthetics and politics. Bourdieu's impressive charts describing the regularities and patterns of taste and consumption in 1960s French society are perhaps something like big data, to put it fancifully, "avant la lettre".

What I find joyful about Bourdieu's books are the graphs and visualisations, produced from large-scale survey data. These often map out habits or practices in two dimensions, as in the graph called "the space of social positions" displayed above. The axes are derived from Bourdieu's overarching theory, describing how access to different forms of capital (economic, cultural, social) gives rise to a single structure that shapes numerous domains of human life: not only our professional activity but equally what we eat, the films we watch, and which make of French car we choose to buy. Reading the admittedly obscure picture above, one can discern that cycling is typical of those bestowed with cultural capital and no money, and that country walks, camping and swimming appear progressively as one looks towards the less acculturated.

This is a picture of French society, but it also indirectly says something about the type of knowledge the sociologist is producing. The space lays out a picture that is in some sense complete, attempting to describe the French nation state as a whole and displaying the entire spectrum of social positions within it. The various classes, the theoretical categories chosen by Bourdieu, all have their particular places within it.

Even a biased picture makes a difference

The connection between psychological traits and fashion brands (Source: Christopher Wylie, "Fashion Models and Cyber Warfare", #BoFVOICES 2018, YouTube)

A direct comparison of Cambridge Analytica to French sociology would of course be inappropriate, yet in some ways the differences in their approach are telling. In the analysis Christopher Wylie described in his presentation, the link between aesthetics and politics is made through individual personality. Fashion is an indicator of psychological types, and these psychological types allow political campaigns to customise their message as well as identify the people that are most susceptible to it. There is, however, no attempt to understand voters as parts of some larger and stable social groupings, be they social classes, demographic groups or statistical definitions such as the unemployed.

This may be because of the project's aims: Zeynep Tufekci does highlight individualised targeting, over reasoning about aggregates, as the holy grail of computational political campaigns. The difference also has to do with the way social groupings are represented in the source material, the vast troves of latent trace data from digital platforms.

In survey work and statistics more widely, the definition of groups and demographic categories is typically the starting point. The ability of surveys to generalise over and describe entire populations based on sampling is one of their central features. In contrast, the interpretation of the traces left by users of social media platforms has no single scale of analysis or fixed categories. The analysis can often aggregate information about the identities that people ascribe to themselves, for instance through the creation and joining of shared pages or hashtags.

Even the party audiences that I have briefly studied in this post are groups that were algorithmically identified after Facebook users themselves found them significant. The comparison could just as well have focused on the hundreds of thousands of other such interests. It is important to keep in mind how these categories have been derived when reading the lists of page likes I describe above. There is significant overlap between the separate groups of party supporters identified by Facebook, and they don't behave like well-defined, mutually exclusive statistical categories. Even though the parties are displayed from left to right, somewhat like Bourdieu's mappings, what we are seeing is not a representation whose parts together make up the entirety of a political spectrum.

This fluidity of groupings and identities could of course be representative of our era: allegiances to traditional political mediators, including parties themselves, are on the wane. As Will Davies points out in an essay on the growing power of "big data" at the expense of statistics, "this is a form of aggregation suitable to a more fluid political age, in which not everything can be reliably referred back to some Enlightenment ideal of the nation state as guardian of the public interest."

Social media platforms reflect the proliferation of new groupings, but they are also actively producing it. For all its faults, the one undeniable upside of social media is how quick and easy it is to discover like-minded people, form some kind of loose identity and share thoughts. The fact that social media platforms facilitate this, however, points to another fundamental problem with the data collected from them. You can't think of this technological layer as representing social phenomena when the likes of Facebook are actively shaping them.

Imagine, for instance, the choice of whether or not to declare your affiliation with a particular party, through something as simple as liking its Facebook page. Whether people undertake this action will be greatly influenced by whether the platform encourages the choice by recommending the page. Even then, a person's choice may well be skewed by the details Facebook chooses to display, such as the current count of likes or whether one's own acquaintances have shown their support. Social media provides an effective system of social information, telling you what other people around you are doing and opening up the possibility of feedback loops and viral phenomena.

The groups of party supporters I have briefly described could hence be the result of a complicated interaction between digital devices and genuine human dispositions, perhaps arriving at their current composition through some complicated process in which early group formation and unexpected events have determined who joins later. It may be impossible to pry apart what could be labelled the "digital bias" of the platform itself from some kind of ground truth about people. In some ways such a distinction might not even be desirable: after all, the collective representations that Facebook creates have a certain reality of their own. They have real-world consequences, shaping the way recommendations, ad money and attention are channelled. Perhaps they work like a self-fulfilling prophecy, amplifying and making real the platform's caricatures of political groups.

Glossy Santas wishing you a merry Christmas, until the cursing begins (Source: KAIKKI MENI)

Aesthetics is not just about your designer clothes

When clicking through the pages associated with political identities, I came across the page of the Finnish comic artist Kaikki Meni, with a following of about 20,000 people. This artist's work was quite strongly associated with the Left Alliance, the opposite end of the political spectrum from The Finns that I described at the beginning of this blog post.

What made me curious was that there was a certain inversion of the aesthetic approach as well. While KuruKukka felt like a fairly direct reuse of the imagery of old postcards and glossy cutout prints, Kaikki Meni uses the twee aesthetic for comical effect. The kitschy style sets up an expectation of harmony, which is broken through absurd statements and cursing. A similar usage of Victorian-era imagery appears in other Internet comics, such as Wondermark: a conflict between form and content, and a parody of the world of kitsch.

It might be possible to read this opposition following Bourdieu's theory, claiming that a privileged position with cultural capital enables a reflexive and playful approach to different aesthetic systems. I would, however, like to make a different point, referring more to the historical role that these types of images (the glossy cutout pictures) have played in previous systems of communication, and the potential continuity in ways of using Facebook.

Another reuse of a 19th century image from KAIKKI MENI

When people share images from Kurukukka, their activity is in some ways comparable to sending postcards. The aesthetics of the images are the same. The form of communication is similar: the messages on the images celebrate particular occasions or the passing of time. For the people participating in the sharing of these images, including the conservative supporters, using Facebook replicates, or is in continuity with, earlier ways of communicating.

This observation could provide a key to reading the differences between the groups of party supporters described above. It seems to me that the differences between particular political groups are not so much about preferences within one domain of life (for instance differences in what clothing, music or food they like). Rather, what distinguishes the groups is which domains are associated with their group profiles to begin with: one party's supporter profile is dominated by company brands, another's by sports, and yet another's by pages focused on humour.

Another way of describing this is that there are significant and systematic differences in the way that Facebook is used. For some it's a source of entertainment, others want to show support for causes, while for many it is about keeping in touch with their closest friends, possibly utilising the imagery of pages like KuruKukka in the process. There is a group of people for whom the medium remains an extension of older mediums and ways of communicating, while there is also playful experimentation that pushes the boundaries of what can be done on the platform.

The signal that Facebook picks up most clearly in its automated profiling may be related to these patterns of media usage, rather than tracing, for instance, people's consumer choices or support for social causes. Facebook is not only a medium that extracts data on people's preferences: what the system is measuring is what kind of medium Facebook is for people.

When Christopher Wylie talked about the potential of fashion and aesthetics for the profiling of people, he was not only talking about the dangers of artificial intelligence but also making an argument in line with a consumerist ideology with a long history, claiming that our consumer preferences and behaviour in the market reveal something fundamental about people. This in itself may not be Wylie's greatest crime, but it's curious to see that Facebook's profiling algorithms themselves have picked up on a wider sense of aesthetics, one which extends to the more mundane world of meme pages and montage postcards.

– –
Aleksi Knuutila is an anthropologist by training and runs a research consultancy that explores new methods for studying political culture. He works to crowdsource political advertisements and scrutinise political communications during the 2019 elections. He tweets at @knuutila.


Should algorithms save us from uncertainty?
Image (cc) Belgapixel @Flickr

In recent years there has been much talk about the power of algorithms, but the discussions feature many different perspectives on what that power actually is. On the one hand, there has been debate about the capacity of algorithms to constrain and shape the possibilities of human action, for instance by classifying people and directing flows of information [1,2,3]. On the other hand, attention has been paid to the role that conceptions of and expectations about algorithms play in guiding action [4]. In this post, we consider one possible reason why algorithms gain power in the first place.

In his book The Bureaucratic Phenomenon [5], Michel Crozier examines how, in bureaucratic organisations, power accrues to those who are able to manage the uncertainty involved in the organisation's operations. He writes, for example, about factory maintenance staff as a group that accumulated power because they were able to reduce the uncertainty associated with the production machinery.

Maintaining the production machinery was central to the operation of the factories, and the maintenance staff formed an expert group that alone possessed the skills the work required. This expertise gave the maintenance staff a strategic advantage over the factory's other personnel groups. Despite its bureaucratic structure, the organisation was unable to control the informal interaction between personnel groups. As a result, managing the uncertainty related to machine breakdowns created power for the maintenance staff, which they used when negotiating for their group's benefits.

In Crozier's analysis, the central aim of bureaucratic organisations is to control the sources of uncertainty involved in their operations. Uncertainty in an organisation's operations creates uncontrolled power, which makes the functioning of the bureaucratic system inefficient.

One goal of the quantification associated with bureaucratic systems is to distance their operation from subjective human judgment [6]. The same phenomenon is visible in the use of various algorithmic applications. Algorithms are hoped not only to eliminate sources of uncertainty, but also to improve efficiency. Often the hope is that problems related to the subjectivity or other weaknesses of human decision-making can be solved with new technological applications based on data-driven analytics [7,8]. This control of uncertainty is visible in cases where the use of algorithms is justified by their systematicity or consistency, as in expectations concerning the efficiency and predictive power of algorithmic analytics [9]. Indeed, improved prediction and increased efficiency have been suggested to be the key expectations driving contemporary analytics [10]. One practical example is self-driving cars, which are hoped to be safer than human-driven ones [e.g. 11]. Personalised health care, in turn, is hoped to offer individuals better ways of managing their health [12]. The use of artificial intelligence in corporate recruitment processes is also becoming more common; automated recruitment is justified by appeals to efficiency and the consistency of algorithmic assessment [e.g. 13].

In his essay Where the Action Is [14], Erving Goffman discusses fatefulness. He links the concept to decisions that are both problematic and consequential. Purely problematic decisions are ones where the right choice is unclear, but the decision matters little for one's wider life; choosing what to watch on television is an example. The decision to leave for work every morning, by contrast, is a consequential decision where the right choice is clear: staying home could have harmful consequences, so there are clear grounds for choosing to go to work. Fateful decisions are those where there are no clear grounds for the choice, but making it has far-reaching consequences. According to Goffman, we try to organise our everyday lives so that our decisions are not usually fateful.

The same reduction of fatefulness is present in the hopes we place on algorithms. We want them to help us in situations where the right decision is unclear. Yet we cannot escape fatefulness entirely. Decisions can always have unforeseen consequences. Because we are always present as our own physical selves, our bodies can, for instance, always be injured in unexpected situations. All existence involves risk.

The idea of eliminating fatefulness connects with Crozier's analysis of bureaucracy. Bureaucratic systems develop precisely in conditions where the uncertainty involved in action is being eliminated. Paradoxically, the very method used to eliminate uncertainty, a strict formal set of rules guiding action, leads to the concentration of power in those parts of the organisation where uncertainty cannot be rooted out. Likewise, eliminating fatefulness with algorithms can lead to power operating precisely through the technologies with which uncertainty is being managed. From this perspective, one reason algorithms gain power is the attempt to control uncertainty that can never be fully mastered. In algorithmic systems, power operates through algorithms but arises as part of the wider context of human activity. Algorithmic power could thus perhaps be studied by asking: what kinds of uncertainty is the use of algorithms meant to manage, and what possibly remains unmanaged?

If someone promises to help us always make the right decision in an uncertain world, it is no wonder we listen. It is worth noting, however, that power accrues to the helpers at the same time.

Text: Jesse Haapoja & Juho Pääkkönen

– –
Thanks to Salla-Maaria Laaksonen, Airi Lampinen and Matti Nelimarkka for their comments. This text was written as part of the project Algorithmic Systems, Power, and Interaction, funded by the Kone Foundation.

Eettinen teko√§ly toteutuu punnituissa k√§yt√§nn√∂iss√§

Tekoälyä kuvataan maiden tai maanosien välisenä kilpajuoksuna, jonka ennakkosuosikkeina ovat USA ja Kiina, sekä haastajana EU. Asetelma näkyy EU-maissa tekoälystrategioina, ohjelmina ja rahoitusinstrumentteina.

Valtioneuvoston tuoreen eettistä tietopolitiikkaa koskevan selonteon mukaan Suomi tavoittelee kilpailuetua eettisesti kestävällä tekoälyn kehittämisellä ja soveltamisella. Päämääränä ovat hyödyt yhteiskunnalle ja tavallisille ihmisille, esimerkkinä maailman parhaat julkiset palvelut. Eettisyyttä tavoitellaan yhteisesti sovituilla periaatteilla, joita palveluiden kehittäjät ja ihmisiä koskevien tietoaineistojen hyödyntäjät noudattavat.

Eettisesti kest√§v√§n teko√§lyn viitekehys korostaa yleisi√§ periaatteita kuten l√§pin√§kyvytt√§, ihmiskeskeisyytt√§, ymm√§rrett√§vyytt√§, syrjim√§tt√∂myytt√§ ja ihmisarvoa ‚Äď ylevi√§ p√§√§m√§√§ri√§, joiden arvoa tuskin kukaan kiist√§√§. Periaatteita edistet√§√§n vetoamalla yritysten itses√§√§telyn tarpeeseen muuttuvassa teknologiaymp√§rist√∂ss√§, jossa ajantasainen s√§√§ntely lakien tai m√§√§r√§ysten avulla on vaikeaa.

Eettiset viitekehykset ovat erityisen tärkeitä silloin, kun sääntely tai yhteiskunnalliset oikeudenmukaisuuden normit eivät auta jäsentämään toiminnan reunaehtoja. Periaatteet rajaavat toimintatapoja, jotka ilmiselvästi rikkovat ihmisten itsemääräämisoikeutta tai tuottavat epäterveitä käytäntöjä arkeen ja työelämään. Yleisten periaatteiden ongelma voi kuitenkin piillä niiden tulkinnallisessa avoimuudessa. Se mikä on yhdelle yritykselle vastuullisuutta tai syrjimättömyyttä, ei välttämättä ole sitä toiselle.

Olemme seuranneet vuosien ajan eettisen tietopolitiikan vahvuudeksi tunnistetun MyData-ajattelun kehittymistä Suomessa ja kansainvälisesti. MyDatan, tai omadatan, perusajatuksen mukaan kansalaisten tulee saada hallita itseään koskevien tietojen käyttöä yrityksissä ja julkisella sektorilla. MyDatassa yksilöä ajatellaan digitaalisen talouden keskuksena ja datavirtojen keskipisteenä. Tavoitteena on haastaa henkilökohtaisten tietojen taloudellisen hyödyntämisen epätasa-arvoisuus siirtämällä kontrolli yrityksiltä ihmisille, joista aineistoja kerätään.

MyDatan edistäjät ovat tehokkaasti osoittaneet ihmiskeskeisyyden tarpeellisuuden datatalouden rakenteissa. Samalla ihmiskeskeisyyttä kuitenkin tulkitaan varsin joustavasti. Se voi tarkoittaa kansalaiselle tasavertaista osallistumista digitaaliseen yhteiskuntaan, yritykselle taas väylää päästä yksilön kautta käsiksi datajättien hallussa oleviin aineistoihin.

What for one actor means protecting everyone's digital rights can for another mean the opportunity to offer privacy-preserving services to those able to pay. Human-centricity becomes a kind of inkblot in which each actor sees, from its own vantage point, features worth promoting.

General ethical principles thus do not guarantee that the desired societal outcomes are realized. Rather, staying at a general level produces vague talk and noncommittal answers. That is why ethical principles must be made concrete and tested in practice. To support practitioners' decisions, we need detailed examples of services in which ethical principles are realized. Inspiration for ethics can also be drawn from digital services that produce a common good, such as Wikipedia, or from companies operating on a cooperative basis.

The ethical principles of personal data use are realized when access to data is considered carefully and when it is determined, at the same time, who can benefit from the use of the data and how. The rules governing decision-making about data use are key. There is actually nothing new in this. Although technology develops rapidly, the limits and possibilities of using personal data have been debated for decades.

It must be decided what kind of data can be collected or used, for what purposes and by whom, where the boundaries between the acceptable and the avoidable lie, and who can influence them and on what time scale. The answers do not spring from general principles, nor are they universally valid. What is acceptable in, say, smart traffic services may be ethically questionable in the health domain.

The great challenge of our time is the steering and governance of the digital environment. Rather than a race among technology developers, this is a matter of carefully reconciling different perspectives and practices. Competitive advantage should be sought from realizing ethical goals at the intersections of expertise from different fields. Where the AI racers see a finish line ahead of them, ethical sustainability is found rather by inventively combining the old and the new.

– –
Tuukka Lehtiniemi (@tlehtiniemi) & Minna Ruckenstein (@minruc).
The authors are researchers at the Centre for Consumer Society Research, University of Helsinki.

This post is cross-published

The Cambridge Analytica leak got the general public interested in a problem that critical technology research has been talking about for years

Screenshot from Twitter #deletefacebook

Cambridge Analytica's misuse of Facebook data in the 2016 US presidential election has raised a wave of criticism against the tech giants. Why is the outcry happening only now, even though critics and researchers have been writing about the topic for years?

The Cambridge Analytica scandal reached new dimensions on Saturday, when Christopher Wylie, a former employee of the analytics company, disclosed the company's data practices to The Guardian and The New York Times. The news media have reported widely on the revelation, and follow-up pieces have, among other things, offered advice on adjusting one's Facebook settings. On Twitter, the hashtag #deletefacebook is spreading, with people urging one another to delete their Facebook accounts entirely. Facebook's share price fell, which signals the risks perceived by investors and thereby, fairly strongly, the scale of the scandal. Perhaps the most important consequence, however, is that politicians in both Europe and the United States have woken up to demand accountability from Facebook. Neither the NSA scandal nor earlier reports of Russian-targeted election ads were enough for that. Why did the critical reaction grow so large now?

First, it is worth keeping in mind that despite the use of the term "breach", this is not a data leak in the sense that the data was originally collected within the terms of use of Facebook's API. The data was only later handed over to third parties against the rules. The original claim that the application collected data only for research purposes had also been violated by that point.

Until 2014, then, equivalent data was downloadable by anyone using Facebook's API, as long as they obtained Facebook's approval for their application and managed to entice people to use it. In 2014 Facebook greatly restricted the amount of information available through the API, but older datasets are probably still sitting on hard drives.

In a way, however, the debate over whether this was technically a data breach, and at what point the rules were broken, is a side plot. The collection of users' data was based on consent given by the users, which ought to be considered and based on knowledge (informed consent). This involves several problems, for example whether it is even theoretically possible to be aware of future uses of the data, and the fact that the data in practice also concerns people other than the person who gave consent. How many of those who installed Cambridge Analytica's app stopped to think about what their own or their Facebook friends' information would be used for? How many could even have anticipated the development of the technology and the future uses of data it would bring? How often does each of us end up consenting to the collection and use of data without thinking about these things?

Part of the problem ultimately also lies in the fact that privacy is thought of as something each user decides on individually. In many respects, however, privacy is also a collective matter. This case also concretizes well what that can mean in practice.

On a large scale, the promise Facebook makes to its customers, that is, advertisers, is that paying customers' messages are targeted effectively and that this influences people's behavior. The company has delivered on this promise in the run-up to US elections, among other occasions, including before the most recent presidential election. It is perhaps inevitable that a system based on such targeting is also used in ways we consider abuse, and the case now unfolding shows concretely what this can mean. It has also shown that Facebook has not been particularly interested in intervening.

The techlash phenomenon criticizing the actions of Facebook and the other so-called GAFA companies is not new: former employees of the tech giants in particular have openly criticized the companies' practices and ethics. Among others, ex-Googler Tristan Harris has warned about how the tech giants control our minds and has founded the Center for Humane Technology initiative to address technology's skewed development. Justin Rosenstein, who developed the Like button, has since criticized his invention for its addictiveness.

Researchers, too, have published critical observations about the tech giants. For example, the Dutch scholars José van Dijck and David Nieborg analyzed in an article as early as 2009 how the business logic running in the engine room of technology companies is skillfully hidden behind rhetoric emphasizing social relations and culture. Sarah Myers West writes on the same theme, describing the society produced by commercial surveillance as data capitalism.

Harvard professor emerita Shoshana Zuboff has likewise written in a critical and rather dystopian tone about surveillance-based capitalism and the future of the democratic information society, using Google as her example case (see also Zuboff's academic, admittedly somewhat laborious-to-read article on the topic). Professor Joseph Turow has written and spoken for years about the logic of media companies and targeting. He has also carried out numerous empirical analyses of how users fail to grasp the extent to which they hand over their information to technology companies, and how that information can be reused.

Beyond the more general social-theoretical perspective, researchers have also tackled the interfaces of privacy and technology. Among others, associate professor Bernhard Rieder made critical observations about the data Facebook hands over as early as 2013. In a blog post, Rieder shows that the innocent-looking "access to posts in your newsfeed" in fact means access to a large amount of content and information produced by that user's network. Jen King and colleagues also drew attention to the issue in a study published back in 2011, which itself made use of applications. Many researchers have written over the years about the social and networked nature of privacy that extends beyond the individual. A good introduction is, for example, this 2011 text by technology scholar danah boyd.

For some reason, this critical talk has not gotten through very well. Perhaps we have not been very inclined to listen to contrarians amid the startup boom and the technology hype? Perhaps only the world's most significant elections and political influence are a serious enough use case for everyone to relate to?

In any case, amid the current AI buzz we might listen to the critics and academics a little earlier this time. Professor Luciano Floridi's text Should we be afraid of AI, for example, is a good place to start.

Text: Salla-Maaria Laaksonen & Tuukka Lehtiniemi

Discussion bubbles and echo chambers: where is the dialogue of democracy online?

(cc) Amit Borade @Flickr

This blog post is cross-posted from the Ministry of Justice's #suomi100 blog.

The bubbling of societal online discussion has been a prominent concern in public debate. Has the technology that was supposed to enable all citizens to participate in societal discussion instead shut us into echo chambers, shouting with the like-minded?

The bubble discussion was opened by Eli Pariser (2011) with his book The Filter Bubble, in which he showed how users supporting different parties get entirely different results from a search engine with the same search term. The same phenomenon has caused alarm in the case of Facebook, among others. A reporter at Yleisradio created a xenophobic fake profile on Facebook and showed how, within a few months, the user became enclosed in a bubble of hate.

Behind the bubbling lies the tech giants' business logic, which aims to maximize the time a user spends in their services. The news feed does not exercise journalistic judgment; it learns from past behavior. On Facebook, thousands of different attributes determine the content of our news feeds, something most users are not even aware of. Instead, they creatively invent various social explanations for why content goes missing.
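The feedback loop at the heart of this can be sketched in a few lines. The toy ranker below is purely illustrative: the topic labels and the frequency-count scoring are invented for this example and have nothing to do with Facebook's actual attributes. It orders posts by how often the user has engaged with each topic before, so every further click narrows what rises to the top:

```python
from collections import Counter

def rank_feed(posts, engagement_history):
    """Order posts so topics the user has engaged with most come first.

    posts: list of (post_id, topic) tuples
    engagement_history: list of topics the user previously clicked on
    """
    topic_score = Counter(engagement_history)
    # Sort by past engagement with each post's topic, highest first.
    return sorted(posts, key=lambda post: topic_score[post[1]], reverse=True)

posts = [("p1", "sports"), ("p2", "politics"), ("p3", "cooking")]

# A user who has mostly clicked on politics sees politics on top...
history = ["politics", "politics", "sports"]
feed = rank_feed(posts, history)

# ...and clicking the top item again reinforces the ordering:
# the loop that produces the bubble.
history.append(feed[0][1])
```

The point of the sketch is that no editorial choice appears anywhere; the narrowing emerges purely from counting the user's own behavior.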

Bubbles, however, are not only about technology. Social psychology has long examined the formation of social identity and the significance of groups. Within a group, opinions converge, and the group begins to favor its own members at the expense of other groups. We also have a strong tendency to grow fond of stimuli we are repeatedly exposed to. When we read the same content again and again, it begins to feel normal and acceptable.

Bubbling is thus natural, but communication technology clearly has features that reinforce it. Social media makes it possible for like-minded people to end up in echo chambers sharing false claims with one another, well beyond their immediate circle of acquaintances.

Getting out of the bubble takes work. Amid the information flood, it is possible to seek out a comprehensive range of opinions and compare them. In practice, however, people do not do so; they settle for the first offerings. According to Edelman's trust survey, search engines are trusted to be impartial more than the news media.

Scaremongering about bubbles, however, carries the risk that all online discussion gets flattened into worthless shouting inside bubbles that technology drives us into. Online discussions also contain substantive political debate and raise citizens' concerns. Bubbles and algorithms do not make these any less real. Technology is not separate from society, nor does it transform society in one stroke, even though we are eager to shift responsibility onto technology.

Instead of scaremongering about algorithms and technology, we should better understand their hybrid nature: algorithms are exactly as good as we are. People's ways of acting and misconceptions are transferred into them through programming or machine learning. Search engines and news feeds churn out content they guess the searcher will like on the basis of earlier online behavior. Technology produces echo chambers because, in social life, people prefer the company of their own reference group. An AI bot learns within a day to spout racist hate speech by following other Twitter users. A recruitment algorithm discriminates against black applicants because it learns the behavioral pattern from earlier data.
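How a model inherits bias from its training data can be shown with a deliberately crude sketch; the data and the "model" below are fabricated for illustration and do not correspond to any real recruitment system. A frequency-based classifier trained on past hiring decisions that disadvantaged one group simply reproduces that disadvantage:

```python
from collections import defaultdict

def train(decisions):
    """Learn, per group, the historical rate of positive decisions.

    decisions: list of (group, hired) pairs from past data.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in decisions:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group):
    """Recommend hiring if the group's historical hire rate exceeds 50%."""
    return model[group] > 0.5

# Fabricated history: group A was hired 3 times out of 4, group B once out of 4.
past = [("A", True), ("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False), ("B", False)]

model = train(past)
# The model now recommends A and rejects B: not out of malice,
# but because the pattern was already present in the data.
```

Nothing in the code mentions discrimination; the skew comes entirely from the history it was trained on, which is the point made above.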

Instead of bubble and algorithm panic, we need not only social-psychological understanding of our own behavior but also algorithmic literacy: an understanding of how the public sphere is constructed as a socio-technological system, and how we ourselves can influence it. Being critical of content and sources matters. Information is always worth verifying from several sources, and even the search engine is not impartial. One can challenge one's own prejudices by deliberately seeking out the discussions of another social group. For that, technology offers better opportunities than print media.


Salla-Maaria Laaksonen (D.Soc.Sc.) is a researcher of communication and technology at the Centre for Communication Research CRC and the Centre for Consumer Society Research. Laaksonen has studied, among other things, corporate reputation, digital election publicity, and online organizing.

Read more:
•    Tristan Harris: How a handful of tech companies control billions of minds every day
•    TechCrunch: Ultimate Guide to the News Feed
•    Edelman 2017 Trust Barometer

Technology, democracy, and technological citizenship
Photo: Nick Harris

Yesterday the coming autumn term was celebrated in the courtyard of the Faculty of Social Sciences with a summer party for stakeholders. Together with Mika Pantzar, I gave a short dialogue talk at the event on technology, democracy, and citizenship. Here is my own point in brief.

Technology is part of social, economic, and political power structures.

Finnish social scientists are surprisingly uninterested in technology, considering how central a role it plays in our everyday lives. At the same time, a kind of neo-deterministic talk about technology currently dominates public discussion: claims about how algorithms define the whole public space, tell us whether we are sick or not, and how AI will soon attain consciousness and conquer the world. In this talk, technology is seen as an actor that appears to be beyond human control.

As researchers, experts, and citizens we ought to better understand the economic and political structures that operate behind technology. Only then can we properly relate the manner of speaking described above to reality.

From the perspective of technological citizenship, it is significant that technology, or more precisely technology companies, hold structural power that shapes our everyday lives and the structure of the public sphere. This power is wielded above all by the big American companies, le GAFA, that is, Google, Apple, Facebook, and Amazon, whose operating logic is driven by economic interests. This despite the fact that their marketing talk constantly employs cultural and social rhetoric [1].

That is why technological citizenship currently takes place in a commercial context and is tied to a market-driven logic: social media storms, Trump's tweets, and likewise the traditional media are steered mainly by the logic of the economy.

Market-driven technological citizenship consists of social media outrage and indignation, which are then measured, analyzed, and turned into news. The danger is that such chatter remains mere chatter, in which a loud minority captures the visibility (cf. Marcuse and repressive tolerance [2]). We think we are participating and doing politics, but in reality we are only creating a stir and data points for the marketer.

Perhaps a more democratic technological citizenship could mean using technology for direct contact with those in power, various participatory technology projects, and politicians and civil servants stepping out to meet people?

Recent technology research has indeed emphasized the perspective of mutual constitution (the mutual constitution of technology). The phrase means that technology and human actors shape each other, and that the meaning and uses of technology are constructed through social action.

In this sense, as citizens and consumers we inevitably also have power over what technology becomes, how it is used, and how it is understood. This power is worth using: developing new ways to use existing technology to advance democracy, maintaining awareness of the forces behind technology, and bringing a social-scientific perspective more boldly into technology development as well.

* *

[1] Van Dijck, J., & Nieborg, D. (2009). Wikinomics and its discontents: a critical analysis of Web 2.0 business manifestos. New Media & Society, 11(5), 855–874.