Nordic Perspectives on Algorithmic Systems: Notes from a Workshop on Metaphors and Concepts

The first meeting in our NOS-HS workshop series Nordic Perspectives on Algorithmic Systems: Concepts, Methods, and Interventions was organized on May 22 and 23 in Stockholm. The goal of the workshop series is to develop a Nordic approach to critical algorithm studies; the first workshop focused on metaphors and concepts that could help push debates about algorithmic systems forward. In addition, the workshop series aims to establish a Nordic network for those interested in algorithm studies.

We think it is safe to say that the first workshop took successful steps towards these goals. We had two intense days of brainstorming and exchanging ideas with 16 participants from Helsinki, Tampere, Stockholm, and Copenhagen, representing fields ranging from Human-Computer Interaction and Software Development to Sociology and Philosophy of Science.

On the first day, we heard short presentations from each participant, along with introductions to specific concepts and metaphors they consider relevant for approaching and thinking about algorithmic systems. On the basis of these discussions, we collated conceptual maps of various ways to conceive of algorithms. On the second day, we discussed in pairs the articles each participant had brought along as examples of inspiring work. Finally, we discussed optimistic/constructive and pessimistic/critical approaches to technology.

Here are selected takeaways from our discussions, including both conceptual approaches to algorithmic systems and thoughts on how to approach the debate more generally:

Control, care, and empowerment

One central issue in thinking about algorithmic systems concerns the motivations and justifications for the use of algorithms. In this respect, the distinctions between control, care, and empowerment were brought into the discussion as useful notions for elucidating the different logics of using algorithms. While the logic of control relates to algorithmic surveillance and the aim of governing or managing behavior, the aim inherent in the logic of care is to support certain forms of behavior rather than prevent others. For instance, we discussed the work of content moderators on discussion forums: moderators wish that automated methods could free them to foster and guide discussion instead of focusing on deciding what content to delete and what to allow. The important point is that these different aims encompass divergent justifications for the use of algorithms: while surveillance as control is often justified in terms of necessity and protection, the legitimacy of algorithmic care rests on the thriving and well-being of its subjects. In contrast to both, the logic of empowerment justifies the use of algorithms in terms of performance gains and efficiency, grounding its aims in ideals such as progress and development. Although empowerment aims at providing people with increased capabilities for action, its underlying principle of optimization can also become self-serving, with people turning into material in the quest to optimize shallow and cheap quantitative metrics.

Optimization and resilience

The concept of optimization was discussed in particular with reference to the work of Halpern and others [1] on smart cities. In developing algorithms with the aim of optimization, improving the system’s performance in terms of quantified metrics can become an end in itself, superseding conscious planning and deliberation in organizing action. Optimal performance carries a rhetorical force that can legitimate algorithmic management of ever more aspects of life. As Orit Halpern and others note, optimization thus serves to justify the use of notions such as “smartness” in relation to systems that seek optimal solutions to predefined problems. From the perspective of optimization, then, the crucial question to ask about algorithmic systems might not concern their performance in the technical sense, but rather the choices made when defining the system’s goals and its means of finding optimal solutions.
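
To make the point concrete, here is a toy sketch (our illustration, not an example from the workshop or from Halpern et al.) of how the value-laden part of an optimizing system sits in the definition of its objective rather than in the optimizer itself; the items and scoring functions below are invented:

```python
# Two objective functions over the same candidate items: the optimizer is
# identical, only the definition of "optimal" changes. All values are invented.
items = [
    {"name": "A", "clicks": 0.9, "wellbeing": 0.2},
    {"name": "B", "clicks": 0.4, "wellbeing": 0.8},
    {"name": "C", "clicks": 0.6, "wellbeing": 0.6},
]

def engagement_objective(item):
    # Optimizes a shallow, easily measured metric.
    return item["clicks"]

def care_objective(item):
    # Optimizes a proxy for the well-being of the system's subjects.
    return item["wellbeing"]

# The same mechanical optimization step yields different "optimal" outcomes.
print(max(items, key=engagement_objective)["name"])  # -> A
print(max(items, key=care_objective)["name"])        # -> B
```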

Another notion connected to the idea of autonomously operating “smart” systems was resilience, which denotes a system’s capacity to change in order to survive external perturbations [1]. While the stability or robustness of algorithmic systems is their ability to maintain fixed functioning under external influences, resilience concerns the temporal dimension and lifespan of these systems: their ability to evolve and adapt their behavior so as to “live” through changing environmental conditions. The resilience of an algorithmic system then depends not only on its ability to find optimal solutions to problems, but also on the ability to maintain the system’s operation. In this work, human efforts in repair and maintenance are likely to be crucial.

Repair, temporality, and decay of software

A recurrent theme concerned algorithms as implemented in software, and the temporal dimension of the life-course of software systems. The temporality of software becomes central through the gradual decay of legacy technologies and the care required to keep them operational. Repair work is also part of the lifetime of algorithms: hardware and software systems are implemented in evolving programming languages and within divergent organizational settings, and consequently require constant maintenance and monitoring [cf. 2]. The notion of repair connects with multiple themes discussed during the workshop, for instance the optimization of algorithmic processes and the role of human agency in algorithmic systems. Human-algorithm interactions can thus be thought of as involving not only a continuous process of interpretation, but also the work of correcting and explaining errors and idiosyncrasies in results, coming up with workarounds to adapt tools to diverging goals, and keeping software and hardware implementations operational.

Human agency and gaming in algorithmic systems

One of the topics brought up was the question of human agency in relation to algorithmic systems. We discussed how systems could be rehumanized by bringing the human work that goes into them back into the spotlight, making visible the labor required to maintain, train, and develop different kinds of systems. On the other hand, algorithms are used both to limit and to enable certain forms of action, raising questions about what people can do with technology to expand their possibilities, and how technology can also be used to limit human potential. One suggested way to approach the relationship between algorithms and humans was to focus on the interaction between them: we might learn a lot from observing empirically what happens when humans encounter algorithmic systems.

Further, we discussed human agency towards these systems through the concepts of games and algorithmic resistance. These approaches highlight the human potential to find vulnerabilities or spaces of intervention, and to act against or otherwise manipulate systems, be it for personal gain or with the activist aim of creating a more just world. Whatever the reason for acting against the system, a question arises: what does it mean to win against an algorithm? So-called victories against these systems may be short-lived, as games and resistance do not happen in a vacuum: it is possible to win a battle but lose the war. This discussion highlighted how algorithmic systems, just like human beings, are situated in the wider society and its networks of relationships.

Power, objectivity, and bureaucracy

The notions of the power and objectivity of algorithmic systems were discussed on multiple occasions during the workshop. These concepts are often brought up in the critical data and algorithm studies literature as well, with critics arguing against utopian hopes of unbiased knowledge production [e.g. 3] and pointing to the far-reaching societal consequences of algorithmic data processing and classification [e.g. 4]. During the workshop, however, the questions of the objectivity and power of algorithms were themselves questioned. For instance, debates about the power of algorithms would benefit from increased clarity, which could potentially be achieved by connecting the literature with extant accounts of power in political science, such as Steven Lukes’ [5] theory of the three faces of power.

Similarly, the issue of algorithmic objectivity can take on several different meanings depending on whether the discussion focuses on hidden biases in data production or, for instance, on the mechanical objectivity [6] of algorithmic procedures. One particularly interesting metaphor for thinking about objectivity in algorithmic systems is that of bureaucracy [e.g. 7], and the sense of objectivity conferred on action and decision-making through the establishment of rigid, explicit, and seemingly impartial rules [8]. The quest for such procedural objectivity [9] is likely present in efforts to automate decision-making in algorithmic systems as well. Comparing how explicit rules affect power relations within bureaucracies with how algorithmic procedures operate in organizations could be one way to get a grasp on how power works within algorithmic systems.
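
As a toy illustration (ours, not from the cited works), procedural objectivity can be thought of as a decision reduced to rigid, explicit rules. The rule and thresholds below are invented; the point is that the values sit in the rule’s definition, while its application is mechanical and seemingly impartial:

```python
# A hypothetical bureaucratic rule encoded as an explicit procedure.
# The thresholds are invented; choosing them is the political act, while
# applying them is mechanical, reproducible, and seemingly impartial.
def benefit_decision(monthly_income: float, dependents: int) -> bool:
    """Grant a benefit if income falls below a threshold that rises with dependents."""
    threshold = 2000 + 400 * dependents
    return monthly_income < threshold

print(benefit_decision(2100, 1))  # True: within the rule's threshold
print(benefit_decision(2100, 0))  # False: the same rigidity that confers 'objectivity'
```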

Optimism, pessimism, and the notion of algorithm

Given the multitude of approaches present at the workshop, the question arose of the usefulness of the notion of “algorithm” itself for thinking about the technological and social phenomena we are interested in. While algorithms and algorithmic systems were the backdrop of our discussion, it became evident that the phenomena we were discussing are at once broader and more multifaceted. Starting from algorithmic systems, we ended up discussing themes such as collaboration and the preconditions of human work, motivations and justifications for action, the maintenance and design of technology, temporality, and the discontinuities between interpretive and formal processes. This is likely as it should be, given that the aim of the workshop was to think about metaphors for discussing algorithms. Still, the variety and scope of the perspectives testifies to the fuzziness of the notion of algorithm, and calls attention to the need to delineate and clarify the central concepts figuring in discussions about algorithmic systems, as well as their connections to more longstanding discussions in various disciplines.

Related to these observations, our second-day discussion about critical/pessimistic and constructive/optimistic attitudes towards algorithms called attention to the various ways in which understandings of technology can be oversimplified. In particular, the “naive” optimism and technological solutionism often attributed to the developers of technology in critical treatments was called into question. While critical approaches are indeed important, self-sustained discussions about the limitations and problems of technology risk oversimplifying the understanding of the “other side”. Such simplifications are unlikely to foster fruitful engagement with the communities developing new technologies. For us, this emphasizes the importance of reflexive thinking that takes seriously the risk of “naive” criticism in critical accounts of technology, and that does not situate social scientists outside the troubles of algorithmic systems.

By: Juho Pääkkönen, Jesse Haapoja & Airi Lampinen

The next workshop in the series will take place in the autumn in Copenhagen, with a focus on approaches and methods.

– –

[1] Halpern, O. et al. (2017). The smartness mandate: Notes towards a critique. Grey Room 68.

[2] Jackson, S. (2014). Rethinking repair. In T. Gillespie, P. Boczkowski and K. Foot (eds.), Media Technologies: Essays on Communication, Materiality, and Society. MIT Press.

[3] Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski and K. Foot (eds.), Media Technologies: Essays on Communication, Materiality, and Society. MIT Press.

[4] Ananny, M. (2016). Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology & Human Values 41(1).

[5] Lukes, S. (1974). Power: A radical view. London and New York: Macmillan.

[6] Daston, L. and Galison, P. (1992). The Image of Objectivity. Representations 40.

[7] Crozier, M. (1963). The bureaucratic phenomenon. Chicago: University of Chicago Press.

[8] Porter, T. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton University Press.

[9] Douglas, H. (2004). The Irreducible Complexity of Objectivity. Synthese 138.

Digital technologies, data analytics and social inequality

We were recently involved in organizing a working group on what might be called “digital inequalities” at the Annual Finnish Sociology Conference. Based on the working group, we reflect on the relationship between digital technologies and social inequalities, and on the role of critical scholarship in addressing the issue.

To paraphrase Kranzberg’s (1986) well-known first law of technology, while digital technologies and their capacity to produce data are neither good nor bad, they are not neutral either. With the increasing use of data analytics and new digital technologies, and the ever-intensifying hype over them, it is extremely important to examine the connection between technological and social divides.

A rich body of research on “digital divides” has focused on unequal access to technology and differences in its usage (e.g. van Dijk, 2013). Aiming to expand the view beyond access and usage, Halford and Savage (2010) have proposed the concept of “digital social inequality”, emphasizing the interlinking of social disadvantages and digital technologies. This means that the development, use, and effects of digital technologies are often related to social categories such as gender, race/ethnicity, age, and social class.

Examining the divisions connected to the use of data, Andrejevic (2014) points to “the big data divide”, a concept referring to the asymmetric relationship between those who are able to produce and use large quantities of data and those who are the targets of data collection. This divide concerns not only access to data and the means of making use of it, but also differential access to ways of thinking about and using data. D’Ignazio and Klein (2019) further discuss the power structures inherent in the collection and usage of data, pointing out that these structures are often made invisible and thus taken as an objective viewpoint in which “the numbers speak for themselves”. Through many empirical examples, D’Ignazio and Klein demonstrate that even the choices of which topics data is collected on, analyzed, and communicated rest on power relations in terms of whose voices and interests are represented and whose are marginalized.

Partly inspired by the above-mentioned research, we recently organized a working group at the Annual Finnish Sociology Conference, The Shifting Divides of Our Digital Lives, to discuss old and new forms of inequalities, the reactions they provoke, and their societal consequences. To guide our presenters, we posed some additional questions: What hinders or facilitates equal participation in the digital society? How are social institutions adapting to digital change? What forms of civic engagement and activism arise given digital society’s asymmetries?

Here we summarize selected findings of presentations that provided insights into how digital technologies and the use of data analytics shape our differential opportunities for social participation even when we, as citizens, might not be fully aware of it.

In her presentation Contested technology: Behavior-based insurance in critical data studies, Maiju Tanninen (University of Tampere) pointed out the many concerns that the data studies literature has identified in connection with the use of self-tracking technologies in personalized insurance. These include the possibility of data-based discrimination, heightened surveillance, and control of clients’ behavior. However, Tanninen argued that while these critiques paint a rather dystopian picture of the field, they are largely focused on the US context, fail to differentiate between insurance types, and often lack empirical engagement. In practice, the use of self-tracking devices for developing personalized insurance often looks doubtful, among other reasons because of the poor quality of the data. Tanninen argued that for critical research on the topic to be constructive, to better understand the benefits of these technologies, and to offer new insights, we need empirically grounded research in the European and, more specifically, the Finnish context.

In his presentation Ageing migrants’ use of digitalised public services: Ethnographic study, Nuriiar Safarov (University of Helsinki) emphasized the need for an intersectional perspective in studying access to and use of e-services among different groups of migrants. In his doctoral project, Safarov examines the impact of the digitalization of public services in Finland on older Russian-speaking migrants who permanently live in Finland. Safarov pointed out that this specific group of migrants may face particular barriers to accessing e-services, not only because of their age but also because of a lack of language skills and social networks. Empirical work on such groups can, in turn, offer insight into the interplay of digital-specific and more ‘traditional’ social divides.

In her presentation Facebook Groups interaction affecting access to nature, Annamari Martinviita (University of Oulu) compared a popular Finnish Facebook group on national parks with the official information website of Metsähallitus. Martinviita demonstrated that while both platforms may aim to be inclusive in advertising access to and exploration of nature, in practice they may produce various divides by presenting and constructing ‘correct’ ways of visiting national parks.

In their presentation Political orientation, political values and digital divides – How does political orientation associate with the political use of social media?, Ilkka Koiranen and colleagues (University of Turku) demonstrated that while social media provides new avenues for political participation, there are significant differences between political parties in how their supporters use social media for political purposes. The research was based on a nationally representative survey dataset. The results showed that newer political movements with younger, more educated supporters representing post-material values are more successful on social media, echoing previous findings in digital divides research.

In his presentation How data activism allies with firms to seek equal participation in the digital society, Tuukka Lehtiniemi (University of Helsinki) discussed the case of MyData, a data activism initiative aiming to enhance citizens’ agency by providing them with the means to control the use of their personal data, in an attempt to address injustices related to equal societal participation. Various interest parties are involved in MyData, including technology-producing firms that seek market and policy support for their products. Lehtiniemi argued that particular framings of MyData’s objectives are employed to support this involvement. While it is important to develop alternative imaginaries for the data economy, a central question remains: how to move from abstract concepts such as citizen centricity and data agency to actual alternatives that challenge dominant imaginaries of data’s value.

These presentations highlight that the promises of equal participation so often associated with digital technologies and data analytics are often hard to redeem in practice. If approached without care, these technologies may reproduce and extend existing patterns of bias, injustice, and discrimination.

It is thus important to keep in mind that since digital technologies and data analytics are forged by humans in specific societal settings and power relations, they carry traces of the societal conditions in which they were conceived and manufactured. Consequently, it is salient to explore what kinds of potentially biased assumptions are embedded in technologies that are used so extensively in today’s society. This is why we think it is urgent to advance critical approaches and support collective citizen action to create and implement technologies and data analytics that improve opportunities for all.

At the same time, as some of the presentations in the working group indicated, criticism by itself may not provide constructive input into the development and usage of digital technologies. We should therefore not only point out the ways in which digital technologies and data analytics, their current usage, and their potential future trajectories can create or exacerbate societal problems. We should also engage in conceptual and empirical research that can help identify preferable alternatives and steer technological developments toward more societally desirable and sustainable ones.

By: Marta Choroszewicz, Marja Alastalo and Tuukka Lehtiniemi

Choroszewicz is a postdoctoral researcher at the University of Eastern Finland, Alastalo is a university lecturer at the University of Eastern Finland, and Lehtiniemi is a doctoral candidate at the University of Helsinki.

– –

References:

Andrejevic, M (2014) The big data divide. International Journal of Communication, 8: 1673–1689. https://ijoc.org/index.php/ijoc/article/view/2161

D’Ignazio, C and Klein, L (2019) Data Feminism. MIT Press Open. Available at: https://bookbook.pubpub.org/data-feminism

Halford, S and Savage, M (2010) Reconceptualising digital social inequality. Information, Communication and Society 13(7): 937–955. https://doi.org/10.1080/1369118X.2010.499956

Kranzberg, M (1986) Technology and history: “Kranzberg’s laws”. Technology and Culture, 27(3): 544–560. https://doi.org/10.2307/3105385

van Dijk, JAGM (2013) A theory of the digital divide. In: Ragnedda, M and Muschert, GW (eds) The digital divide: The Internet and social inequality in international perspective. Routledge, 36–51.

CFP: Rajapintapäivät 15.-16.11.2018

Rajapintapäivät is an open and free event for everyone interested in social scientific research on information technology, or in the use of digital and computational methods in the social sciences. Rajapintapäivät will be held in Otaniemi, Espoo, on 15.-16.11.2018.

Thursday 15.11. is a workshop day, and Friday 16.11. takes the form of an unconference. Welcome!

CALL FOR PROPOSALS: THE UNCONFERENCE

The unconference on Friday 16.11. is an open, participation-driven event whose agenda is formed collaboratively by the participants. All topics that reflect on or develop technology, society, and digital methods are most welcome!

The unconference is built around the content participants bring to the event. We therefore ask participants to propose content, preferably by 31.10. The event consists of 25-minute sessions that participants can reserve in advance. A slot can be used for a traditional presentation followed by discussion, but we also encourage experimenting with other formats and leaving plenty of time for discussion.

Topics should fit the themes of Rajapinta ry: technology, society, and digital methods. There are four types of sessions:

  • This is what I study (e.g. a paper, a dissertation topic, a master’s thesis presentation, or work in progress)
  • This is what I would like to study or understand better (e.g. a discussion of a new research perspective, or a presentation of a project idea or a data collaboration)
  • How to (e.g. a demo of an analysis method or a data collection tool, possibly using an example case)
  • Experimental session (e.g. something completely different)

Propose your topic by adding it directly to the Rajapintapäivät wiki page:

https://wiki.helsinki.fi/display/rajapintapaivat

Presentations may be given in any language.

CALL FOR PROPOSALS: ETHICS WORKSHOP AND THESIS SEMINAR ON 15.11.

On Thursday 15.11. we will organize three workshops on topics arising from Rajapinta ry’s goal of strengthening the conditions for research on technology and society in Finland.

Research ethics workshop, 15.11.2018, 10:00–12:00
Research ethics practices around digital data and methods are in flux, and questions arising from different research designs rarely have clear answers. In this hands-on workshop we will consider and work through research ethics questions arising at different stages of the research process. If you would like to present ethical challenges and/or solutions arising from your own research, contact Minttu Tikka (minttu.mt.tikka (a) helsinki.fi).

Thesis seminar, 15.11.2018, 10:00–12:00/13:00
Are you working on a thesis related to digital society, or using computational methods or digital data? Come present and discuss your work at the Rajapintapäivät thesis seminar. We will try to reserve a suitable amount of time (depending on the number of participants) for presenting and discussing each thesis. We will also invite researchers specializing in the registered topic areas to comment on the theses.

Infrastructures for computational social science, 15.11., 13:00–16:00
In this open workshop, we will use scenario work to jointly develop ideas about what kinds of technological infrastructures, and what kinds of skills, would be needed to advance computational social science in Finland. No prior knowledge of computational social science or coding skills is required; in fact, the workshop also serves as an excellent introduction to what it is all about! The workshop is co-organized with CSC – IT Center for Science and Futusome Oy.

More information is available on the Rajapintapäivät wiki.

The purpose of the workshops is to generate concrete joint activity around each workshop’s topic. No prior familiarity with the topics is required of participants.

Registration

Register using the online form by 31.10. You can also participate in Rajapintapäivät without a content proposal of your own.

The event is free of charge.

– –

Call for Proposals: Rajapinta Days 2018

Our annual unconference will be organized in Otaniemi, Espoo, on 15.-16.11.2018. The event is open to all interested in the study of digital society and digital methods.

Thursday 15.11. is a workshop day that includes Rajapinta’s thesis seminar for undergrads, as well as workshops on the research ethics challenges of research projects and on infrastructures for computational social science.

Friday 16.11. is an unconference day, which builds upon the ideas and proposals of the participants. You can also participate as a non-presenter.

More information and proposal guidelines here:

https://wiki.helsinki.fi/display/rajapintapaivat

*We ask that submissions and registrations be completed by October 31 at the latest.*

– –

Rajapinta ry is an association focusing on the study of digital society and the application of digital methods to social research. We are supported by the Kone Foundation. Rajapinta Days are also supported by the Helsinki Institute for Information Technology HIIT.

AoIR 2017 preconference: Less Hate in Politics!

If you are heading to AoIR 2017, consider also joining our preconference on hate speech recognition and prevention:

Less Hate in Politics! Machine Learning and Interventions as Tools to Mitigate Online Hate Speech in Political Campaigns

  • Oct 18th, 9 am – 12:30 pm
  • Dorpat Convention Centre, Tartu, Estonia

Discriminating, hateful speech online, often targeting specific groups and minorities, has become a pressing societal problem. Hateful speech is a form of verbal violence that creates enmities, silences debates, and marginalizes individuals and groups from online participation.

What makes this challenging is that ‘hate speech’ has come to mean a variety of speech acts and other bad behavior online, ranging from criminally punishable acts to speech that is uncivil and disturbing but nevertheless to be tolerated. This definitional difficulty is further exploited in claims that any limitation of hate speech endangers people’s right to freedom of expression.

Hate speech has been criminalized in many countries, and major Internet companies also engage in efforts to limit it. While many social media platforms allow users to flag content as hate speech for moderation purposes, no official follow-up actions take place. Furthermore, the automatic identification of hate speech is limited by the lack of adequate tools.

Pre-conference workshop aim and content:

The aim of the workshop is to facilitate the development of tools and processes through which the academic community could run interventions aimed at decreasing toxicity in online spaces.

We will give participants a kick-start with computational tools for hate speech recognition. We will also discuss and reflect on the challenges of such interventions, and examine the opportunities and problems of deploying such systems. Our own experiences, which we reflect on in the workshop, stem from a project in which the social media activity of candidates was monitored during the Finnish municipal election campaigns in April 2017.
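
As a rough idea of what such a computational kick-start can look like, here is a minimal sketch of a supervised hate speech classifier built from standard scikit-learn components. This is illustrative only: the example messages and labels are invented placeholders, and the sketch does not reproduce the tools or data of the Finnish election project.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder corpus: a real study would use an annotated corpus of
# campaign-period social media posts. Label 1 = hateful, 0 = not hateful.
texts = [
    "members of group X should be thrown out of the country",
    "group X people are all criminals and liars",
    "we should get rid of everyone from group X",
    "people like you from group X do not belong here",
    "group X is a plague on this city",
    "no one from group X can be trusted",
    "the city council meets again on Thursday",
    "thanks everyone for a great campaign event today",
    "our new transport plan aims to cut commute times",
    "happy to answer questions about the school reform",
    "the weather was lovely at the market square",
    "read my blog post about local healthcare services",
]
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

# Word n-gram TF-IDF features: a common, simple baseline for noisy social media text.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=4, stratify=labels, random_state=0
)

classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train, y_train)

# On a real corpus, per-class precision and recall matter more than accuracy,
# since hateful messages are typically a small minority of all content.
print(classification_report(y_test, classifier.predict(X_test), zero_division=0))
```

A baseline like this is mainly a starting point for discussion: in practice, the hard parts are the definitional questions above, the quality of the annotations, and deciding what intervention should follow a positive prediction.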

The workshop will follow an interactive style using both online and offline tools to facilitate discussion.


Clipart by Eggib.

Hybrid reputation narratives, with emotion and technology – doctoral defense on 16.6.2017

On 16.6.2017 at 12 noon, Salla-Maaria Laaksonen (M.Soc.Sc.), i.e. yours truly, will defend her doctoral dissertation “Hybrid narratives – Organizational Reputation in the Hybrid Media System” at the Faculty of Social Sciences, University of Helsinki. Welcome to the defense to hear an academic debate on organizational reputation and networked publicity! Below is a short summary of the research.

***

In my dissertation, I study how reputation narratives concerning companies and other organizations are formed in the hybrid media space. The research problem is twofold: I examine how the new communication environment affects the formation of organizational reputation, and, on the other hand, what kinds of cognitive and emotional effects reputation and reputation narratives have. The hybrid media system is a fairly recent concept in communication research (Chadwick 2013) that seeks to make sense of the current media landscape. Hybridity refers to the mixing of different media forms: the contents and formats of social media and traditional media are strongly intertwined in networked publicity.

In my dissertation, I examine the hybrid media space as a site of storytelling. From this perspective, every blog post or tweet is a small story of the kind technology invites us to tell about our everyday lives and experiences. Many of these stories deal, directly or indirectly, with companies and other organizations, which makes them, by definition, reputation narratives. Such stories have been shared in everyday life before, but technology enables a new kind of storytelling: stories spread beyond one’s immediate circle, they are archived, and they become searchable and editable.

Some online technologies act as storytelling aids in a very particular way: they organize, curate, and shape stories by combining different story fragments into a single view. This is how, for example, the collaboratively maintained encyclopedia Wikipedia or the search engines that comb the web operate. In hybrid media, therefore, reputation stories are told by human actors and technology together. Based on my dissertation, I argue that technology changes the ways in which stakeholders tell stories about organizations. The stories that emerge on the platforms of networked publicity mix not only different media forms, but also facts and opinions, and rational and emotional content.

My dissertation thus highlights the significance of emotions for reputation. Both reputation research and various reputation metrics have traditionally focused on rational attributes: product quality, leadership, financial performance. Reputation, however, appears to be just as much an emotional concept. Online stories about organizations are highly emotional: people fall in love with and get angry at companies, and fan communities as well as angry scandal communities form around them. Features of technology, from emoji to the like button, also encourage the expression of emotions.

Nor are emotions merely a matter of expression. One substudy of the dissertation showed that good and bad reputations are reflected differently in test subjects’ bodily reactions as they read online news or online comments about a company. Reputation is thus also an interpretive frame: an unconscious, embodied reaction that guides a person’s actions, for instance when making choices while shopping.

From the perspective of reputation research, my work builds a new angle on the field. Reputation has traditionally been studied either as an organization’s economic asset or as an interpretive element in stakeholders’ minds. In this work, I define reputation as a communicative phenomenon that exists as individuals’ interpretive frames and as socially constructed narratives. Reputation narratives, however, also have measurable effects on the people who read them and on their interpretive frames. Therefore, both reputation and reputation narratives are intangible assets for organizations.

The dissertation consists of five articles and a summary chapter. The articles draw on four different datasets: interviews with communication professionals, social media discussion data, Wikipedia data, and psychophysiological measurements. Methodologically, the research thus combines qualitative, narrative analysis with experimental research.

***

TL;DR: “A hybrid reputation narrative is born when 😩 and 👾, together and with the help of 📱💻, form 📜💌📜, which in networked publicity are 💾 and 📢, and which have 📉 effects on 🏭🏨 🏢.” (ref. Your Research, emojified)

The electronic version of the dissertation can be read in the E-thesis service.

The dissertation research was funded by Liikesivistysrahasto and Tekes.

CFP: Sosiologipäivät: The digitalization of society and methods

Sosiologipäivät (Annual Finnish Sociology Conference), 23.–24.3.2017, University of Tampere

The digitalization of society and methods

Coordinators: Veikko Eranti (veikko.eranti@gmail.com), Epp Lauk (epp.lauk@jyu.fi), Erle Rikmann (erle.rikmann@jyu.fi), Tuukka Ylä-Anttila (tuukka.yla.anttila@gmail.com)

As people live their social lives ever more digitally and online, digital life must be made part of all social research, while at the same time the specificity of digital life must be studied: in what ways does digitality affect sociality? How does the digitalization of everyday life, consumption, and work change our ways of living? What kinds of inequalities does digitalization bring with it? Do social institutions and citizens’ capabilities grow and adapt at the pace of digitalization? Does anonymous networked communication produce conspiracy theories and hate speech?

Big data and computational social science bring professionals other than social scientists to the field of social research. Along with the new possibilities in data and methods come problems: how can data science and social science be made to talk to each other? Or should we rather make sure that social scientists have data skills and that data scientists have the skills for societal analysis? Digitality blurs the line between quantitative and qualitative methods, as the growing body of quantitative research is expected to interpret its results more than before, while qualitative research is increasingly expected to use representative data and produce measurable results. What can a quantitative sociologist do with social media data, and how can a cultural sociologist use algorithms?

We welcome presentations on the digitalization of societies and methods, in Finnish or in English! Abstract deadline: 22.1.

https://www.lyyti.fi/reg/sosiologipaivat2017cfp

Infrastructures and publics – notes from a conference in Siegen

I attended the First Annual Conference 2016, Infrastructures of Publics – Publics of Infrastructures, to gain more insight into current European thinking around topics like platforms, society, and algorithms. The University of Siegen organized the conference, as they have a new center of excellence around these themes.

Putting it all together

I’ve tried to do some categorization and pick highlights from the conference, but before moving on to these smaller bits, it is worth saying something about the conference as a whole. Over the course of the conference, it became apparent that terms such as infrastructures and the public have various meanings. The treatment of infrastructures focused surprisingly much on physical things for my taste, but it offered insights into, e.g., archaeology research (not that far from coding, actually) and the enablers of digital interaction. The public was approached primarily through media scholarship.

If I understood it correctly, the center of excellence aims to bring these two approaches together and to generate ideas about how infrastructures and publics interact and shape each other. Sadly, the people involved seem most prominent in media and culture studies, which I don’t follow that actively, and they publish less in my favorite venues (CSCW). I do hope that these ideas will reach the CS people as well; they (we?) tend to forget this type of research far too often.

Politics and infrastructures

Several presentations focused on the political aspects of using various tools and infrastructures, so there is a lot to cover here.

Christopher Le Dantec presented his work on using sensors for public participation. Sadly, I had already read his work in the CSCW/CHI domain (e.g. the biking case), so the presentation offered me less novelty. Fundamentally, he has instrumented city bikers to map the routes they use, and then used the data in collaborative design sessions to develop new routes for bikers. He did, however, use a term I had not heard before: data literacy. Sadly, no clear definition was given, so I’m inclined to treat it in the same way as computational literacy, a notion recently criticized by Matti Tedre and Peter J. Denning.

The more interesting presentation was by M. Six Silberman, a computer science Ph.D. now working in a German labor union on new platforms for work. He presented Turkopticon, a platform that tracks the reputation of those posting tasks on Mechanical Turk. The idea is to balance the existing platform by providing those employed through it with insights about their employers. I really like this thinking, as it shows how information technology can be used to challenge systems created by other IT researchers and to try to restore balance to platforms.

Finally, Hagen Schölzel presented the concept of communication control, applied mostly in the business and public relations literature. The idea behind communication control is that actions are planned beforehand, behind the curtains, to steer communication in hoped-for directions; it is precise but does not look like it. The idea can be applied to various social computing applications, where interaction is often more strategic than it seems.

Studying the app ecologies

Carolin Gerlitz and Fernando van der Vlist presented an interesting study of the types of applications that emerge to support a primary platform, e.g., in the case of Twitter, all the various Twitter-based applications out there. They concluded that there are at least three application types:

  1. strategic engagement, where applications aim to utilize the various forms of data in the platform
  2. enhanced functionality, where applications improve existing platform functionality
  3. innovative apps, which add novel ways to use the platform.

These types relate further to grammars of action, which tell us more about how applications are supposed to be used; these grammars are embedded in the APIs and the various rules of the platform. Examining the extended applications thus describes the grammar rather clearly.

Archeology and infrastructures

Jürgen Richter presented how the cabinets used by archaeologists have also shaped the direction of the research domain as a whole. For example, the early focus on classifying objects by their materials directed research towards different ages, such as the Stone Age. The organization of the cabinets has become, almost accidentally, materialized politics. I started to think about what similar things might exist in the fields I am familiar with, and I suspect that the overuse of demographic variables to explain phenomena might be such a historical relic, passed down over generations and still shaping how we examine human activity in various social processes.

Furthermore, he presented an interesting temporal observation: as an archaeological collection is built up over generations, the current curator collaborates with previous curators, aiming to understand their logic of categorizing and storing data. Adapting this idea to a more digitalized area, programmers collaborate with all the previous coders in order to understand what the heck is going on. Naturally, this collaboration can be difficult: the previous actors might be out of reach, e.g. at another company, or may even have passed away.

I’m grateful for the travel grant from the Doctoral Programme in Computer Science at the University of Helsinki. This post has been cross-posted to my personal blog.