Lectio Praecursoria: Hybrid Narratives – Organizational Reputation in the Hybrid Media System

Presented in the public defense of Salla-Maaria Laaksonen on June 16th, 2017, at the University of Helsinki. [Lectio in Finnish here]

* *

Dear Custos, Mr Opponent, ladies and gentlemen,

Once upon a time the Internet was born.

The Internet that allowed us to find information, send messages, and express ourselves. The Internet that grew up and became an essential part of our everyday lives. Or so the story goes.

Last autumn, I attended a professional workshop. The consultant asked us to write down the name of the biggest influencer in our lives. Four out of seven participants wrote down “Internet”. I believe this is a feeling many of us can identify with.

Indeed, it is a compelling narrative to tell how technology changed everything. We humans are programmed to tell and hear stories. We imagine narratives even into places where they do not exist. For us, narratives are a way to organize our lifeworld, and a means to explain changes such as the one brought about by the rise of modern communication technologies and social media.

In reality, the narrative is more complex to tell. It would be far too easy to say technology did it all. Instead, as I argue in my thesis, we are in the middle of not a technological but a technosocial revolution that affects, among other things, the ways in which we, as customers and citizens, interact with organizations.

* *

A central concept in my thesis is that of the hybrid media system. The concept refers to our current media environment, mediated and perhaps even amplified by technology. It is a reality where the forms and logics of traditional media become merged with those of social media.

An illustrative example of the hybrid media space is the Facebook newsfeed, in which updates written by one’s peers are shown side by side with news produced by traditional media – shared either by the media themselves or by one’s friends. Another prominent example is the collaborative online encyclopedia Wikipedia, where the content produced by users is to a large extent built by referring to news content or content elsewhere on the web. A third example is Google, the search engine that plays a central role in our everyday lives, retrieving and sorting various media content for our queries and needs.

In my dissertation I explore the hybrid media system as a place for telling stories. From this perspective each blog post, status update or tweet is a small story, a fragment of a story, that the technology invites us to share about our everyday experiences. These small stories have become important building blocks of our daily lives. Certain online technologies act as storytelling tools in a very special way: they organize, curate and modify the stories we tell by combining and remixing them. Wikipedia, for example, functions in exactly such a manner, as does any search function within a service.

As we use these technologies, every day, narratives are formed, and the narrators of these stories are both us humans and the technology on which we narrate. Next, let me explain how I came to this conclusion.

* *

In this dissertation I investigate how reputation narratives concerning companies and other organizations are formed in the hybrid media system. This means I am not that interested in the ways in which the organizations themselves do marketing or communication. Instead, I am interested in the ways in which human actors and non-human actors, such as technology, together write stories about the organizations.

This approach is actually quite common in reputation studies. Reputation is a concept that refers to the views the stakeholders of an organization have regarding that organization. What makes reputation special compared to its sister concepts, such as brands or company images, is that reputation always reflects the full historical performance of the organization. That is, reputations connect to the actual doings and deliverables of the organization. Brands and images can be constructed, but reputations need to be earned.

Thus, reputation narratives are not stories told by the organizations themselves. They are narratives told by customers, partners, reporters, analysts and laypeople. They are stories that are often based on real encounters between the organization and its stakeholders – on the real experiences people have had with the organization or with its products and services.

Of course, such stories have always existed. They have been told on market squares, at coffee tables, and in swimming hall saunas. Maybe a friend has told of a good experience in a restaurant. A neighbor has recommended a good handyman to help with renovations.

Technology, however, changes the ways in which stories about organizations are born and how they spread. What happens now is that emotional tweets by fired employees are embedded in the news about shutting down a factory unit. A customer dissatisfied with a hotel room can make a public YouTube video that shows the ugly room, which then ends up in the Facebook feeds of thousands of people and will most likely eventually be covered by traditional media. A horror story about a dishonest car dealer is anonymously spelled out on Suomi24 and ends up in the Google search results of a random user – even years after the original post was made.

In this dissertation I study these stories from two perspectives. First, using online discussions, Wikipedia material and interviews with professionals, I study how such reputation narratives are formed in the hybrid media system. Second, using an experimental setting, I investigate how these stories affect the people who read them, and how they manifest as psychological and physiological reactions in our bodies.

* *

Thus, from the perspective of organizational reputation studies, I am building a novel approach to reputation by seeing it from the perspective of communication. Traditionally, organizational reputation has been studied either as a form of capital, an intangible asset, or as an interpretative element of the organization. In this dissertation I put forth the suggestion that reputation can be seen as a communicative phenomenon, which exists as individual mental frames but also as socially constructed narratives. These narratives can have measurable effects on the people consuming them, and hence, on the mental frames of reputation.

* *

One important factor behind these effects is emotion. The results of my dissertation also show that reputation is not only a rational but also an affective concept. Traditionally, reputation research as well as various reputation measurements have focused on the rational aspects of reputation: quality of products, leadership, financial success.

However, the psychophysiological measurements conducted in the sub-studies of this dissertation show that companies with good and bad reputations elicit different physiological responses in our test participants while they are reading online news and comments concerning these organizations.

Reputation is thus not only a rational evaluation but also an emotional assessment, embodied in our physiology. That is why reputation unconsciously affects our decisions when, for example, we choose between brands. And that is why, for organizations, both reputation and reputation narratives are indeed a form of intangible capital.

Emotions are also a prominent element of the hybrid media system, and of the reputation narratives themselves. The narratives concerning organizations online are often very emotional. Organizations make us love and hate; they drive us to create fan communities and noisy hate groups. The features of the technology, from emojis to like buttons, also invite us to express our emotions.

The importance of feelings also shows in the ways in which communication professionals interpret and evaluate the different media forms of the hybrid media system. There is an aura of rationality attached to traditional media and an aura of emotionality attached to social media. In particular, the professionals see social media as an arena overwhelmed with emotion and therefore difficult to grasp.

* *

As the main result of this dissertation, I propose that the reputation narratives that are born online are by nature a very specific form of narrative: they are hybrid reputation narratives.

Hybrid reputation narratives are polyphonic and emotional narratives born in the interaction between human and non-human actors. They are narratives in which the story elements can be stored in databases, searched, and hyperlinked by various interacting actors, who, through their use of the technical platforms, generate the reputation narrative from fragmentary story pieces, time after time. That is why no two reputation narratives are alike.

Narratives in this dissertation are thus not conceptualized in the traditional sense, as a coherent story with a beginning, a middle, and an end. Instead, they are new kinds of stories enabled by technologies, which allow for the participation of many authors and many platforms, assembling a narrative from various story pieces here and there. I argue that in such a technological environment the narrator can also be the user who is searching, selecting and clicking; navigating through different texts and images and creating their own, non-linear storyline.

This is a process in which opinions and facts, as well as rational and emotional content, become merged, and in which the storytelling power of the technology interacts and intertwines with the storytelling power of the human actors. In the hybrid media system, the user is bestowed with agency and storytelling capacity, but this agency is both limited and enabled by the technology through which the storytelling takes place.

* *

For years, social scientists have argued over the relationship between technology and society. The most extreme stance is known as technological determinism, that is, the assumption that technology determines the development of the social structure of a given society.

In science and technology studies, a reconciling approach has been called the mutual shaping approach, which suggests that society and technology are not exclusive of one another but, instead, influence and shape each other.

This dissertation suggests that technology changes the ways in which stakeholders tell reputation narratives. In the hybrid media system, users’ storytelling capabilities are both enabled and constrained by the technology on which the stories are told. Science and technology studies describe this with the term affordance: the possibility of an action given by an object or environment.

This influence, however, is not purely technological. It refers not only to like buttons and smartphones, but also to the forms and social practices born on a given platform, or in the hybrid media system as a whole. Practices such as taking pictures of our everyday lives, sharing media content with our friends, updating Wikipedia pages according to the editing rules, or expressing emotions using small yellow faces are all examples of the media logics of the hybrid media system.

In the end, social action and human choices while using the technology affect what kinds of stories are told. The specific ways of using technology, the media logics, are shaped by the cultural and social context of the hybrid media system. Therefore, hybrid media cannot be studied only as technology, but neither can they be studied without the technology.

A concrete example that shows the importance of social action is that social media tools were created for personal communication. They were not created to serve as media where people could express their dissent towards organizations or politicians, or to start revolutions. Nonetheless, they have grown into such tools.

This is why it can be stated that technology changes the way relationships between business and society unfold in the current media system. That is why technology matters, and why social scientists, too, should pay attention to it.

Hate speech detection with machine learning — a guest post from Futurice

This blog post is a cross-posting from Futurice, written by Teemu Kinnunen (with edits, comments and suggestions from project participants Matti and Salla from Rajapinta).

* *

(Foreword by Teemu Turunen, Corporate Hippie of Futurice)

Fast-paced and fragmented online discussion is changing the world, and not always for the better. Media houses are struggling with moderation demands, and major news sites are closing down commenting on their articles because the comments are being used to drive unrelated political agendas, or just for trolling. Moderation practice cannot rely on humans anymore, because a single person can easily generate copious amounts of content, while moderation needs to be done with care; it is simply much more time-consuming than cutting and pasting your hate or ads all across the internet. Anonymity adds to the problem, as it seems to bring out the worst in people.

Early this year the nonprofit Open Knowledge Finland approached [Futurice] with a request for pro bono data science help in prototyping and testing a machine learning hate speech detection system during the municipal elections here in Finland.

The solution would monitor the public communications of the candidates on social media and attempt to flag those that contain hate speech, as defined by the European Commission and the Ethical Journalism Network.

The Non-Discrimination Ombudsman (a government official appointed to oversee such matters) would review the results. University research groups were also involved. This would be an experiment, not something that would remain in use.

After some discussion, head scratching and staring into the night, we [at Futurice] agreed to take on the pro bono project.

A tedious, time-consuming and repetitive task is a good candidate for machine learning, even if the task is very challenging. Moderation by algorithms is already being done, just not transparently. An example? Perspective API by Jigsaw (formerly Google Ideas) uses machine learning models to score the perceived impact a comment might have on a conversation. The corporations that run the platforms we broadcast our lives on are not very forthcoming in opening up these AI models. The intelligence agencies, of course, even less so.

So we feel there’s a need for more open science. This technology will reshape our communication and our world. We all need to better understand its capabilities and limitations.

We understand that automatic online discussion monitoring is a very sensitive topic, but we trust the involved parties – specifically the Non-Discrimination Ombudsman of Finland – to use the technology ethically and in line with Finnish law.

In this article [Futurice’s] Data Scientist Teemu Kinnunen shares what we have done.

Technology

The hate speech detection problem is very challenging. There are virtually unlimited ways in which people can express thoughts, including hate speech. Therefore, it is impossible to hand-write detection rules or a list of hate words, so we crafted a method using machine learning algorithms.

The main goal of the project was to develop a tool that can process social media messages and highlight the messages most likely to contain hate speech for manual inspection. We therefore needed to design a process for finding potential hate speech messages and for training the hate speech detector during the experiment period. The process we used in the project is described in Fig. 1.

Figure 1: Process diagram for hate speech detection.

At first, a manually labeled training set was collected by a university researcher. It consisted of a subset of public Facebook discussions from Finnish groups, collected for the university research project HYBRA, as well as another dataset containing messages about populist politicians and minorities from the Suomi24 discussion board. The training set was coded by several coders to confirm agreement on the labels (kappa > .7). The training set was used to select a feature extraction and machine learning method and to train a model for hate speech detection. We then deployed the model trained with the manually labeled training samples. Each day, we downloaded the social media messages from the previous day and predicted their hate speech scores. We sorted the messages by predicted hate speech score and sent them, with their scores, to manual inspection. After the manual inspection, we obtained new training samples, which we used to retrain the hate speech detection model.
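As a rough outline, the loop in Fig. 1 could be sketched in Python as follows. The helper functions are hypothetical placeholders standing in for the data download, feature extraction, human coding and training steps, not the project's actual code:

```python
# Sketch of the daily iteration loop from Fig. 1. All helpers
# (download_yesterdays_messages, extract_features, manual_inspection,
# train_model) are hypothetical placeholders, not the project's code.
def daily_iteration(model, labeled_messages, labels):
    messages = download_yesterdays_messages()      # new social media data
    X = extract_features(messages)                 # e.g. BoW or embeddings
    scores = model.predict_proba(X)[:, 1]          # hate speech probability
    # Rank messages so the most likely hate speech comes first
    ranked = sorted(zip(scores, messages), reverse=True)
    new_messages, new_labels = manual_inspection(ranked)  # human decisions
    labeled_messages += new_messages
    labels += new_labels
    # Retrain with the enlarged training set for the next day's run
    return train_model(labeled_messages, labels)
```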

Feature extraction

Bag-of-features

There are many methods for extracting features from text. We started with standard Natural Language Processing methods such as stemming and Bag-of-Words (BoW). First, we stemmed the words in the messages using the Snowball stemmer from the Natural Language Toolkit (NLTK). Next, we generated a vocabulary for the bag-of-words from the messages in the manually labelled training samples. Finally, to extract features for each message, we computed a distribution of the different words in the message, i.e. how many times each word in the vocabulary occurs in the message.

Some words appear in nearly every message and therefore provide little distinctive information. For this reason we weighted each word based on how often it appears in different messages, using Term Frequency – Inverse Document Frequency (TF-IDF) weighting. TF-IDF gives higher importance to words that occur in only a few documents (or messages, in our case).
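A minimal sketch of this feature extraction step might look like the following, assuming NLTK and scikit-learn; the example messages are invented:

```python
# Stem Finnish words with NLTK's Snowball stemmer, then build
# TF-IDF-weighted bag-of-words vectors with scikit-learn.
from nltk.stem.snowball import SnowballStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

stemmer = SnowballStemmer("finnish")

def stem_text(text):
    return " ".join(stemmer.stem(token) for token in text.split())

messages = ["tämä on esimerkkiviesti", "toinen esimerkki tässä"]
stemmed = [stem_text(m) for m in messages]

# TF-IDF downweights words that appear in nearly every message and
# gives higher weight to words that occur in only a few of them.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(stemmed)   # one weighted BoW vector per message
print(X.shape)
```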

Word embeddings

One of the problems with bag-of-features is that it has no knowledge of the semantics of words. The similarity between two messages is calculated based on how many matching words the messages contain (and their TF-IDF weights). Therefore, we tried word embeddings, which encode semantically similar words as similar vectors. For example, the distance from the encoding of ‘cat’ to the encoding of ‘dog’ is smaller than the distance from the encoding of ‘cat’ to the encoding of ‘ice cream’. There is an excellent tutorial on word embeddings on the TensorFlow site for those who want to learn more.

In practice, we used the fastText library with pre-trained models. With fastText, one can convert words into a vector space where semantically similar words tend to appear close to each other. However, we needed a single vector for each message instead of a varying number of vectors depending on the number of words in a message. Therefore, we used a very simple yet effective method: we computed the mean of the word encodings.
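In code, the averaging step could look roughly like this, assuming the fasttext Python package and a pre-trained Finnish model file such as the ones distributed on the fastText website (the file name below is an assumed example):

```python
# Turn a message into a single vector by averaging its fastText
# word vectors (the model file name is an assumed example).
import numpy as np
import fasttext

model = fasttext.load_model("cc.fi.300.bin")   # pre-trained Finnish vectors

def message_vector(message):
    # One vector per word; semantically similar words lie close together
    word_vectors = [model.get_word_vector(w) for w in message.split()]
    # A fixed-length message representation: the mean of the word vectors
    return np.mean(word_vectors, axis=0)

print(message_vector("tämä on esimerkkiviesti").shape)   # e.g. (300,)
```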

Machine learning

The task in this project was to detect hate speech, which is a binary classification task: the goal was to classify each sample into a no-hate-speech or a hate-speech class. In addition to the binary classification, we produced a probability score for each message, which we used to sort the messages by how likely they were to contain hate speech.

There are many machine learning algorithms for binary classification tasks, and it is difficult to know in advance which of them would perform best. Therefore, we tested a few of the most popular ones and chose the one that performed best. We chose to test Naive Bayes because it has performed well in spam classification, a task similar to hate speech detection. In addition, we chose to test Support Vector Machines (SVM) and Random Forests (RF), because they tend to perform very well even in the most challenging tasks.
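As a sketch, training one of these classifiers and using its probability scores to rank messages could look like this (scikit-learn; random placeholder data stands in for the real features and labels):

```python
# Train an SVM and rank new messages by predicted hate speech probability.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X_train = rng.rand(200, 50)            # placeholder feature vectors
y_train = rng.randint(0, 2, 200)       # 0 = no hate speech, 1 = hate speech

clf = SVC(probability=True)            # probability=True enables scoring
clf.fit(X_train, y_train)

X_new = rng.rand(10, 50)               # yesterday's messages, featurized
scores = clf.predict_proba(X_new)[:, 1]   # P(hate speech) for each message
ranking = np.argsort(-scores)          # most likely hate speech first
print(ranking, scores[ranking])
```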

Experiments and results

There are many feature extraction and machine learning methods that can be used to detect hate speech, and it is not evident which of them works best. Therefore, we carried out an experiment in which we tested different combinations of feature extraction and machine learning methods and evaluated their performance.

To carry out the experiment, we needed a set of known sample messages containing hate speech and samples not containing hate speech. Aalto University researcher Matti Nelimarkka, Juho Pääkkönen, University of Helsinki researcher Salla-Maaria Laaksonen and Teemu Ropponen (OKFI) manually labeled 1,500 samples, which were used for training and evaluating the models.

1,500 known samples is not much for such a challenging problem. Therefore, we used k-fold cross-validation with 10 splits (k=10), so that in each fold 90% of the samples are used for training and 10% for testing the model. We tested the Bag-of-Words (BOW) and fastText (FT) word embedding feature extraction methods together with the Gaussian Naive Bayes (GNB), Random Forest (RF) and Support Vector Machine (SVM) machine learning methods. The results of the experiment are shown in Fig. 2.

Figure 2: ROC curves for each feature extraction – machine learning method combination, plotting True Positive Rate (TPR) against False Positive Rate (FPR). The FPR axis describes the rate of mistakes (lower is better) and the TPR axis describes the overall success (higher is better). The challenge is to find a balance between the two so that TPR is high but FPR is low.
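A condensed sketch of this comparison could look like the following (scikit-learn; random placeholder data stands in for the 1,500 labeled samples, and AUC summarizes each ROC curve as a single number):

```python
# Compare classifiers with 10-fold cross-validation and ROC/AUC.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_curve, auc
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(1500, 100)                # placeholder features (BoW or FT)
y = rng.randint(0, 2, 1500)            # placeholder labels

cv = StratifiedKFold(n_splits=10)      # k = 10: train on 90%, test on 10%
for name, clf in [("GNB", GaussianNB()),
                  ("RF", RandomForestClassifier()),
                  ("SVM", SVC(probability=True))]:
    # Out-of-fold probability predictions for every sample
    probs = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
    fpr, tpr, _ = roc_curve(y, probs)
    print(name, "AUC =", round(auc(fpr, tpr), 3))
```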

Based on the results presented in Fig. 2, we chose BOW + SVM to detect hate speech. It clearly outperformed the other methods and provided the best TPR, which was important for us because we wanted to sort the messages by how likely they were to contain hate speech.

Model deployment

Based on the experiment, we chose a feature extraction and machine learning method and trained a model for hate speech detection. In practice, we used the score of the binary classifier to sort the messages for manual inspection and annotation.

We ran the detector once a day. First we downloaded the social media messages from the previous day, then predicted hate speech (scored each message) and stored the results in a CSV file. Next, we converted the CSV file to Excel for manual inspection. After the manual inspection, we obtained new training samples, which were used to retrain the hate speech detection model.

During the field experiment, we found that the model was able to sort the messages by the likelihood of containing hate speech. However, the model was originally trained with a somewhat biased set of samples, and therefore it also gave rather high scores to messages that did not contain hate speech. Manual inspection was therefore required to make the final decision on the most prominent messages. Further measures concerning these messages were taken by the Non-Discrimination Ombudsman, who in the end contacted certain parties regarding the findings.

Conclusions

In a few weeks, we built a tool for hate speech detection to assist officials in screening social media for hate speech. The model was trained with rather few samples for such a difficult problem. The performance of the model was therefore not perfect, but it was able to surface the few messages most likely to contain hate speech among the hundreds of messages published each day.

In this work, the training -> prediction -> manual inspection -> retraining iteration loop was necessary because, in the beginning, we had quite a limited set of training samples, and because the style of hate speech can change rapidly, e.g. when something big and surprising happens (a terrorist attack in Sweden occurred during the pilot). The speed of the iteration loop determines how efficiently the detector works.

Hybrid reputation narratives, with emotion and technology – doctoral defense on June 16th, 2017

Salla-Maaria Laaksonen, M.Soc.Sc. – the undersigned – will defend her doctoral dissertation on June 16th, 2017 at 12 o’clock in the Faculty of Social Sciences of the University of Helsinki on the topic “Hybrid narratives – Organizational Reputation in the Hybrid Media System“. Welcome to the defense to hear an academic debate on organizational reputation and online publicity! Below is a short summary of the research.

***

In my dissertation I study how reputation narratives concerning companies and other organizations are formed in the hybrid media space. The research problem is twofold: I study how the new communication environment affects the formation of organizational reputation, and, on the other hand, what kinds of cognitive and emotional effects reputation and reputation narratives have. The hybrid media space is a fairly recent concept in communication research (Chadwick 2013) that seeks to make sense of the current media landscape. Hybridity refers to the mixing of different media forms: the contents and forms of social media and traditional media live strongly intermixed in online publicity.

In my dissertation I examine the hybrid media space as a place of storytelling. From this perspective, every blog post or tweet is a small story of the kind that technology invites us to tell about our everyday life and experiences. Many of these stories deal directly or indirectly with companies and other organizations – which makes them, by definition, reputation narratives. Such stories have been shared in everyday life before, too, but technology enables a new kind of storytelling: the stories spread beyond one’s immediate circle, they are archived, and they become searchable and modifiable.

Some online technologies indeed act as storytelling aids in a very particular way: they organize, curate and shape stories by combining different story fragments into a single view. This is how, for example, the collaboratively maintained encyclopedia Wikipedia and the search engines that dig through online content work. That is why, in hybrid media, human actors and technology act together as the storytellers of reputation. Based on my dissertation, I argue that technology changes the ways in which stakeholders tell stories about organizations. In the stories born on the platforms of online publicity, not only different media forms but also facts and opinions, as well as rational and emotional content, become intermixed.

My dissertation thus emphasizes the importance of emotions for reputation. Both reputation research and various reputation measurement instruments have traditionally focused on rational attributes: product quality, leadership, financial success. Reputation, however, appears to be just as much an emotional concept. The stories dealing with organizations online are highly emotional: people fall for companies and get angry at them, and fan communities as well as angry scandal communities build up around them. The features of the technology, from emoji smileys to the like button, also encourage us to express emotions.

Nor are emotions only a matter of expression. A sub-study of the dissertation showed that good and bad reputations are reflected in different ways in test participants’ bodily reactions as they read online news or online comments concerning a company. Reputation is thus also an interpretative frame: an unconscious, embodied reaction that guides a person’s actions, for example in a choice situation while shopping.

From the perspective of reputation research, my work thus builds a new angle on the field. Reputation has traditionally been studied either as a financial asset of the organization or as an interpretative element in the minds of stakeholders. In this work I define reputation as a communicative phenomenon that exists as individuals’ interpretative frames and as socially constructed narratives. Reputation narratives, however, also have measurable effects on the people who read them and on their interpretative frames. That is why both reputation and reputation narratives are intangible capital for organizations.

The dissertation consists of five articles and a summary chapter. The articles use four different datasets: interviews with communication professionals, social media discussion data, Wikipedia data, and psychophysiological measurements. Methodologically, the study thus combines qualitative, narrative analysis with experimental research.

***

TL;DR: “A hybrid reputation narrative is born when 😩 and 👾 together, with the help of 📱💻, form 📜💌📜 that are 💾 and 📢 in online publicity and that have 📉 effects for 🏭🏨 🏢.” (ref. Your Research, emojified)

The electronic version of the dissertation is available in the E-thesis service.

The dissertation research has been funded by Liikesivistysrahasto (the Foundation for Economic Education) and Tekes.

Let’s get organized – Rajapinta ry founded


At our meetup on January 16th, 2017 we held the founding meeting of an association named Rajapinta ry, whose purpose is to support social scientific research on information and communication technology. More precisely, quoting the bylaws currently under review at the Finnish Patent and Registration Office (PRH):

The purpose of the association is to maintain, support and develop societal research on information and communication technology; to maintain, support and develop research methods that apply information technology to social scientific research; and to follow and take a stand on questions concerning the societal impact of information and communication technology and its applications. The association promotes domestic and international cooperation in research in the field.

Why is such an association needed? Unlike in many other countries, Finland does not yet have a dedicated academic research unit for social scientists interested in the societal effects of information and communication technology. Specializing in the theme, for example during master’s studies, is currently not possible in any social science degree program. That is why we decided to nimbly start building the research community ourselves, on a multidisciplinary basis and as a collaboration between different research organizations. Rajapinta is a research community for researchers and students positioned at the interface of information and communication technology and society.

Our activities include popularizing science and providing expert commentary on current phenomena through this Rajapinta.co blog, monthly researcher meetups, and an annual Rajapinta day presenting Finnish research in the field with invited international guest lecturers. Welcome to our meetups to get to know our activities!

For the next four years, the association’s activities are supported by Koneen Säätiö (Kone Foundation) with a grant awarded at the end of 2016. The association is chaired by Matti Nelimarkka.

– –

Same briefly in English: in our meetup on Monday, Jan 16th, we founded an association for social science oriented ICT research, under the name Rajapinta ry. The purpose of the association is to support and develop social science oriented research on information and communication technologies in Finnish academia. Currently there are no institutions in Finnish universities that specialize in ICT with a social scientific orientation, and, for example, no master’s programs allow students to specialize in these questions. Therefore, we decided to start a lean format of collaboration between Finnish researchers and universities to promote and support theoretical, methodological and practical issues related to ICTs and society. Most of our activities are bilingual in Finnish and English – you are most welcome to our future meetups!

Smarter Social Media Analytics project kicks off in December

Photo: Matt Wynn

Last week we received official confirmation that Tekes will fund our project Smarter Social Media Analytics, in which, as the name suggests, we will start building smarter social media analytics together with company partners – the goal is to study and develop new methods for computationally identifying trends and phenomena from the text masses of social media.

The project is carried out by the Consumer Society Research Centre KTK (University of Helsinki) and the Helsinki Institute for Information Technology HIIT (University of Helsinki); of the Rajapinta people, at least Salla, Matti and Arto are officially involved in the project. Below is a brief description of the project from the research plan. Hooray!

**

In social media, perceptions of companies, organizations and brands are constructed and reinforced, and experiences related to them are shared. The digital media environment offers the possibility to computationally follow and study the evaluations, reviews, experiences and sentiments directed at different actors. In this project, we use large online datasets to build methods for the automatic, real-time identification of phenomena and trends emerging in discussions.

At our disposal are social media datasets spanning hundreds of millions of messages: the complete discussion data of the Suomi24 online community, and a dataset of hundreds of millions of messages of Finnish-language content from different social media services, collected by Futusome Oy. In addition, we make use of representative survey datasets collected by Taloustutkimus Oy and large media archives. By juxtaposing these datasets we can build and validate algorithms that make it possible to identify emerging trends and phenomena from online discussions using machine learning. Alongside the computational data analysis and the qualitative analysis supporting it, the project collects qualitative observation and interview data using an action research approach.

The research effort ties into both the development of computational social science in Finland and coaching aimed at improving the diagnostic capabilities of companies that make use of social media (the so-called customer companies). The research perspective also ensures that the analytics are developed wisely in the sense that they take into account the ethical and economic aspects of using social media data, also from the perspective of ordinary users.

The partners of the University of Helsinki’s Consumer Society Research Centre and the Helsinki Institute for Information Technology HIIT in preparing the project have been Aller Media Oy, Taloustutkimus Oy and Futusome Oy (the so-called analytics and data companies, which participate in the project with their work contribution and data). The consortium also includes smaller growth-stage analytics companies (Underhood.co, Sometrik, Leiki, Arvo Partners, and also Futusome), which participate with their work contribution and by providing research data for the researchers’ use, as well as larger customer companies (Atria Suomi Oyj, Ilmarinen Keskinäinen Vakuutusyhtiö Oy, SOK, TeliaSonera Oyj, and also Aller and Taloustutkimus), which participate with a financial contribution.

Trump and social media analytics

Screenshot from Tagboard.

The U.S. presidential election and the role of social media in it have sparked a lot of discussion over the past few days. The debate crystallizes around two themes. First, what does the fact that Donald Trump’s victory came as a surprise to many tell us about filter bubbles in social media? Second, could Trump’s victory have been predicted by following social media?

In this post I open up the latter question, i.e. the role of social media and its analytics in predicting the election victory. YLE recently published a story on this, which made use of Ezyinsights analytics and which I commented on as well. I also spoke about the same theme last March in the lecture series on the U.S. elections organized at the Faculty of Social Sciences, and with a Helsingin Sanomat journalist later in May.

Already in March it was clear that by any social media metric, Trump was the winner of the election – even though at that point all the primary candidates were still in the race. As the Ezyinsights analytics show, the same was visible on many metrics during the election autumn as well.

The problem with social media analytics, however, is that it easily produces nice numbers on top of which claims can be built. This applies especially to the figures provided by the services themselves, such as Facebook.

Facebook measures the “engagement” of messages (a term that does not translate well into Finnish), which in practice is the total sum of all the reactions a message generates (comments, likes, shares). Twitter, in turn, reports an impressions figure, which measures the number of pairs of eyes that have potentially seen the tweet.

Both are problematic as metrics. Twitter’s impressions figure tells the size of the largest possible audience given the reactions the tweet received, but nothing about its actual readers. Facebook’s “engagement”, in turn, is some kind of measure of interest, but in the end just a number with no qualitative content.

From purely quantitative metrics, however, it is difficult to say anything about the audiences or the quality of their interest. Many have probably followed Trump out of curiosity or horror – he has been quite a media phenomenon for at least the past year, in both traditional and social media. Many have surely also followed and shared Trump’s doings in order to marvel at his statements.

Based on the numbers, then, we cannot say anything about the interpretations or the reasons why people watch and click on a particular video or update.

For precisely this reason, success in social media is rather difficult to define. There are followers and likers, but we know nothing about their motives. Yet every click, even a critical one, inevitably contributes to an actor’s visibility, because social media publicity favors the popular and lifts messages and news that have stirred reactions into people’s news feeds.

It is also problematic that no social media platform is a representative sample of the population. In particular, the set of users active on a given platform is not representative but skewed, at least by political interest or technological skills. In the United States, for example, 68% of the adult population uses Facebook, but the majority of them are probably inactive.

Research has not been able to reliably demonstrate a connection between social media metrics and voting results. In the future the situation may improve as various text mining methods (e.g. sentiment analysis) become more common and everyday.

While waiting for that, it seems that this time social media was slightly more right than the polls, but for the reasons mentioned above I dare to claim that this says more about chance and about Trump as a hybrid media phenomenon. As I put it to Helsingin Sanomat: “In this election, Trump is a perfect clickbait machine and the media magnet of our time. He churns out slogans ready to be used as tweets and clickbait headlines, and therefore suits the needs of the media machinery extremely well.”

All in all, we learned that political reality and human behavior are more complex than social media analytics or opinion polls can capture. In a way that is also comforting, at least for a social scientist.

– –

P.S. The qualitative researcher in me believes that, in addition to the general media phenomenon, Trump’s social media success is explained by two things: skillful rhetorical devices and authenticity – or at least seemingly authentic communication that appeals to the people. The appeal of authenticity has been studied in a Goffmanian vein in the social psychology of the Internet, also in the context of politics and campaigning.

P.P.S. Trump and social media will be discussed tomorrow morning at least on Huomenta Suomi and YLE’s Ykkösaamu, featuring Mari Marttila from the Digivaalit project!

DCCS October meetup: topic models, data economy and computer vision

Last Friday our Rajapinta/DCCS meetup was organized for the third time. We were kindly hosted by Aleksi Kallio and CSC IT Center for Science. CSC is a non-profit company owned by the state of Finland and administered by the Ministry of Education and Culture. CSC maintains and develops the centralised IT infrastructure for research, libraries, archives, museums and culture. Their services have mostly been used by researchers in the natural and life sciences, but recently we have been discussing and collaborating with them in the social sciences, especially in computational social science. For instance, the data processing in Digivaalit 2015 was mostly done on CSC servers.

In the meetup we had three presentations, each followed by a lively discussion.

First, Matti Nelimarkka discussed topic models and the ways to employ them in the social sciences, and in particular the different ways of selecting the “k”, i.e. the number of topics you want to extract from the data.

Computer scientists use measures such as log-likelihood or perplexity, often estimated with a Gibbs sampler, to find the best value for k. Social scientists, however, often select a few candidate values of k, check and compare the results (i.e., the word lists) and, using some heuristics, pick the one that seems best.

Matti ran an experiment in which he asked participants to examine topic model results from a given dataset for k values between 10 and 30 and to select the k that seemed to fit best with the given research problem. Afterwards, the participants were interviewed about the process they used to select the k.

There were some general heuristics all participants seemed to use: first, they tried to avoid overlapping topics (if such existed, they cut down the number of topics), and second, they tried to avoid topics that seemed to include multiple themes (and increased the number of topics in such cases). Most importantly, each of the five participants selected a different k, with large variance.

Hence, the results show a sort of method opportunism in selecting the number of topics: depending on what people want to find from the data, they perceive it differently. Matti’s suggestion is that computational methods should be used to select the k.
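As an illustration of what such a computational selection could look like, the sketch below fits models for several values of k and compares them on a numeric criterion (a minimal gensim example with an invented toy corpus; per-word perplexity is one possible criterion, not necessarily the procedure discussed in the talk):

```python
# Fit LDA topic models for several values of k and compare them with a
# computational criterion instead of eyeballing the word lists.
from gensim import corpora, models

docs = [["economy", "tax", "budget", "vote"],
        ["football", "match", "goal", "league"],
        ["tax", "policy", "vote", "budget"],
        ["goal", "season", "league", "match"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

for k in (2, 3, 4):
    lda = models.LdaModel(corpus, num_topics=k, id2word=dictionary,
                          random_state=0, passes=10)
    # Higher (less negative) per-word bound suggests a better fit
    print(k, lda.log_perplexity(corpus))
```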

*

Next, Tuukka Lehtiniemi discussed the troublemakers of the data economy, based on a manuscript he is preparing. By troublemakers he refers to players who disrupt the market and gain ground by acting against the normal way of doing things. In ordinary business markets such actors would be Spotify, Uber, Igglo, or Onnibus – or, for commercial media, national broadcasting companies such as YLE.

But what is the conventional mode of the market in the data economy? The market is to a large extent defined by the large players known as “le GAFA”: Google, Amazon, Facebook and Apple. Their business is mostly based on datafication, which means turning social behaviour into quantifiable data (see e.g., Mayer-Schönberger & Cukier, 2013). Such data is born online within these services, based on the activities of the users. The markets that exist on top of this data are largely based on selling audience data to advertisers and various third-party data services. Tuukka, following Shoshana Zuboff’s thinking, calls this surveillance capitalism.

In his paper, Tuukka examines three potential alternatives to the surveillance model: two commercial startup initiatives (Meeco and Cozy Cloud) and a research-originated one (OpenPDS, developed at MIT). These cases are explored to identify the overarching features they strive to achieve in relation to the above questions. The new roles identified for users are data collector, intermediary of data between services, controller of data analysis, and source of subjective data.

A version of the related paper is available on the Oxford Internet Institute IPP conference site.

*

In the third presentation, Markus Koskela from CSC presented some recent advances in automated image analysis tools – or, as he neatly put it, analyzing the dark matter of the internet.

Automated image analysis is nowadays commonly done using machine learning and deep neural networks. A big leap forward was taken around 2012, made possible by, first, the availability of open visual data; second, the availability of computational resources; and third, some methodological advances. From a machine learning perspective there is nothing completely new, just a few simple tricks that improve visual analysis.

Nowadays lots of open source tools are available for visual analysis: code is available on GitHub, pre-trained networks are openly available, and there are several annotated datasets to use in analysis (e.g. ImageNet, Google Open Images). Markus recommends Keras (keras.io) as his favorite choice, and mentioned TensorFlow and Theano as other usable tools.
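To give a feel for how low the barrier has become, here is a minimal sketch of classifying a single image with an openly available pre-trained network in Keras (the image path ‘cat.jpg’ is a placeholder):

```python
# Classify one image with a pre-trained ImageNet network in Keras
# ('cat.jpg' is a placeholder path for a local image file).
import numpy as np
from keras.applications.resnet50 import (ResNet50, preprocess_input,
                                         decode_predictions)
from keras.preprocessing import image

model = ResNet50(weights="imagenet")       # openly available weights

img = image.load_img("cat.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0]) # top-3 labels with confidences
```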

As a final note of caution, Markus reminded us that researchers still have not solved what vision is actually about. A computer vision solution always works for a particular dataset or a particular task, but generalization is very difficult. As an example, he presented some amusing results from Google Research’s automated caption generator: the algorithm cannot tell the difference between a traffic sign covered with stickers and an open refrigerator if the light falls on the sign in a particular way (the same pictures are available in this TechCrunch article).

*

Next DCCS meetup will be held in Tampere on November 25th in connection with the Social Psychology Days – stay tuned!