Beyond Trolling: Malware-Induced Misperception Attacks on Polarized Facebook Discourse


Abstract

Social media trolling is a powerful tactic for manipulating public opinion on issues with a high moral component. Troll farms, as evidenced during the 2016 US presidential election, created fabricated content to provoke or silence people sharing their opinions on social media. In this paper, we introduce an alternative way of provoking or silencing social media discourse by manipulating how users perceive authentic content. This manipulation is performed by man-in-the-middle malware that covertly rearranges the linguistic content of an authentic social media post and comments. We call this attack Malware-Induced Misperception (MIM) because the goal is to socially engineer spiral-of-silence conditions on social media by inducing misperception. We conducted experimental tests in controlled settings (N = 311) where the malware covertly altered selected words in a Facebook post about the freedom of political expression on college campuses. The empirical results (1) confirm previous findings about the presence of the spiral-of-silence effect on social media; and (2) demonstrate that inducing misperception is an effective tactic for silencing or provoking targeted users on Facebook to express their opinion on a polarizing political issue.

I Introduction

Social media trolling has become a widely known phenomenon, reaching an organized dimension in 28 countries by 2017 [11]. Trolling refers to users who respond to social media posts with fabricated and often inflammatory posts and comments to get a rise out of other users [22]. Organized trolling campaigns usually target specific populations, attempting to sway the opinion of entire groups, for example, domestic voters. Over time, the coordinated "nudging" of public opinion has become systematized, from military units that experiment with psychological operations to strategic communication firms that take contracts from governments for social media campaigns aiming to induce misperception [7].

A malicious actor interested in trolling seeks to "nudge" opinions on polarizing or controversial issues discussed via social media, e.g. election campaigns, climate change, vaccination, immigration policy, reproductive health, and freedom of political expression. An overt strategy is to do what the Russian trolling army did in 2016: manufacture a large number of political trolling posts to rile up Americans [57], [11]. This is an arduous task, as it requires many people and resources to be successful (i.e. a bot network and a lot of fabricated content) [41]. There is also a risk that the social media administrators will remove any suspicious posts [1]. Malicious actors will therefore likely continue to search for covert alternatives to manipulate public opinion through social media in a more targeted fashion. One alternative is to keep using fabricated content and induce "information gerrymandering" [54]. This still requires a large network of people who need to create and strategically infuse posts and comments on social media. A more economical alternative is malware that acts as a man-in-the-middle in the exchange of online information and manipulates how authentic content is perceived by a targeted individual. The advantage of the malware is that it is platform-agnostic (i.e. it can work on Facebook, Twitter, or Reddit) and can be strategically packaged as a web browser extension or a third-party social media application for smartphones.

Studies on manipulating online information point out that induced misperceptions represent an effort by a malicious actor to "lead an individual towards making false or implausible interpretations of a set of true facts" [5]. In the same manner, this malware covertly swaps, rearranges, or removes words presented to an individual to induce an interpretation of a set of true facts that serves the objective of a malicious actor. Using malware to induce misperception is, to our knowledge, a zero-day social engineering attack because it allows the targeted individual to verify the authenticity of the online information, thus bypassing all conventional cues people use to detect "phishy" or fabricated content [14]. Like phishing, the malware employs the psychological principles of persuasion to obtain an individual's assets (e.g. system permissions), but not to damage local files or exfiltrate data [9]. Instead, the goal is to use the system permissions to covertly manipulate textual data in transit and induce an interpretation of legitimate content that is biased towards the objective of the malicious actor, e.g. to poach disgruntled workers or bias voters [60].

This paper introduces the concept of malware-induced misperception and reports a test of the attack on polarized discourse on Facebook. The goal was to investigate whether this malware can be used to engineer or disrupt the spiral-of-silence effect on social media, that is, to manipulate how users perceive an authentic Facebook post and comments instead of using any fake information or inflammatory content. The spiral-of-silence theory argues that individuals fear becoming socially isolated, and as a consequence, they constantly monitor the public opinion climate on mass media to determine whether the majority shares their own opinions or not [40]. If the individuals perceive that their own opinion is in the minority, they end up silencing themselves, especially when discussing polarizing or controversial issues. The theory, originally developed for face-to-face interpersonal communication, is also applicable in social media settings [34].

A sample of 311 participants was randomly assigned to a control and a treatment group. The participants in the control group were exposed to a legitimate Facebook post and comments in a web browser, while the participants in the treatment group saw a malware-manipulated version of the same Facebook post and comments. The discourse was on the polarizing issue of freedom of speech on college campuses [4], [42]. The malware was packaged as a web browser extension, a low-cost option that allowed controlled use only in laboratory settings (alternative packaging is also discussed in the paper) [39]. The results show that the malware could successfully engineer the spiral-of-silence effect for individuals on the far ends of the political spectrum. The results are in line with previous findings that people with divergent opinions "use Facebook as a forum to monitor the prevailing public opinion on important polarizing issues without expressing their own comments" [16], [30]. In the remainder of the paper, Section 2 elaborates the social engineering background of the MIM attack. Section 3 discusses the spiral-of-silence theory underpinning the MIM attacker's social engineering strategy when applied to polarizing discourse on Facebook. Section 4 covers the study design and Section 5 presents the empirical results. Section 6 discusses the implications of materializing malware-induced misperceptions beyond social media and ways to counter these attacks. Section 7 concludes the paper.

II Malware-Induced Misperception

II-A Concept

Conventional social engineering attacks target individuals' assets, e.g. passwords or system privileges. These assets enable social engineers to obtain unauthorized access so as to damage or exfiltrate confidential data. For this purpose, social engineers usually write various types of malware (e.g. adware, trojans, keyloggers, rootkits, etc.). The most common vector for malware delivery and installation is "phishing", i.e. an email or a text where the social engineers employ various principles of persuasion to covertly obtain the target individual's compliance to run the malware code on their machine [14]. Phishing campaigns can be massive and target the largest number of individuals possible, or they can target specific and well-researched individual(s) [23]. Social engineering attacks are notoriously successful, and considerable effort is invested in detecting suspicious content as well as in training individuals to spot both massive and targeted or "spear" phishing emails [2], [27].

Because phishing attacks are low-cost/high-reward, social engineers can try different persuasion routes and choose how to utilize the target individual's assets. In this paper we introduce a social engineering attack utilizing malware that targets the integrity, but not the confidentiality, of the target individual's data. The attack is executed in two stages. First, as in conventional phishing, the target individual is persuaded to install a seemingly benign software plug-in, that is, to yield their system privileges for manipulating textual data. Second, these privileges are used to covertly manipulate the linguistic content of the online communication the target individual exchanges through a browser or an email client. The goal of this covert manipulation, in contrast to conventional phishing, is to induce misperception about an event, a news report, a communicating party, or a communication context [5]. Such an attack, to our knowledge, has not yet surfaced in the cyber realm. We therefore named it a Malware-Induced Misperception (MIM) attack. The covert linguistic manipulation of online communication is specific to a target individual (e.g. linguistic style, pragmatics, cultural norms, etc.); therefore, the MIM attack is more feasible in a spear phishing form. The attack is low-cost in that the malware could be packaged either as a browser extension, an email client "add-in" (e.g. Outlook), or perhaps, in the future, an entirely new application. The high reward of the attack, if successful, is the opportunity to distort the target's mental picture or map of reality and establish psychological domination.

Distorting an individual's map of reality by inducing misperception has become a significant problem on social media over the past few years. Malicious actors like trolls, sock puppets, and alternative media flooded Facebook and Twitter prior to the US presidential election with rumors, fake news, and inflammatory comments with the objective of biasing people and swaying their votes [53]. After these efforts were curbed by Facebook and Twitter, malicious actors proceeded with a strategic infusion of fabricated content for particular events and towards well-researched individuals in a tactic called "information gerrymandering" [54]. The idea is to manufacture echo chambers to create a (mis)perception that "most of the others were going to vote for the other party" (an improved version of Cambridge Analytica's strategy of targeting voters in swing districts [20]). In all of these cases, the malicious actors relied on a considerable number of people who fabricated these posts or relentlessly posted inflammatory comments on social media.

The MIM attack is inspired by these misperception campaigns but takes advantage of social engineering tactics. The malware removes the need for constantly fabricating content or infusing inflammatory social media posts and comments. The malware also eliminates the worry that the social media platform will detect a misperception campaign. Instead, the misperception takes place on the local machine or smartphone, where the malware covertly rearranges the words and the "tone" of an authentic social media post while the targeted individual is reading it in real time. Studies on manipulating online information point out that induced misperceptions represent an effort of a malicious actor to "lead an individual towards making false or implausible interpretations of a set of true facts" [5]. By targeting authentic content, the malware allows the targeted individual to verify the facts and the credibility of a source, thus bypassing all conventional cues people use to detect "phishy" content [14]. The goal of the malware is to covertly manipulate the data in transit and induce an interpretation of authentic content that is biased towards the objective of the malicious actor, e.g. to bias voters, poach a high-profile target, or introduce fear, doubt, and uncertainty.

II-B Implementation

This malware can be packaged as a browser extension, an email client "add-in" (e.g. Outlook), or an entirely new application. The malware is usually disguised as something seemingly benign (e.g. an extension for accessibility support, an Outlook add-in for managing email threads, or a lightweight, power-saving mobile app). This packaging/disguise is preferred because the malware requires text manipulation permissions that will later be leveraged for the MIM attack [59]. Developing extensions, add-ins, and apps is free, and benign software can pass all the security checks before publishing [39]. For example, a browser extension variant of the malware can disguise the misperception-inducing logic and pass the security checks by posing as an "accessibility (a11y) extension" that claims the rewording is done to help non-native English speakers make sense of English slang on social media [26]. An email add-in variant of the malware can similarly pass the security checks on the grounds of grouping and classifying social media email reports for better management through an Outlook client [58]. Equally, the malware could be packaged as a third-party smartphone app that, for example, claims to reduce battery usage by summarizing the content of social media posts [50].

The coordinated effort to sway people's opinions about polarizing issues on social media makes a compelling case for the MIM attack to be implemented either as a browser extension or as a third-party social media app. For the purpose of our study we developed the malware as a browser extension in JavaScript, an economical proof-of-concept variant. The goal was to investigate whether the malware can induce the spiral-of-silence effect on social media, that is, influence a target individual to divulge a comment or personal opinion on social media that they otherwise wouldn't post, fearing social isolation. We conducted a pilot study with 15 volunteer participants in which we tested the malware's potential to induce misperception with a simple Facebook post. All participants were 18 years or older, regularly read and commented on Facebook posts through a web browser, and had prior knowledge of social engineering, phishing, and past social media trolling, misperception, and fake news campaigns.

The preliminary question was to gauge whether participants are open to using browser extensions for standard utilities, for example an ad-blocker or a sticky notes extension like "Stickies" [59]. Most of them responded that they already use various extensions that improve their productivity and install them almost immediately after they download or start using a web browser on their computers. Some of the participants were aware that browser extensions could potentially contain spyware and affect their privacy or steal personal information like remembered passwords or credit card numbers, and they said they look for legitimate extensions only in the browser application stores. Some of them were aware of extensions that manipulate content, like the Facebook Demetricator, which hides the number of likes on Facebook posts to enable a more immersive interaction with the social media platform [21]. None of them were aware of browser extensions that covertly rearrange text before it is rendered in a browser. This was important feedback suggesting that it is plausible for a malicious actor to employ legitimacy-by-design (seeming legitimate both in visual design and in what the user expects to see from a legitimate application) to persuade the target user to install a seemingly benign extension in the first place [39].

The pilot participants first encountered an authentic Facebook post, shown in Figure 1, and reported that they were not inclined to comment on it, explaining that the post fit the campaign narrative of Senator Sanders for the forthcoming 2020 US elections. The malware was then used to covertly swap the position of the words "Commander" and "Organizer," as shown in Figure 2, to induce the misperception that Senator Sanders is shifting his campaign strategy from peaceful to militaristic [48]. Noticing that the accent of the post is on "Commander in Chief" instead of "Organizer in Chief," the participants felt compelled to express concerns about Senator Sanders' true intentions as a potential future president and to ask questions about this shift through comments.

Fig. 1: MIM extension ”off”
Fig. 2: MIM extension ”on”

An important feature of the malware is that it allowed the participants, trained in spotting "phishy" and inflammatory content, to verify the post (i.e. this is a valid campaign message by Senator Sanders) and to verify the credibility of the sender (this is the official Facebook page of Senator Sanders [46]). The MIM attack in the pilot study successfully disrupted the spiral-of-silence effect by inducing a misperception about the next steps of Senator Sanders's presidential campaign. This motivated us to test the potential of the MIM attack to socially engineer similar effects with a larger sample of participants on a similar, though more implicit, polarizing political discourse on Facebook.

II-C Threat Model

The MIM advantage, from the perspective of an attacker, is that the relationship between the target user and a web resource, or another person, can be manipulated without alerting any of the involved parties. The malware can be employed, for example, to influence a target user to divulge a comment or personal opinion on social media that they otherwise wouldn't post, fearing social isolation. MIM can be categorized as a threat where an adversarial group or nation-state (threat source) conducts externally-based electronic communication modification, i.e. man-in-the-middle attacks. MIM is a complex and micro-targeted attack, and it requires a sophisticated level of expertise and a well-resourced adversary [25]. The intent behind launching a MIM attack can be a low-intensity trolling campaign, provoking (or silencing) comments on social media for posts with a strong moral component. A target for MIM can be any public person, such as a political party leader, a celebrity, or a social media influencer, but MIM can equally target disgruntled employees, a spouse, or friends. The predisposing conditions for a successful MIM attack are: (1) the targeted user installs software that can be dynamically modified to manipulate text (a web browser extension in our case); (2) the targeted user accesses social media regularly, Facebook in particular; and (3) the targeted user is interested in polarizing issues extensively discussed on social media.

II-D Linguistic Manipulation Strategies

The malware works with a string array of "valence words" and applies word replacement logic if a target word is detected on a Facebook page. The malware parses the HTML document with a findMatch() function to detect a potential word match. If a match is detected, findMatch() returns the opposite-valence target word for the detected source word. A textSwap() function then replaces the occurrences of the initially detected word based on a configurable logic (all occurrences, only the first occurrence, or only occurrences in the comments section of a Facebook page). This is the simplest, low-cost, low-complexity version of the malware. A MIM attacker can implement more complex logic where the linguistic manipulation takes place only in certain parts of the Facebook content or only in Facebook posts reporting on a specific person or issue, for example, only campaign posts by Senator Sanders but not those of the other presidential candidates. The string array of "valence words" need not be predefined; an attacker could use natural language processing to analyze authentic Facebook content and adapt the linguistic rearrangement that makes the most sense in the context of the target individual's Facebook diet [redacted]. Using a Markov chain, a model can be trained to choose replacement words based on an identified corpus of Facebook content. This natural language processing strategy was previously used by other researchers to generate a series of quotes that sound like President Trump's State of the Union [12].
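To make the mechanics concrete, the following is a minimal content-script sketch of the findMatch()/textSwap() logic described at the beginning of this subsection. It is an illustrative proof-of-concept only: the word pairs, the scope flag, capitalization handling, and the decision to walk every text node are simplifications and placeholders, not the exact code used in the study.

// Minimal content-script sketch of the findMatch()/textSwap() logic described
// above (illustrative placeholders, not the study's exact code).
const VALENCE_PAIRS = { "liberal": "conservative", "far-left": "far-right" };
const SCOPE = "all"; // "all" | "first" | "comments-only"

// Return the opposite-valence replacement for a word, or null if there is no match.
function findMatch(word) {
  const key = word.toLowerCase();
  return Object.prototype.hasOwnProperty.call(VALENCE_PAIRS, key)
    ? VALENCE_PAIRS[key]
    : null;
}

// Walk the text nodes of the rendered page and rewrite matching words in place.
// Capitalization handling is omitted for brevity.
function textSwap(root) {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  let replacedOnce = false;
  while (walker.nextNode()) {
    const node = walker.currentNode;
    node.nodeValue = node.nodeValue.replace(/[\w-]+/g, (word) => {
      const replacement = findMatch(word);
      if (!replacement) return word;
      if (SCOPE === "first" && replacedOnce) return word;
      replacedOnce = true;
      return replacement;
    });
  }
}

// Run on the rendered page; a "comments-only" scope would instead pass the
// comment container element (selector omitted here) as the root.
textSwap(document.body);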

II-E Social Media Vector

The MIM attack differentiates itself from targeted ad campaigns on social media like the ones produced by Cambridge Analytica or requested by the UK Labour Party leaders to save on campaign costs and target only the party leader Jeremy Corbyn's Facebook account [3], [20]. The attack is also distinct from "information gerrymandering", where content is infused strategically in a social network to exploit "homophily", people's natural tendency to surround themselves with others who share their perspectives and opinions about the world (the "echo chambers" effect) [18]. While the aforementioned tactics aim to manipulate the perception of social media content, MIM requires no access to external user data, nor does it use ads or fabricated comments aimed at reinforcing an echo chamber. Instead, MIM works directly on the social media post that holds the immediate attention of a targeted user. This is beneficial for micro-targeting individuals without worrying that the social media platform might detect the attack.

As with the early period of political trolling, fake news, and alternative media, this creates a situation where people are left to resist and reject suspicious content by themselves. However, the malware could plausibly evade this detection because it preserves the factual structure of the social media content. Even if someone is aware and carefully looking for inflammatory content or fake news, the attack removes the grounds for such suspicion by working on authentic content [51]. In other words, the attack covertly "nudges" a targeted individual to make interpretations of a set of true facts that align with the objective of the malicious actor [5]. The MIM attack also has the potential to be used for trolling and spreading rumors, if the replacement words are aggressive or produce misinformation. Nonetheless, the malware's primary goal of inducing misperception is the focus of this study.

III Spiral-of-Silence

III-A Theoretical Background

Spiral-of-silence theory, developed by Noelle-Neumann, argues that people use their media environment as a barometer for the prevailing climate of opinion on controversial issues [40]. Printed newspapers and TV, and now the Internet and social media, operate as a social monitor by alerting the public about the perceived appropriateness of publicly expressing certain opinions. This is the case because society threatens with isolation those individuals who violate the societal consensus on values and goals. This consensus, expressed through the majority opinion in the media, influences how people form their individual opinion and action. Individuals whose opinions do not coincide with the majority opinion, as they perceive it, tend to silence their opinions, fearing social isolation [49]. This silence effect results from one’s perceptions of public opinion climates and susceptibility to social pressure.

Numerous public opinion studies have applied spiral-of-silence theory to empirical examination [49], [32], [34]. The primary dependent variable for the predisposing spiral-of-silence conditions in most of them is the willingness to express one's opinion. As the original theory posits, human behavior, particularly the willingness to express one's opinion, is heavily directed by a fear of isolation, which makes sanctions such as the denial of sympathy very powerful forms of influence. However, significant variation in these predisposing conditions, and thus in the effects of the spiral-of-silence, prompted a redefinition towards capturing one's willingness to self-censor, defined as "the withholding of one's true opinion from an audience perceived to disagree with that opinion" [24]. In social discourse, individuals do not simply stay silent; instead, they look for ways to avoid expressing their opinion through other methods. The self-censorship predisposing conditions are expressed through four opinion expression strategies individuals resort to when discussing issues with a high moral component: (1) comment on the issue; (2) read or listen about the issue but choose not to comment; (3) ignore it; (4) tell someone else about it offline.

III-B Spiral-of-Silence on Social Media

The spiral-of-silence theory was developed for face-to-face communication and considers printed and televised mass media content. The Internet has changed the way people communicate and receive mass media: it provides anonymity and at the same time affords individuals access to diverse media content, autonomy, selectivity, and social media interactivity [17]. This fundamental change in interpersonal communication and media exposure prompted researchers to test the spiral-of-silence theory in the context of social media. Since social media interactions are anchored in real-world relationships, they are still vulnerable to fears of social isolation. Individuals online may express their opinions in ways that may "result in appearing unpopular or otherwise socially undesirable within the social media community" [37].

Our research builds upon existing studies that point to the validity of the spiral-of-silence effect on social media. A study examining how social media is used to express opinions on the issue of LGBT+ tolerance found that the spiral-of-silence phenomenon is present on Facebook [16]. Testing how perceptions of surveillance contribute to an online spiral-of-silence in the wake of Edward Snowden's revelations, the authors in [55] found that the government's online surveillance programs may threaten the disclosure of minority views and contribute to the reinforcement of majority opinion. A study of the discussion on nuclear power generation showed that the spiral-of-silence phenomenon exists on Twitter, too. Confirming the tenability of this theory in a social media context, a meta-analysis of the spiral-of-silence demonstrated that the relationship between opinion climate perception and opinion expression is equally strong and robust on social media as it is in face-to-face communication [34].

III-C Spiral-of-Silence in Social Media on Political Issues

People are regularly exposed to political content on social media. A Pew research report indicates that users on social media are more exposed to political perspectives dissimilar from their own than in face-to-face encounters [45]. Disagreements between users on social media about political topics are very common. For example, 73% of the surveyed users reported having friends with divergent political opinions. This is in line with the notion that high levels of sociality diversify political discourse on social media platforms [8].

Suspecting that social media platforms may facilitate the spiral-of-silence phenomenon on political issues, the authors in [16] revealed that "encountering agreeable political content predicts speaking out, while encountering disagreeable postings stifles opinion expression." The authors in [30] found that the fear of isolation from offline contacts increases the willingness to self-censor when it comes to posting political comments on Facebook. A recent study further confirmed the opinion congruence-based mechanism argued by the spiral-of-silence theory when expressing political opinions on Facebook [33]. The authors in [15] confirmed these findings when it comes to commenting on police discrimination on Facebook. Most recently, the authors in [29] assessed the spiral-of-silence in the context of the 2016 US presidential election. Their analysis suggests that the more people perceived public opinion to support Hillary Clinton, the less likely they were to share a divergent opinion. The same phenomenon was particularly present for Donald Trump on Facebook: because Donald Trump was highly unfavorable among Facebook users, a spiral-of-silence was induced among those who might have supported him in reality. These studies suggest the spiral-of-silence occurs when Facebook users discuss polarizing political issues.

IV Socially Engineering a Spiral-of-Silence on Facebook

IV-A Overview

The spiral-of-silence theory posits that humans fear isolation, which motivates us to observe our social environment and mass media to determine the opinion climate on issues with a strong moral component. The result of this observation influences our opinion expression in public, both interpersonally and on social media. The studies testing the spiral-of-silence tenability assume that the public opinion climate is assessed from legitimate sources of information. What if this assumption is violated? The Cambridge Analytica incident provides reasonable grounds to believe that a malicious actor might resort to manipulating how one derives the public opinion climate about a polarizing political issue, for example, a presidential election or a referendum to leave the EU.

One option is trolling, a tactic where a malicious actor affects the public opinion climate by posting provoking and inflammatory messages and/or comments. Although this was effective in the past, social media platforms nowadays are taking active measures to curb trolling and remove suspicious user accounts and content. Another option for a malicious actor is to use MIM to covertly manipulate the public opinion climate for a targeted set of social media users. Instead of infusing inflammatory content, the idea is to make authentic posts and comments look "polarized." The MIM browser extension described above can be used for this purpose to alter valenced words in the comments section before they are presented in the targeted user's browser. The goal is to "socially engineer" the spiral-of-silence effect. In other words, the malware either induces or eliminates the fear of isolation and with that makes a target user more or less willing to self-censor their opinion. This motivated us to investigate whether a MIM attack affects one's perception of the public opinion climate.

IV-B Research Questions and Hypotheses

The study utilized a social media post about freedom of political expression on college campuses. This topic was chosen following President Donald Trump's executive order to protect freedom of speech on college campuses [56]. Expressing political opinions on college campuses is a polarizing issue that has generated substantial media coverage and induced heated discussion both in person and online [4], [42]. We opted for this polarizing topic for the Facebook post to eliminate any a priori bias from a trending topic that a participant might have seen before. Another objective was to capture the initial reaction of the participants to a "new" post that was based on real, authentic events. The study also focused on one Facebook post with a limited number of comments, instead of multiple posts, to mimic a realistic setting where users quickly skim a piece of online text, e.g. a "new" Facebook post [13].

The original scenario, shown in Figure 3, included an authentic Facebook post about a report on political bullying at a higher education institution, followed by authentic conservative-leaning comments. The comments were from users with generic aliases and removed profile pictures to eliminate any potential bias on the grounds of popular trolling accounts. We used the malware to manipulate the comments and make them appear liberal-leaning in the MIM scenario shown in Figure 4. The malware replaced the word "liberal" with "conservative," "far-left" with "far-right," "over-parented" with "under-parented," "more" with "less," and "Trump" with "Alexandria Ocasio-Cortez" (we took a surface polar opposite approach following the reports of both President Trump and Congresswoman Ocasio-Cortez blocking users from their social media accounts [36], [47]).
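Expressed as a word-pair map for a content script like the sketch in Section II-D, the substitutions in the MIM scenario would look roughly as follows (a reconstruction from the replacements listed above; the study's actual configuration is not reproduced here):

// Word-pair map reconstructed from the substitutions described above
// (illustrative only).
const MIM_SCENARIO_PAIRS = {
  "liberal": "conservative",
  "far-left": "far-right",
  "over-parented": "under-parented",
  "more": "less",
  "Trump": "Alexandria Ocasio-Cortez"
};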

Participants were randomly assigned to a control group (original scenario) and a treatment group (MIM scenario). The collected data were used to investigate the possibility of "socially engineering" the spiral-of-silence effect for Facebook users. Because the four response strategies proposed in [24] indicate the predisposing spiral-of-silence conditions on social media, we used them as the primary dependent variable to explore whether the malware, by inducing misperceptions, can covertly nudge people to choose a particular one:

Research Question 1: How does the manipulated Facebook post on the freedom of political expression on college campuses influence the utilization of different response strategies as predisposing spiral-of-silence conditions?

To test the existence of the spiral-of-silence effect on Facebook, based on the predisposing conditions and using the well-established willingness to self-censor measure [16], [33], we proposed the following hypothesis:

Hypothesis 1: The willingness to self-censor will be negatively related to publicly expressing an opinion in both the original and the MIM opinion climates (likelihood of commenting on the Facebook post).

Because of the nature of the topic, there is reason to suspect that the frequency with which one follows political news and uses social media may play an important role in deciding whether to comment or not. Therefore, we added the following hypotheses to our tests:

Hypothesis 2a: The frequency of following political news will be a strong predictor of the utilization of different opinion expression strategies on Facebook in both the original and the MIM scenarios.

Hypothesis 2b: The frequency of use of social media will be a strong predictor of the utilization of different opinion expression strategies on Facebook in both the original and the MIM scenarios.

Because previous research on the spiral-of-silence in social media explored the effects of opinion strength, such as attitude certainty (i.e. the degree to which one feels their own opinion is correct [35]) and perceived issue importance (i.e. how important the freedom of speech on college campuses is to the general public [38]), we asked:

Research Question 2: How will attitude certainty influence the utilization of different response strategies on Facebook when discussing the freedom of political expression on college campuses?

Research Question 3: How will the perception of issue importance influence the utilization of different response strategies on Facebook when discussing the freedom of political expression on college campuses?

There is also reason to suspect that the perceived opinion climate of one's friends and family and of the nation may influence the willingness to express opinions, as found in some instances [35]. The following hypotheses tested these claims:

Hypothesis 3a: The perceived opinion climate among friends and family will be a strong predictor of the utilization of different opinion expression strategies on Facebook when discussing the freedom of political expression on college campuses.

Hypothesis 3b: The perceived opinion climate of the nation will be a strong predictor of the utilization of different opinion expression strategies on Facebook when discussing the freedom of political expression on college campuses.

Fig. 3: The original Facebook post and comments.
Fig. 4: The MIM Facebook post and comments.

V Results

Following an IRB approval, data were obtained through an online survey (N = 311), fielded via Prolific, a crowd-sourced participant pool [43]. Due to the choice of topic for this study, we recruited participants in the age bracket between 18 and 34 who are either college students or have recently earned a bachelor's degree. The spiral-of-silence theory assumes that the topic is of personal relevance for an individual to engage with it; therefore, the selection criteria required participants to be mainly college-age [39]. Participants consisted of 55% cis-female (N = 171), 42.1% cis-male (N = 131), 0.3% transgender female (N = 1), 1.3% transgender male (N = 4), 1.0% gender variant/non-conforming (N = 3), and 0.3% preferring not to answer (N = 1). Participants were randomly assigned to either the original or the MIM scenario and completed a questionnaire. The questionnaire asked the participants about their response to the Facebook post and comments, their social media use, their following of political news, and their opinions, attitudes, and perceived issue importance. Upon completion, participants were debriefed and rewarded with a small monetary prize.

V-A Predisposing Spiral-of-Silence Conditions on Facebook

Research Question 1 explored how a manipulated Facebook post on the topic of freedom of political expression on college campuses influences the choice of a response strategy. As shown in Tables 1 and 2, participants in the MIM scenario were more likely to comment on the Facebook post (p = .031) and more likely to tell someone else about it offline (p = .038) compared to the participants in the original scenario. The calculated effect size is 0.3 (small). Participants didn't show any difference between the scenarios on the other two response strategies. As suspected, the results demonstrate that the malware is capable of engineering the spiral-of-silence effect on Facebook. This is an important finding that confirms the existence of the spiral-of-silence effect on a polarizing political issue on Facebook for the younger-leaning participant sample in our study. By inducing a misperception that the opinion climate is liberal-leaning, the malware eliminated the fear of isolation present in the original, conservative-leaning scenario and encouraged the participants to express their opinion, both online and offline.

                 Comment   Read, not Comment   Ignore   Tell Offline
Mann-Whitney U   10501     11892               10474    10914
Z                2.155     .259                1.411    2.072
Sig.             .031*     .796                .158     .038*
*p < .05, **p < .01
TABLE I: Mann-Whitney U Test for the Two Scenarios as a Grouping Variable.
           Comment              Tell Offline
Scenario   MIM      Original    MIM      Original
Mean       2.42     2.08        3.88     3.95
Median     2        1           4.0      4.5
STD        1.732    1.64        1.810    1.857
TABLE II: Descriptive Statistics for the Significant Response Strategies.

This is an expected outcome in the context of political discourse, i.e. the malware covertly created the necessary conditions that allowed the participants in the MIM scenario to succumb to the characteristic 'echo chamber' effect [18]. Seeing a favorable narrative in the MIM scenario reinforced the confirmation bias that helped account for participants' decisions about whether to spread content both online and offline, as the formative action that leads towards preferential interaction based on confirming claims in the comments section of the Facebook post [44]. The preferential interactions, based on the malware-induced misperception, initiate a feedback loop that continuously amplifies ideologically orthodox comments and posts and drowns out any opposing views, ultimately resulting in the spiral-of-silence effect [40].

V-B Socially Engineered Spiral-of-Silence on Facebook

Hypothesis 1 claimed that the willingness to self-censor, as a composite measure, will be negatively related to the likelihood of commenting on the Facebook post in both scenarios. Based on Table 3, the more participants were willing to self-censor, the less likely they were to publicly comment on the Facebook post (original condition β = -.228, p < .01; MIM condition β = -.334, p < .01), confirming the prediction in Hypothesis 1. These results demonstrate the existence of the spiral-of-silence effect on Facebook, for the particular issue investigated in our study, in both the original and the MIM scenario, showing the capability of the malware to induce a misperception of the public opinion climate without raising suspicion. This is a very important finding because it demonstrates the capability of the malware to socially engineer the spiral-of-silence effect on social media, Facebook in particular. In addition, the results confirm previous evidence that individuals with high levels of willingness to self-censor use Facebook as a forum to monitor public opinion on important social and political issues when expressing their opinion offline [16], [30], [33].

                               Original             MIM
                               B         Std. β     B         Std. β
Demographics
  Age                          .026      .001       .349      .036
  Gender                       .208      .088       -.167     -.092
  Incr. R² (%)                 3.5                  4.7
Social Media and Politics
  Social Media Use             .194      .097       -.042     -.016
  Following Politics           -2.45*    -.169*     -.276*    -.190*
  Incr. R² (%)                 5.2**                5.9*
Focal Variables
  Willingness to self-censor   -.512**   -.228**    -.831**   -.334**
  Attitude certainty           .036      .034       .001      .001
  Issue importance             -.241     -.131      .145      .072
  Congruence friends & family  .002      .034       .002      .033
  Congruence nation            .000      .004       .004      .048
  Incr. R² (%)                 6.8*                 11.3*
  Total R² (%)                 15**                 21.9**
*p < .05, **p < .01
TABLE III: Hierarchical Regression Predicting the Likelihood of Commenting on the Facebook Post.

Hypothesis 2a claimed that the frequency with which one follows political news will be a strong predictor of the utilization of different opinion expression strategies. The frequency of following political news was measured by asking, "How closely do you follow political news?" (1 = Never to 5 = Always). Based on Tables 3-6, the more frequently one follows political news:

  • the less likely they are to comment on the Facebook post in both scenarios (original scenario β = -.169, p < .05; MIM scenario β = -.190, p < .05)

  • the less likely they are to read but not comment on the Facebook post in both scenarios (original scenario β = -.277, p < .01; MIM scenario β = -.194, p < .05)

  • the more likely they are to ignore the Facebook post in both scenarios (original scenario β = .390, p < .01; MIM scenario β = .353, p < .01)

  • the more likely they are to tell someone about the Facebook post, but only in the original scenario (β = .142, p < .01)

These results shed further light on the capabilities of the malware and the possibility of profiling future MIM targets. The stronger negative relationship between the frequency of following political news and the first response strategy (Table 3) in the MIM scenario, compared to the weaker relationships for the other response strategies (Tables 4-6), indicates that the primary targets for MIM attacks should be individuals who are interested in daily politics but remain largely "undecided." This is a well-known fact that was exploited in political campaigning well before social media became a factor in inducing voter bias [6]. The results also indicate that the MIM attack is not simply an alternative to trolling; it is a much more powerful tool for influencing outcomes. Social media trolls usually target individuals who follow political news with high frequency; MIM, on the other hand, allows for targeting individuals without a particular pattern of daily check-ups on the public opinion climate.

Hypothesis 2b claimed that the frequency with which one uses social media will be a strong predictor of the utilization of different opinion expression strategies. The frequency of using social media was measured by asking, "How often do you use social media?" (1 = Never to 5 = Several Times a Day). Based on Tables 3-6, the more frequently one uses social media:

  • the more likely they are to read but not comment on the Facebook post, but only in the original scenario (β = .352, p < .01)

  • the less likely they are to ignore the Facebook post, but only in the original scenario (β = -.214, p < .01)

The frequency of social media use proved not to be a decisive predictor of speaking up or staying silent in the MIM scenario. The same can be concluded for the original scenario, given that significance was achieved only for the response strategies of ignoring or simply reading the post and comments. Seeing this result from a profiling perspective, as discussed before, the MIM attackers need not worry about how frequently one uses social media, but rather for what purpose. This uncovers another utility of the MIM attack: it can be used, in the same fashion as trolling, if the attackers choose to alter the factual integrity of the Facebook post and make it look more provoking or inflammatory to the target users.

                               Original             MIM
                               B         Std. β     B         Std. β
Demographics
  Age                          2.137     .104       1.99      .210
  Gender                       -.299     -.122      .095      .053
  Incr. R² (%)                 0.1                  0.7
Social Media and Politics
  Social Media Use             .677**    .352**     .291      .11
  Following Politics           -.388**   -.277**    -.286*    -.194*
  Incr. R² (%)                 21.0**               5.9*
Focal Variables
  Willingness to self-censor   -.012     -.005      .178      .070
  Attitude certainty           .134      .128       .018      .015
  Issue importance             -.108     -.061      .239      .117
  Congruence friends & family  .010      .162       .000      -.004
  Congruence nation            -.007     -.098      -.001     -.008
  Incr. R² (%)                 2.5                  1.9
  Total R² (%)                 23.6                 8.5
*p < .05, **p < .01
TABLE IV: Hierarchical Regression Predicting the Likelihood of Reading but not Commenting on the Facebook Post.
                               Original             MIM
                               B         Std. β     B         Std. β
Demographics
  Age                          -1.05     -.005      .034      .003
  Gender                       -.025     -.009      .242      .127
  Incr. R² (%)                 0.3                  0.9
Social Media and Politics
  Social Media Use             -.474**   -.214**    -.114     -.041
  Following Politics           .663**    .390**     .546**    .353**
  Incr. R² (%)                 19.2**               12.4**
Focal Variables
  Willingness to self-censor   -.012     -.005      .330      .125
  Attitude certainty           .194      .161       -.068     -.056
  Issue importance             -.122     -.059      .208      .097
  Congruence friends & family  -.003     -.041      .002      .037
  Congruence nation            -.002     -.024      -.004     -.043
  Incr. R² (%)                 2.4                  2.8
  Total R² (%)                 29.1                 16.1
*p < .05, **p < .01
TABLE V: Hierarchical Regression Predicting the Likelihood of Ignoring the Facebook Post.

The versatility of the MIM attack is further corroborated by the results of the tests of Hypotheses 3a and 3b shown in Tables 3-6. The claims that the perceived opinion climate among friends and family and among the nation, respectively, will be strong predictors of the utilization of different opinion expression strategies were unsupported in our particular case. Similarly, we did not find evidence that attitude certainty (Research Question 2) and perceived issue importance (Research Question 3) influence the utilization of the response strategies. Seeing this result from a profiling perspective, the MIM attackers need not worry about what the target user talks about with their friends and family, or whether the user believes the issue is important to the general public. Looking back to the findings from Research Question 1, these results confirm that the MIM attack is only concerned with the search for confirming claims in the comments section of the Facebook post (the 'echo chamber' effect). This means that it is sufficient for the malware to induce a misperception about the "majority" opinion climate, without considering any other factors, in order to socially engineer the spiral-of-silence effect on social media. The overall findings make a compelling case for a resourceful actor, interested in an alternative to trolling, to invest in developing and disseminating misperception-inducing malware.

                               Original             MIM
                               B         Std. β     B         Std. β
Demographics
  Age                          2.607     .112       -.446     -.043
  Gender                       -.092     -.033      -.161     -.043
  Incr. R² (%)                 1.9                  0.5
Social Media and Politics
  Social Media Use             .254      .112       .026      .009
  Following Politics           -.412**   .142**     -.273     -.171
  Incr. R² (%)                 9.2**                3.3
Focal Variables
  Willingness to self-censor   .279      .110       .126      .046
  Attitude certainty           -.066     -.054      .113      .090
  Issue importance             .230      .110       .107      .048
  Congruence friends & family  -.004     -.062      -.006     -.092
  Congruence nation            .013      .167       -.004     -.040
  Incr. R² (%)                 3.4                  2.1
  Total R² (%)                 14.4                 5.9
*p < .05, **p < .01
TABLE VI: Hierarchical Regression Predicting the Likelihood of Telling Someone Else Offline about the Facebook Post.

VI Discussion

This study, to our knowledge, is the first to test the possibility of socially engineering or disrupting the spiral-of-silence on social media by employing malware-induced misperception in a polarized discourse on Facebook. Previous studies exploring the spiral-of-silence effect assumed that individuals' perception of public opinion is based on media information from authentic and credible sources. In our study, we used malware to induce misperception by manipulating the linguistic formatting of authentic social media information, a post and comments discussing a polarizing political issue. The Cambridge Analytica scandal and the alleged Russian meddling in the 2016 elections provided an additional impetus for the test in order to scope the potential strategies for political influence before the 2020 election year.

Our initial tests demonstrate that malware could successfully induce a misperception about the public opinion climate gauged from people's interaction on social media. In our study, the malware covertly manipulated words in the conservative-leaning comments section of a Facebook post to make them appear liberal-leaning, and with that, created a perception for our liberal-leaning sample that the opinion climate was preferential to them. Seeing a favorable narrative, participants took a formative action towards sharing their opinion both online and offline. Our further analysis demonstrated that the misperception induced by the malware was sufficient to socially engineer the spiral-of-silence effect. The preferential interaction of most participants was to talk about the Facebook post offline instead of online, which initiates a feedback loop that continuously amplifies ideologically liberal comments and posts and drowns out any opposing views, ultimately resulting in the spiral-of-silence effect.

In other words, our preliminary results show that the malware disrupted the predisposing spiral-of-silence conditions to nudge participants to succumb to the 'echo chamber' effect and, with that, avoid sharing their opinion publicly online. The findings of the study support the claim that "engaging in opinion expression to someone offline removes the inherent risks associated with expressing opinions in a public online forum composed of people one knows in real life" [16], [33]. This is an important notion from a political influence perspective because it confirms the findings that "social media users, despite being reluctant to publicly comment on the post, are actively engaged in this environment through observation" [16], [30].

The MIM attack works in a highly targeted fashion and has a reduced reach compared to other forms of online influence like trolling. Target profiling, then, is more important to a MIM attacker, and we also conducted an analysis to see the profile of targets most likely to fall victim to a MIM attack. Our analysis suggests that the most likely victims of MIM attacks are people who follow political news but remain generally undecided on most polarizing issues on social media. Demographic aspects like age, gender, and social media use were shown in our analysis to be irrelevant factors. It also does not matter to MIM attackers whether a target user's opinion is congruent with the public opinion. Victims of the MIM attack can be anyone, regardless of their attitude certainty or perceived issue importance regarding the particular polarizing issue of freedom of speech on college campuses.

VI-A Implications

The malware, as demonstrated, has the potential to "nudge" a target user to focus on the opinion climate rather than on assessing whether a Facebook post and comments are intended as trolling or rumors. The MIM attack vector, in other words, is not aimed at the social media platform but rather at a user or group of users of interest. This eliminates the constraint that the platform administrators will remove suspicious content and places the burden of defense on the user side. MIM can be used to "socially engineer" a targeted user to break out of the spiral-of-silence and express their opinion on a topic that, under normal conditions, they would choose not to voice even offline. The alternative outcome is also possible: silencing users on topics on which they would usually choose to express themselves on social media. For posts with public comments, the MIM attack can work with minimal to no adaptation for more than one political topic (e.g. foreign policy, immigration, tariffs, and reproductive health). This allows malicious actors to dynamically re-purpose the attack depending on the trending political discourse on the social media platforms.

The ethical implications of our MIM study are the same as those related to publishing any vulnerability: the value of publicly sharing a proof-of-concept social engineering attack with knowledgeable researchers outweighs the opportunity that potential attackers may benefit from the publication. If this paper introduces a viable attack into the social media ecosystem, which it might well do given its simplistic nature, we believe this might merely be a confirmation of similar attacks independently developed and deployed by well-resourced adversaries or nation-state groups. The study itself tests the plausibility of a locally developed MIM browser extension (not publicly available on the Chrome store). In the context of a real-life MIM attack, a responsible disclosure would entail contacting Google, the developers of Chrome, and working with them through the details of the malware extension.

VI-B Limitations

Though the results of this study suggest that the MIM attack is capable of socially engineering or disrupting the spiral-of-silence within social media on a polarizing political issue, caution is warranted when interpreting them. The use of a controlled Facebook post allowed us to capture the first impressions of the participants, but this choice at the same time limits the generalization of the findings in regard to real opinion expression behavior. The polarizing topic chosen in this study might have been of a variable degree of interest to individual participants, which also affects their decision to express their opinion. We tried to control for this by selecting a younger, college-age population, assuming that the issue of freedom of political expression is highly relevant for them and they can identify with it. This, on the other hand, limits the generalization of the findings to an older population that has a more distant outlook on this issue and considers other factors such as the general political climate or their attitude certainty. The same holds for the self-reported frequency of following political news, which may be influenced by the type of news, outlets, topics, and interfaces. We anonymized the comments in both scenarios and did not explicitly ask whether participants would express their opinion if anonymity were granted. Anonymity is an integral part of the social media ecosystem, and further research should test the MIM potential for socially engineering a spiral-of-silence process under conditions of anonymity.

Our results are also limited by the particular choice of a web browser as an interface and a particular social media site, Facebook. The malware was tested in its extension variant, but many people access social media through smartphone applications or through multiple interfaces at the same time. There is a possibility that the same results might not be obtained because smartphone applications provide a different set of interaction affordances that limit the cues one uses to assess the opinion climate. Similarly, using multiple interfaces contributes to repetitive exposure to the same information, which can lead to changes in perceptions about the issue importance and one's attitude certainty. This, in turn, can make people more or less compelled to comment on a polarizing issue regardless of their political ideology or gender identity. The particular choice of social media site also limits the generalization because other social media platforms have different affordances that influence one's opinion formation and decision to speak out. For example, Twitter has limited text input, Instagram is heavy on non-textual content (e.g. images, videos, gifs), while Reddit has 'SubReddits,' 'up' or 'down-voting', and the act of giving 'gold'. Because these affordances shape norms of what people share and expect to see being shared, different platforms could have a variable degree of conduciveness to a socially engineered spiral-of-silence effect.

The sample in the study was liberal-leaning, and the findings might be different for a representative sample. We did not control for any other dimensions of one's political identity, which certainly factor into one's willingness to self-censor. For example, individuals' partisanship, structure, culture, and historical experience of society often shape the preconceptions of a polarizing issue at stake, even in circumstances where people put a premium on purportedly independent and objective public opinion assessment [52]. By the same token, MIM is a novel attack, and users are unaware of its existence and thus unable to detect it in the first place, regardless of any prior phishing training or negative experience with trolling and propaganda on social media. The outcomes of the study may be different if user awareness about this attack is raised, as is usually the case with social engineering attacks. Although we demonstrated the potential of the MIM attack, it might be hard to scale it up quickly to a large social media population the way trolling, rumor, or disinformation campaigns do; however, this targeted nature is precisely what makes the MIM attack compelling to a malicious actor.

VI-C MIM Defenses and Prevention

The study introduces a plausible social engineering vector against individuals and groups that has yet to emerge in the wild but has analogs in other deception and information warfare contexts [10]. The threat of MIM is an inherent risk of computer-mediated communication, particularly as artificial intelligence and machine learning enable software to parse and edit text toward a particular opinion climate, emotional tone, or adversarial perspective. The first line of defense would be to eliminate from the Chrome Web Store any suspicious extensions that request permissions to control how HTML text is presented to a user. An example defense, along the lines of malicious software detection, would be using trusted browsers to detect JavaScript executions that rearrange words and sentences in the textual portion of an HTML document [28]. Another example is Chrome's Manifest V3 API, which is designed to eliminate extensions exhibiting suspicious content-manipulation behaviour [19]. Content-level signing might not help in this regard, because the MIM manipulation happens after the content integrity check in the sequence of HTML reception and display.
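To illustrate the detection idea, the following is a minimal sketch of a page-level integrity monitor that flags text rewrites occurring after a post has already been rendered, which is the window in which a MIM extension operates. The post selector and the assumption that legitimate rendering does not rewrite already-displayed text nodes are illustrative only; they are not part of the study or of any existing product.

```javascript
// Hedged sketch: flag scripts that rewrite already-rendered post text.
// The selector below is a hypothetical placeholder for a post container.
const WATCHED_SELECTOR = '[data-testid="post_message"]';

const observer = new MutationObserver((mutations) => {
  for (const m of mutations) {
    // A characterData mutation means an existing text node was edited in place,
    // which is consistent with MIM-style rewording rather than normal loading.
    if (m.type === 'characterData' && m.oldValue !== m.target.data) {
      console.warn('Post text mutated after render:', {
        before: m.oldValue,
        after: m.target.data,
      });
    }
  }
});

for (const post of document.querySelectorAll(WATCHED_SELECTOR)) {
  observer.observe(post, {
    subtree: true,
    characterData: true,
    characterDataOldValue: true,
  });
}
```

A real deployment would have to distinguish benign dynamic updates (e.g. live edits by the author) from adversarial rewording, so this kind of monitor is best viewed as a source of signals for a trusted-browser mechanism rather than a standalone detector.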

One thing to keep in mind is the possibility of sneaking the malware extension into the Chrome Web Store as an "accessibility (a11y) extension" by claiming that the rewording is done to create assistive natural language software that, for example, helps non-native English speakers [26]. It might be harder to bar an extension from the store on these grounds; therefore, the certification process must request all use cases for these word manipulations upfront to ensure no misperception-inducing logic is hidden in the inner workings of the assistive extension. Even with these precautions, a malicious actor may find a way to deploy the malware on a target's browser (for example, via an insider threat).
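As a complementary check on the target's own machine, a defender or an endpoint tool could periodically audit which installed extensions hold permissions broad enough to rewrite social media pages. The sketch below uses Chrome's `chrome.management` API; it assumes the auditing extension itself declares the `management` permission, and the list of watched sites is only an example, not a recommendation from the study.

```javascript
// Hedged sketch (assumes the auditing extension has the "management" permission):
// list enabled extensions whose host permissions would let them read and
// rewrite text on social media pages.
const WATCHED_SITES = ['facebook.com', 'twitter.com', 'reddit.com']; // illustrative

chrome.management.getAll((extensions) => {
  for (const ext of extensions) {
    if (!ext.enabled) continue;
    const hosts = ext.hostPermissions || [];
    const canRewriteSocialPages = hosts.some(
      (h) => h === '<all_urls>' || WATCHED_SITES.some((site) => h.includes(site))
    );
    if (canRewriteSocialPages) {
      console.warn(`Review candidate: "${ext.name}" has host access to`, hosts);
    }
  }
});
```

Such an audit only surfaces candidates for manual review; broad host access is common among legitimate extensions, so the output is a starting point for scrutiny rather than proof of a MIM payload.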

As with any social engineering tactic, awareness of the potential attack is an advantage to the defender and a second line of defense. Given that the attack takes place in the target's browser and not on the social media platform, this might be the only available option for individuals at this point. A practical training session for detecting MIM attacks revolves around the idea of crossing the deception judgment threshold, as argued by Truth-Default Theory, and scrutinizing the Facebook post and comments [31]. Traditional social engineering training focuses on quick visual assessments of the most reliable indicators, such as URLs, grammar, padlocks for HTTPS, links, and attachments. In a MIM attack, these indicators appear legitimate because the underlying post is authentic. The focus of MIM training is thus on analyzing the social media posts and comments in the broader context of the issue at stake (in our case, freedom of speech on college campuses). The deception judgment can be calibrated based on updated facts for both the perceived majority and minority opinions. Compared to traditional social engineering victims, MIM victims have the advantage of individually approaching each of the people who commented on the post and verifying their original opinion, or of verifying the authenticity of the comments and the prevailing public opinion by checking other media sources reporting on the polarizing issue. Certainly, this out-of-band verification might make social media interaction cumbersome, but that is a small cost for quickly crossing the deception judgment threshold. We believe this is an empowering strategy, and we suggest that any social engineering training include a section on MIM as a tactic for inducing misperception on social media.

VI-D Future Work

For our next research steps, we plan to replicate and extend the current study with other social media sites (e.g. Twitter, Reddit) to explore whether the affordances of a particular social media site affect the choice of a response strategy. We also plan to cover other controversial topics popular on social media, for example vaccination, conspiracy theories, or global warming, that do not necessarily divide people along political ideology or gender identity lines. We will work on diversifying our future samples and controlling for age, level of education, and other demographic and cultural factors to get a more nuanced idea of how a spiral-of-silence effect, socially engineered or disrupted by covert malware, might unfold for the purpose of covert, low-intensity political propaganda. Toward a more robust test of the malware, future research will investigate whether a different packaging, e.g. a third-party smartphone social media application, could amplify or attenuate the misperception-inducing potential of the malware. Another line of research will continue to explore machine learning mechanisms for automatically deciding what type of linguistic rearrangement is best suited to a particular polarizing issue, target, or social media platform. Our objective in future research is not to perpetuate any deviant cybersecurity behaviour; quite the contrary, we are strongly dedicated to investigating every facet of the MIM attack so that it can be eradicated with both technological and societal prevention mechanisms.

VII Conclusion

In this work, we introduced the MIM attack as a means of covert manipulation of political discourse on Facebook. We tested it with 311 participants and showed that the MIM attack has the potential to socially engineer the spiral-of-silence effect on social media. The results also show that the MIM attack can disrupt the spiral of silence by creating misperceptions about the public opinion climate and nudging people to succumb to the echo chamber effect. Our main contribution is the evidence that the spiral-of-silence effect can be induced on demand, with only a piece of seemingly benign JavaScript (or other software) code and without fabricating any social media content. We hope our results inform the security community about the implications of an alternative social engineering vector for social media influence, at least in a micro-targeted variant. We are aware that the malware and the attack have a long way to go before they materialize into a sizable threat. Nevertheless, the early proof-of-concept demonstrated in this paper enables a critical, scientific outlook on the use of covert malware in situations where social interaction is a decision-making factor.

References

  1. Cited by: §I.
  2. M. Alsharnouby, F. Alaca and S. Chiasson (2015) Why phishing still works: user strategies for combating phishing attacks. International Journal of Human-Computer Studies 82, pp. 69–82. External Links: Document, ISBN 1071-5819 Cited by: §II-A.
  3. T. Baldwin (2018) Ctrl Alt Delete: How Politics and the Media Crashed our Democracy. Oxford University Press, Oxford, UK. Cited by: §II-E.
  4. Z. Beauchamp (2019-09) Trump’s free speech executive order isn’t about free speech. External Links: Link Cited by: §I, §IV-B.
  5. Y. Benkler, R. Faris and H. Roberts (2018) Network propaganda: manipulation, disinformation, and radicalization in american politics. Oxford University Press, Oxford, UK. Cited by: §I, §II-A, §II-A, §II-E.
  6. W.L. Benoit (2007) Communication in political campaigns. Frontiers in political communication, Peter Lang, Bern, Switzerland. Cited by: §V-B.
  7. S. Bradshaw and P. N. Howard (2017-12) Troops, trolls and troublemakers: a global inventory of organized social media manipulation. Technical Report Oxford University, Project on Computational Propaganda, Oxford, UK. Cited by: §I.
  8. J. Brundidge (2010) Encountering “difference” in the contemporary public sphere: the contribution of the internet to the heterogeneity of political discussion networks. Journal of Communication 60 (4), pp. 680–700. External Links: Document, https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1460-2466.2010.01509.x, Link Cited by: §III-C.
  9. R. B. Cialdini (2007) Influence: the psychology of persuasion; Rev. ed.. Collins, New York, NY. External Links: Link Cited by: §I.
  10. B. Cronin and H. Crawford (1999) Information Warfare: Its Application in Military and Civilian Contexts. The Information Society 15 (4), pp. 257–263. External Links: Document, Link Cited by: §VI-C.
  11. R. DiResta, K. Shaffer, B. Ruppel, D. Sullivan, R. Matney, R. Fox, J. Albright and B. Johnson (2018) The tactics and tropes of the internet research agency. Technical Report, New Knowledge. Cited by: §I, §I.
  12. C. Downey (2018) Probably overthinking it. External Links: Link Cited by: §II-D.
  13. G. B. Duggan and S. J. Payne (2006) How much do we understand when skim reading?. In CHI ’06 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’06, New York, NY, USA, pp. 730–735. External Links: ISBN 1-59593-298-4, Link, Document Cited by: §IV-B.
  14. A. Ferreira, L. Coventry and G. Lenzini (2015) Principles of persuasion in social engineering and their use in phishing. In Human Aspects of Information Security, Privacy, and Trust, T. Tryfonas and I. Askoxylakis (Eds.), pp. 36–47. External Links: ISBN 978-3-319-20376-8 Cited by: §I, §II-A, §II-A.
  15. J. Fox and L. F. Holt (2018/09/03) Fear of isolation and perceived affordances: the spiral of silence on social networking sites regarding police discrimination. Mass Communication and Society 21 (5), pp. 533–554. Note: doi: 10.1080/15205436.2018.1442480 External Links: Document, ISBN 1520-5436, Link Cited by: §III-C.
  16. S. Gearhart and W. Zhang (2013/09/23) Gay bullying and online opinion expression: testing spiral of silence in the social media environment. Social Science Computer Review 32 (1), pp. 18–36. Note: doi: 10.1177/0894439313504261 External Links: Document, ISBN 0894-4393, Link Cited by: §I, §III-B, §III-C, §IV-B, §V-B, §VI.
  17. S. Gearhart and W. Zhang (2015/04/01) “Was it something I said?” “No, it was something you posted!” A study of the spiral of silence theory in social media contexts. Cyberpsychology, Behavior, and Social Networking 18 (4), pp. 208–213. Note: doi: 10.1089/cyber.2014.0443 External Links: Document, ISBN 2152-2715, Link Cited by: §III-B.
  18. N. Gillani, A. Yuan, M. Saveski, S. Vosoughi and D. Roy (2018) Me, my echo chamber, and I: introspection on social media polarization. In Proceedings of the 2018 World Wide Web Conference, WWW '18, Republic and Canton of Geneva, CHE, pp. 823–831. External Links: ISBN 9781450356398, Link, Document Cited by: §II-E, §V-A.
  19. Google (2018) Manifest v3. External Links: Link Cited by: §VI-C.
  20. K. Granville (2018) Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens. External Links: Link Cited by: §II-A, §II-E.
  21. B. Grosser (2018) Facebook Demetricator — benjamin grosser. External Links: Link Cited by: §II-B.
  22. Cited by: §I.
  23. S. Hardy, M. Crete-Nishihata, K. Kleemola, A. Senft, B. Sonne, G. Wiseman, P. Gill and R. J. Deibert (2014) Targeted threat index: characterizing and quantifying politically-motivated targeted malware. In 23rd USENIX Security Symposium (USENIX Security 14), San Diego, CA, pp. 527–541. External Links: ISBN 978-1-931971-15-7, Link Cited by: §II-A.
  24. A. F. Hayes, C. J. Glynn and J. Shanahan (2005) Willingness to self-censor: a construct and measurement tool for public opinion research. International Journal of Public Opinion Research 17 (3), pp. 298–323. External Links: Document, ISBN 1471-6909, Link Cited by: §III-A, §IV-B.
  25. J. T. F. T. Initiative (2012-09) Guide for conducting risk assessments. Technical Report 800-30, National Institute of Standards and Technology, Gaithersburg, MD. Cited by: §II-C.
  26. Y. Jang, C. Song, S. P. Chung, T. Wang and W. Lee (2014) A11Y Attacks: Exploiting Accessibility in Operating Systems. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS ’14, New York, NY, USA, pp. 103–115. External Links: ISBN 978-1-4503-2957-6, Link, Document Cited by: §II-B, §VI-C.
  27. M. Khonji, Y. Iraqi and A. Jones (2013) Phishing detection: a literature survey. IEEE Communications Surveys Tutorials 15 (4), pp. 2091–2121. Cited by: §II-A.
  28. D. Kohlbrenner and H. Shacham (2016-08) Trusted browsers for uncertain times. In 25th USENIX Security Symposium (USENIX Security 16), Austin, TX, pp. 463–480. External Links: ISBN 978-1-931971-32-4, Link Cited by: §VI-C.
  29. M. J. Kushin, M. Yamamoto and F. Dalisay (2019/04/01) Societal majority, facebook, and the spiral of silence in the 2016 us presidential election. Social Media + Society 5, pp. 2056305119855139. Note: doi: 10.1177/2056305119855139 External Links: Document, ISBN 2056-3051, Link Cited by: §III-C.
  30. K. H. Kwon, S. Moon and M. A. Stefanone (2015) Unspeaking on facebook? testing network effects on self-censorship of political expressions in social network sites. Quality & Quantity 49 (4), pp. 1417–1435. External Links: Document, ISBN 1573-7845, Link Cited by: §I, §III-C, §V-B, §VI.
  31. T. R. Levine (2014) Truth-Default theory (TDT). Vol. 33. Cited by: §VI-C.
  32. C. A. Lin and M. B. Salwen (1997/01/01) Predicting the spiral of silence on a controversial public issue. Howard Journal of Communications 8 (1), pp. 129–141. Note: doi: 10.1080/10646179709361747 External Links: Document, ISBN 1064-6175, Link Cited by: §III-A.
  33. Y. Liu, J. R. Rui and X. Cui (2017) Are people willing to share their political opinions on facebook? exploring roles of self-presentational concern in spiral of silence. Computers in Human Behavior 76, pp. 294–302. External Links: ISBN 0747-5632, Link Cited by: §III-C, §IV-B, §V-B, §VI.
  34. J. Matthes, J. Knoll and C. von Sikorski (2018) The “spiral of silence” revisited: a meta-analysis on the relationship between perceptions of opinion support and political opinion expression. Communication Research 45 (1), pp. 3–33. External Links: Document, https://doi.org/10.1177/0093650217745429, Link Cited by: §I, §III-A, §III-B.
  35. J. Matthes, K. Rios Morrison and C. Schemer (2010/06/16) A spiral of silence for some: attitude certainty and the expression of political minority opinions. Communication Research 37 (6), pp. 774–800. Note: doi: 10.1177/0093650210362685 External Links: Document, ISBN 0093-6502, Link Cited by: §IV-B, §IV-B.
  36. J. C. Mays (2019-07) Alexandria ocasio-cortez is sued for blocking critics on twitter. External Links: Link Cited by: §IV-B.
  37. M. J. Metzger (2009) The study of media effects in the era of internet communication. In R. L. Nabi and M. B. Oliver (Eds.), SAGE Publications, Thousand Oaks, California. Cited by: §III-B.
  38. P. Moy, D. Domke and K. Stamm (2001/03/01) The spiral of silence and public opinion on affirmative action. Journalism & Mass Communication Quarterly 78 (1), pp. 7–25. Note: doi: 10.1177/107769900107800102 External Links: Document, ISBN 1077-6990, Link Cited by: §IV-B.
  39. L. H. Newman (2018) Chrome Extension Malware Has Evolved. External Links: Link Cited by: §I, §II-B, §II-B, §V.
  40. E. Noelle-Neumann (1993) The spiral of silence - public opinion: our social skin. 2nd edition, The University of Chicago Press, Chicago, IL. Cited by: §I, §III-A, §V-A.
  41. J. Paavola, T. Helo, H. Jalonen, M. Sartonen and A. Huhtinen (2016) Understanding the trolling phenomenon. 15 (4), pp. 100–111. External Links: ISBN 14453312, 14453347, Link Cited by: §I.
  42. J. W. Peters (2019-09) In name of free speech, states crack down on campus protests. External Links: Link Cited by: §I, §IV-B.
  43. Prolific (2019) Online platform for participants recruitment. External Links: Link Cited by: §V.
  44. W. Quattrociocchi, A. Scala and C. R. Sunstein (2016) Echo chambers on facebook. Available at SSRN 2795110. Cited by: §V-A.
  45. L. Rainie and A. Smith (2012-03) Social networking sites and politics. Technical Report Pew Research Center, Pew Research Center, Washington DC. Cited by: §III-C.
  46. B. Sanders (2019-09) Official facebook page. External Links: Link Cited by: §II-B.
  47. C. Savage (2019) Trump can’t block critics from his twitter account, appeals court rules. External Links: Link Cited by: §IV-B.
  48. B. Scher (2019) What would bernie bomb?. External Links: Link Cited by: §II-B.
  49. D. A. Scheufele and P. Moy (2000) Twenty-five years of the spiral of silence: a conceptual review and empirical outlook. International Journal of Public Opinion Research 12 (1), pp. 3–28. External Links: Document, ISBN 1471-6909, Link Cited by: §III-A, §III-A.
  50. T. Seals (2019) SDKs misused to scrape twitter, facebook account info. External Links: Link Cited by: §II-B.
  51. C. Shao, G. L. Ciampaglia, A. Flammini and F. Menczer (2016) Hoaxy: a platform for tracking online misinformation. In Proceedings of the 25th International Conference Companion on World Wide Web, WWW '16 Companion, Republic and Canton of Geneva, CHE, pp. 745–750. External Links: ISBN 9781450341448, Link, Document Cited by: §II-E.
  52. C. Simpson (1996/09/01) Elisabeth noelle-neumann’s “spiral of silence”and the historical context of communication theory. Journal of Communication 46 (3), pp. 149–171. Note: doi: 10.1111/j.1460-2466.1996.tb01494.x External Links: Document, ISBN 0021-9916, Link Cited by: §VI-B.
  53. A. Spangher, G. Ranade, B. Nushi, A. Fourney and E. Horvitz (2018) Analysis of strategy and spread of russia-sponsored content in the us in 2017. External Links: 1810.10033 Cited by: §II-A.
  54. A. J. Stewart, M. Mosleh, M. Diakonova, A. A. Arechar, D. G. Rand and J. B. Plotkin (2019) Information gerrymandering and undemocratic decisions. Nature 573 (7772), pp. 117–121. External Links: Document, ISBN 1476-4687 Cited by: §I, §II-A.
  55. E. Stoycheff (2016/03/08) Under surveillance: examining facebook’s spiral of silence effects in the wake of nsa internet monitoring. Journalism & Mass Communication Quarterly 93 (2), pp. 296–311. Note: doi: 10.1177/1077699016630255 External Links: Document, ISBN 1077-6990, Link Cited by: §III-B.
  56. S. Svrluga (2019-09) Trump signs executive order on free speech on college campuses. External Links: Link Cited by: §IV-B.
  57. N. Thompson and I. Lapowski How russian trolls used meme warfare to divide america. External Links: Link Cited by: §I.
  58. M. Tyler (2019) Phishing campaign uses malicious office 365 app. External Links: Link Cited by: §II-B.
  59. J. Vincent (2018) This blessed Chrome extension replaces ’Elon Musk’ with ’Grimes’s Boyfriend’. External Links: Link Cited by: §II-B, §II-B.
  60. S. Zannettou, T. Caulfield, E. De Cristofaro, M. Sirivianos, G. Stringhini and J. Blackburn (2019) Disinformation warfare: understanding state-sponsored trolls on twitter and their influence on the web. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, New York, NY, USA, pp. 218–226. External Links: ISBN 9781450366755, Link, Document Cited by: §I.