
Journal of Language Aggression and Conflict 1:1 (2013), 58–86. doi 10.1075/jlac.1.1.
issn 2213-1272 / e-issn 2213-1280 © John Benjamins Publishing Company
Uh. . . . not to be nitpicky,,,,,but the past
tense of drag is dragged, not drug.
An overview of trolling strategies
Claire Hardaker
This paper investigates the phenomenon known as trolling: the behaviour of
being deliberately antagonistic or offensive via computer-mediated communication
(CMC), typically for amusement's sake. Having previously started to answer
the question, what is trolling? (Hardaker 2010), this paper seeks to answer the
next question, how is trolling carried out? To do this, I use software to extract
3,727 examples of user discussions and accusations of trolling from an eighty-six
million word Usenet corpus. Initial findings suggest that trolling is perceived to
broadly fall across a cline, with covert strategies and overt strategies at each pole. I
create a working taxonomy of perceived strategies that occur at different points
along this cline, and conclude by refining my trolling definition.
Keywords: trolling, computer-mediated communication, aggression, deception,
manipulation, corpus linguistics
1. Introduction
Trolling (being deliberately antagonistic online, usually for amusement's sake)
is a term for behaviour that can be traced back at least as far as the 1980s (e.g.
Doyle 1989; Maddox 1989; Mauney 1982).
Despite this, trolling has only reached
mainstream public consciousness relatively recently (see, for example, BBC 2010;
Camber and Neville 2011; Morris 2011), and has received scant academic attention,
especially from fields like linguistics (see, however, Binns 2011; Donath 1999;
Golder and Donath 2004; Herring et al. 2002; Shachaf and Hara 2010).
1. Like Donath (1999), I distinguish between the post (a troll), the individual (a troller), and the
act (trolling).
Given the remarkable shortage of research on this topic, previously (Hardaker
2010), I sought to answer the question, what is trolling? In that paper, I demonstrated
that current definitions of impoliteness struggle to account for trolling, and
formulated a working definition of trolling. This paper develops from that by
tackling the question, how is trolling carried out?
To do so, I present the findings
of the analysis of a total of 3,727 user discussions of trolling, drawn from an eighty-six
million word Usenet corpus. The examples were extracted using corpus software,
then categorised depending on the strategies that appeared to prompt users
to discuss trolling. In this paper, for reasons explained further below, I focus purely
on the perception of trolling, and I try to reiterate this throughout via "(perceived)
strategies" and "(alleged) troller". Where "perceived", "alleged", and so forth are omitted
due to constraints of grammar or clarity, their inclusion should be assumed.
1.1 The anonymity effect
Computer-mediated communication (CMC) is human communication that occurs
via devices such as mobile phones, computers, games consoles, etc. (December
1997; Ferris 1997; Herring 2003, 612). One aspect of CMC that substantially facilitates
behaviour such as trolling is anonymity. With regard to the worldwide
web, some sites, such as MySpace and 4chan, are relaxed about, or even welcome,
anonymity (Bernstein et al. 2011; Dwyer, Hiltz, and Passerini 2007). Some, such
as Slashdot, allow anonymity but moderate behaviour (Lampe and Resnick 2004).
And others, such as Facebook, strive to remove anonymity by prohibiting fake
identities (Facebook 2010). In reality, regardless of software, administrators, and
terms of use, members will typically choose their own level of disclosure.
The effects of anonymity have long interested researchers in fields such as psychology,
sociology, and philosophy. As far back as 380 BC, philosophers recognised
that anonymity could facilitate negatively marked behaviour. Plato wrote of the
shepherd, Gyges, who found a ring that made him invisible (or effectively, unidentifiable
and therefore anonymous). Discovering this, Gyges used the protection of
this invisibility to infiltrate the royal household, seduce the queen, assassinate the
king, and take the kingdom (Plato 2007, 2.359c–2.360d). Plato then writes,
If now there should be two such rings, and the just man should put on one and
the unjust the other, no one could be found, it would seem, of such adamantine
temper as to persevere in justice. (Plato 2007, 2.360b)
2. The interested reader might therefore find it useful to read the 2010 paper first.
3. The notion of a "real" online (or offline) identity is problematic for reasons too numerous to
tackle here. See, however, Hardaker (fc.) for a fuller discussion of the realness of online identities.
In short, Plato felt that the protection of invisibility, or anonymity, would corrupt
even the most morally upstanding person. In the modern age, CMC can be seen
as a ring of Gyges that encourages CMC users to think that they can hide from the
consequences of negatively marked behaviour. In fact, much research in psychology
has focussed on this "toxic disinhibition" (Suler 2004, 321): the ways in which
anonymity can foster a loss of self-awareness, a sense of impunity, an increased
likelihood of acting upon normally inhibited impulses, increased polarisation, and
decreased consideration and empathy for others online (Sia, Tan, and Wei 2002;
Siegel et al. 1986). This disinhibition, according to Douglas & McGarty (2001,
399), can manifest itself in behaviours such as trolling and flaming.
1.2 Defining trolling
Whilst there has been academic interest in some CMC behaviours like spamming
(e.g. Barron 2006; Stivale 1997), flaming (e.g. Avgerinakou 2008; Herring 1994;
Johnson, Cooper, and Chin 2008; Shea 1994, Ch. 7), online impoliteness (e.g.
Graham 2007; 2008; Herring et al. 2002; Shin 2008), cyberbullying (e.g. Strom
and Strom 2005; Topçu, Erdur-Baker, and Çapa-Aydın 2008), and cyberstalking
(e.g. Bocij 2004), other behaviours such as trolling have largely been left to the
media. Yet prior to 2010, even the media generally ignored trolling (see, however,
Black 2006; Cox 2006; Moulitsas 2008; Thompson 2009). The few, early reports
that exist describe trolling as "ludicrous rants", "inane threadjackings", "personal
insults", and "abusive language" (Naraine 2007, 146), and as provoking others to disrupt
the group beyond repair for the troller's own amusement (Brandel 2007, 32;
Heffernan 2008).
When we turn to academic research on trolling, what little exists tends to take
definitions from the media, intuition, and online ephemera such as The Trollers
FAQ (1996). The result is that troll is used as an all-encapsulating term. For example,
Baker (2001) starts his paper by talking about trollers, but then moves to
discussing flaming, potentially conflating the troller's actions with the group's reactions:
In this paper the concept of the moral panic is applied to computer-mediated
communication through a qualitative examination of the case of a troll poster to
the Usenet group over a four month period. […] This paper
analyses a single case of what appeared to be a flame war, in which one participant
in a newsgroup was pitted against many other participants, over a prolonged
4. It is beyond the scope of this work to compare flaming and trolling; however, flaming should
be understood as a heated (over-)reaction to provocation. Though flaming is aggressive, and can
be manipulative, it is not deceptive.
period of time. […] The case of Macho Joe can be interpreted as pernicious
spamming (Stivale 1997) or trolling (Donath 1999, p. 45), the act of baiting a
newsgroup, and then enjoying the resulting conflict. (Baker 2001)
In some of the earliest research into trolling, Tepper (1997, 41) explains how trolling
can define group-membership: those who bite (i.e. who rise to the troller's
bait) signal their novice, outgroup status, whilst ingroup members will identify
the troller, will not be baited, and may even mock those who are. Donath's work
(1999, 45), later developed by Utz (2005, 50), suggests that trollers intentionally
disseminate poor advice to provoke corrections from others. Donath (1999) and
Dahlberg (2001) argue that trolling is a one-sided game of deception played on
unwitting victims. The troller impersonates a sincere participant (Donath 1999,
45), and once she has become accepted, she sets about causing as much disruption
as possible whilst trying to conceal her real intentions (Dahlberg 2001).
In later research, Herring et al. (2002, 372) and Turner et al. (2005) describe
trolling as luring others into frustratingly useless, circular discussion that is not
necessarily overtly argumentative. Finally, some of the most recent research into
trolling, by Shachaf & Hara (2010), combines Donath's (1999) and Herring et al.'s
(2002) definitions of trolling with that of a New York Times report (Schwartz 2008).
There are three main problems here. Firstly, the above essentially summarises
the majority of academic literature available on trolling, especially from a linguistic
perspective. Despite the pervasiveness of this behaviour, it is especially under-researched.
Secondly, whilst there are few definitions to draw on, those that do
exist are far from in agreement with each other, particularly with regard to the
methods or strategies that count as trolling. And thirdly, none of these various
definitions is drawn from the views of multiple users or large datasets. In response
to this, in 2010, I proposed the following working definition of trolling, based on
the analysis of a large CMC corpus:
A troller is a CMC user who constructs the pseudo-identity of sincerely wishing
to be part of the group in question, including professing, or conveying ostensibly
sincere intentions, but whose real intention(s) is/are to cause disruption and/or to
trigger or exacerbate conflict for its own sake. Trolling can be:
(1) frustrated, if users interpret trolling intent, but are not provoked into responding,
(2) thwarted, if users interpret trolling intent, but curtail or neutralise its success,
(3) failed, if users do not interpret an intent to troll and are not provoked, or
(4) successful, if users are provoked into responding sincerely.
Finally, users can mock troll. That is, they may undertake behaviour that appears
to be trolling, but that actually aims to enhance or increase affect, or group cohesion.
(adapted from Hardaker 2010, 237–8)
(The general inability of impoliteness theories to account for trolling was addressed
extensively in this 2010 paper, so it is not discussed again here.)
Within this paper, it is clear already from the analysis of the major trolling
strategies that the definition is insufficient on at least three counts. Firstly, it focuses
on trollers, rather than on trolling, and since identity is a process of construction,
it makes sense to focus on the way that behaviour (trolling) constructs identity
(troller), rather than vice versa. Secondly, this definition implicitly requires us to
know someone's intention, when, as discussed further in §1.3, this is never the
case. In reality, users may perceive trolling where none exists, and miss it when
it occurs. And thirdly, this definition is too narrow. It focuses on covert trolling,
where the (alleged) troller endeavours to blend into the group and not appear to
be trouble-causing. This already fails to account for (perceived) trolling based on
maliciously shocking others (see §3.5) or launching unprovoked attacks (see §3.6).
Such strategies do not try to blend into the group's norms, nor to convey pseudo-sincere
intentions. Given these three points, after discussing the (perceived) strategies
below, this paper concludes with a refined, though still working, definition of
trolling.
1.3 Intentions
When dealing with any linguistic practices, but especially deceptive behaviours
like trolling, intention becomes a key issue. Though I discuss intention more fully
in Hardaker (2010), it is worth partly restating here. That said, intention is an
enormous, complex, and rapidly evolving topic, and since even an article entirely
devoted to the subject would fall short of a comprehensive overview, I restrict my
discussion only to those aspects most relevant to trolling, in the knowledge that
far more could be said.
Following several researchers (e.g. Arundale 2008; Gibbs 2001; Haugh 2008),
I espouse the view that we (whether interactant, bystander, analyst, etc.) never
know or retrieve either interpretations or intentions. I do not, however, sideline
these phenomena. Instead, a hearer (H) must hypothesize from the available evidence,
sometimes very quickly, what the speaker (S) actually intended (Culpeper,
Bousfield, and Wichmann 2003, 1552; Mills 2003, 136).
In other words, we are
continually reconstructing and hypothesising about (1) the intentions of others,
5. S refers to the speaker/author/sender, whilst H refers to the hearer/reader/recipient. Further,
unless the data indicates user sex, for purely alliterative convenience, S is indexed as "she/her/etc."
and H is indexed as "he/him/etc.". These are certainly not perfect terms, especially given the
primarily written nature of CMC, but have a slight advantage of clarity over terms like author,
recipient, interactant, etc.
(2) their interpretations of our intentions, and (3) their hypotheses about our intentions.
Our interpretations and hypotheses are based on available contextual and
cognitive information such as historical knowledge, schemata, and logic. Again,
however, whilst we are aware of the extent of our own knowledge, we can only
hypothesise about what others know (Mills 2003, 45; Mooney 2004, 900). The hypotheses
and interpretations we reach, whether as S or as H, can be partly or fully
wrong due to mistakes, accidental or deliberate ambiguity, or deliberate deception.
In short, we cannot simply reduce interaction to "H interprets S's intent correctly"
versus "H interprets S's intent incorrectly" (Grimshaw 1990, 281).
This grey area, caused by the unknown and unprovable nature of intentions
and interpretations, provides the very opening for a troller to exploit. For example,
a troller may deliberately goad whilst claiming only to debate. At the same time,
however, this grey area also means that an individual accused of trolling may be
innocent of the charge. I therefore make no assertions regarding intentions or interpretations.
Instead, I simply aim to discuss strategies that Hs identify as trolling.
A final consideration is the notion of on-recordness. The degree of on- or off-recordness
is contextually dependent and negotiable by interactants (Aronson and
Rundström 1989), but as with intention, however on-record S may make her utterance,
this does not mean that her intent is, or ever becomes, retrievable. Rather,
increased off-recordness means that "there is more than one unambiguously attributable
intention so that the actor cannot be held to have committed [her]self
to one particular intent" (Brown and Levinson 1987, 69). Vice versa, increased
on-recordness decreases the number of reasonably attributable intentions.
In non-trolling impoliteness situations, S may use off-record attacks because
incontrovertible, on-record attacks increase the risk that H may reciprocate with
equal or greater impoliteness (Andersson and Pearson 1999; Bousfield 2008, 220;
Culpeper 1996, 355), potentially leading to a conflict spiral which could get entirely
out of hand (Felson 1982, 245; Locher 2004). With regard to trolling, however, off-record
strategies make it far more difficult for H to prove (i.e. feel very confident,
and convince others to feel the same way) that S is trolling, and therefore to deal
with her as a troller. The direct consequence of this is that the examples retrieved
from the data contain an overwhelming bias. Almost all of them involve Hs discussing
whether or how or why trolling is occurring. Far fewer involve (alleged) trollers
discussing their own trolling intentions. Where these do occur, they typically
take the shape of denials, excuses, and counter-accusations, whilst confessions are
extremely rare. (Only four occur in this dataset.) However, in any case, whether a
confession or an accusation, there is still no guarantee that we have S's real intentions,
or H's real interpretations, on-record. (This is discussed further in §4.2.)
2. Finding trolls
I draw on two Usenet corpora with a combined word count of eighty-six million
words (86,412,727).
These corpora are described further below. Usenet, which
predates the current incarnation of the worldwide web, is,
an electronic forum for discussion of almost any subject, allowing access to millions
of computer users who share similar (or very different) hobbies, interests
and worldviews (McLaughlin, Osborne, and Smith 1995). Characterized by its
immediacy and sheer volume of traffic, Usenet groups based around the discussion
of a particular topic afford a prime example of Internet communities. The
main method of communication is text-based e-mail, although some groups permit
the exchange of graphics, sound or video files. (Baker 2001, 1)
Usenet data is useful for several reasons. There are newsgroups on an extraordinary
range of topics. Some newsgroups have archives reaching back to the 1980s,
and, perhaps most usefully, newsgroup posts can be downloaded. This makes tailoring
corpora for topic, chronology, language, region and so forth relatively easy.
Secondly, whilst there is no direct evidence for this, trolling is said to have begun
on Usenet (Tepper 1997), and indeed, one can find Usenet examples of troll being
used to indicate deliberate online trouble-causing up to three decades ago (e.g.
Maddox 1989; Mauney 1982; Miller 1990):
Remember that people come here for help, often. There may also be those who
can't believe there can be a flame-free group on the net. Or even those who see
a lack of flaming as a weakness. Or, perhaps, those like the troll in Gilly's story
(metaphorically speaking), who want to start a flame war and step back to watch
the chaos. (Doyle 1989)
There are drawbacks to this data, however. Usenet is no longer as widely known
about or used, probably due to the increasing prominence of feature-rich social
networks. As such, the Usenet demographic may tend towards older, long-term
members, and this may mean that the behaviour found on Usenet may be different
from that on, say, social networks. However, this is a difficult hypothesis to test,
and overall, the benefits of the data (its scope of topics, longevity, and processing
compatibility) were considered greater than the drawbacks for this exploratory
study.
6. and
7. This is possible, since Usenet was one of the earliest types of mass-CMC, but it is a difficult
claim to prove.
The first corpus, hereon RE, was created from part of the newsgroup rec.equestrian.
RE's theme is equestrianism, horse competitions, and breeding, along
with related topics such as animal welfare, agricultural legislation, and livestock
nutrition. The second corpus, hereon SF, was created from a subset of another newsgroup.
SF's theme is (English) football, event fixtures, and league
tables, along with related topics such as footballer wages, club management, and
refereeing decisions. Further details of each corpus are found in Table 1:
Once RE and SF were created, WordSmith (Scott 2009) was used to retrieve all
instances of troll*.
Searching the corpora with an open-ended wildcard resulted
in around 9% false hits (e.g. Trollope), but using this wildcard also retrieved derivations,
inflections, compounds, neologisms, and some typographic errors that
might otherwise have been excluded. RE returned 2,643 instances, whilst SF returned
1,456 instances. This created an initial sub-corpus of 4,099 examples that
reduced to 3,727 once the false hits were excluded. These results are collated in
Table 2:
Though WordSmith retrieved an impressive set of results from RE/SF, no
search can currently retrieve off-record or implicit references to troll (e.g. "it has
a sub-bridge apartment"). These instances were only captured if troll occurred in
a more explicit part of the thread. The examples discussed in this paper should not,
therefore, be taken as an exhaustive collection of all instances of trolling in RE/SF.
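Though WordSmith's internals are not shown in the paper, a troll* search of this kind can be approximated with an ordinary regular-expression pass over the corpus text; the following is a minimal sketch, in which the false-hit list is illustrative (a real study would compile it by manually inspecting the concordance lines):

```python
import re
from collections import Counter

# The troll* wildcard: "troll" followed by zero or more word characters,
# matched case-insensitively on whole tokens (cf. footnote 8).
TROLL_STAR = re.compile(r"\btroll\w*", re.IGNORECASE)

# Illustrative false hits, e.g. the novelist Trollope.
FALSE_HITS = {"trollope"}

def troll_counts(text: str) -> Counter:
    """Count each troll* form in a text, excluding known false hits."""
    counts = Counter(m.group(0).lower() for m in TROLL_STAR.finditer(text))
    for hit in FALSE_HITS:
        counts.pop(hit, None)  # safe even if the false hit never occurred
    return counts

sample = "You troll. Stop trolling! Trollope wrote novels; trolls troll on."
print(troll_counts(sample))
```

Like the WordSmith search, this retrieves inflections and compounds (trolling, trolls, trollometer, …) in one pass, at the cost of some false hits that must then be filtered out by hand.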
8. The asterisk (*) wildcard denotes zero or more characters, so a search for cat* retrieves cat,
cats, catch, etc.
Table 1. Comparative information about RE and SF

                                              RE                          SF
Group creation date                           02/12/87                    04/06/98
Corpus collection date                        31/05/11                    31/05/11
Group subscriber-count at collection date     1,106                       211
Group post-count at collection date           795,349 (92.767 per day)    288,163 (60.472 per day)
Corpus date-range                             01/07/05–30/06/10           10/03/05–30/06/10
Corpus post-count                             170,634                     57,734
Corpus word-count*                            62,884,032                  16,596,565

* Wordcount is a very weak guide, since Usenet posts frequently (re)quote, and discounting (re)quoted
material to acquire a real wordcount, whether automatically or manually, is very cost/benefit-prohibitive.
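Given these corpus sizes, the normalised figures reported in Table 2 follow directly: frequency per million words is the raw hit count scaled by one million over the corpus token count. A minimal sketch (token counts from Table 1, raw counts for Troll from Table 2):

```python
def fpmw(raw: int, corpus_tokens: int) -> float:
    """Frequency per million words: raw frequency normalised by corpus size."""
    return raw * 1_000_000 / corpus_tokens

RE_TOKENS = 62_884_032  # RE corpus word-count (Table 1)
SF_TOKENS = 16_596_565  # SF corpus word-count (Table 1)

# Troll occurs 1,667 times in RE and 1,079 times in SF (Table 2):
print(round(fpmw(1_667, RE_TOKENS), 3))  # 26.509
print(round(fpmw(1_079, SF_TOKENS), 3))  # 65.013
```

Normalising this way is what makes the two corpora comparable at all: SF is barely a quarter of RE's size, so its raw counts understate how much more frequent troll* talk is there.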
2.1 Classifying trolling
Following Watts (2003), I take the view that "investigating first-order politeness is
the only valid means of developing a social theory of politeness" (Watts 2003, 9).
First-order approaches emphasise the lay user's perspective and understanding,
whilst attempting to background the analyst's interpretation. This approach is not
without its problems, however. Typically, lay users lack sufficient meta-linguistic
articulation to apply a rigorous, scientific framework to their interaction (O'Keefe
1989). Despite this, they can, and do, assess im/polite behaviours such as trolling
on a moment-by-moment basis. The fact that different interactants can, in some
cases, consistently arrive at similar interpretations suggests the existence of a principled
system of assessing the contextual appropriacy of behaviour: a system
which, like natively learned grammar, seems to be implicit.

Table 2. Frequency per million words (FPMW) of troll* in RE and SF

Token          RE (62,884,032 tokens)    SF (16,596,565 tokens)
               FPMW (raw frequency)      FPMW (raw frequency)
Troll          26.509 (1,667)            65.013 (1,079)
Trollboi       –                         0.542 (9)
Trollbusters   0.031 (2)                 –
Trolldom       0.063 (4)                 0.060 (1)
Trolled        0.302 (19)                5.724 (95)
Trollerita     0.031 (2)                 –
Trollers       0.015 (1)                 –
Trollery       0.047 (3)                 –
Trolley        0.254 (16)                0.421 (7)
Trollfare      –                         0.060 (1)
Trollhunter    0.031 (2)                 –
Trollies       –                         0.301 (5)
Trolliest      0.063 (4)                 –
Trollign       –                         0.482 (8)
Trollin        0.429 (27)                –
Trolling       3.434 (216)               7.591 (126)
Trollings      0.063 (4)                 –
Trollish       0.079 (5)                 –
Trollius       0.015 (1)                 –
Trolll         –                         0.180 (3)
Trolllled      0.031 (2)                 –
Trollometer    0.031 (2)                 0.060 (1)
Trollop        0.063 (4)                 0.180 (3)
Trollops       0.015 (1)                 –
Trolls         4.500 (283)               5.603 (93)
Trolls'        0.143 (9)                 0.964 (16)
Trolly         0.063 (4)                 0.120 (2)
TOTAL          36.225 (2,278)            87.307 (1,449)
In keeping with the first-order approach, the subsequent classifications were
driven, as far as possible, by the users' interpretations and discussions surrounding
the mention of trolling. This is explored further throughout. To do this, I follow
Culpeper (1996) and Culpeper et al. (2003) in creating a descriptive framework of
(perceived) trolling strategies. Examples were categorised based on the particular
behaviours cited by users as trolling. Inevitably, depending on the fineness of detail,
one could identify potentially thousands of contextually specific categories
(e.g. digressing onto a sensitive topic, digressing onto an absurd topic, digressing
despite previous warnings to stay on-topic, etc.). However, my interest was in identifying
more general behaviours (e.g. digressing) that resulted in accusations or
discussions of trolling multiple times, preferably in both RE and SF. This resulted
in six (frequently overlapping, perceived) strategies, ranging from the most covert
through to the most overt: digression, (hypo)criticism, antipathising, endangering,
shocking, and aggressing.
Further strategies were identified in RE/SF besides the ones given above. These
included mocking (derisively joking at someone's expense), doxing (maliciously
publishing personal information), blackmailing (e.g. threatening to email damaging
information to employers), and stalking (e.g. following a user across sites
to continually attack him/her). However, some were only evidenced by very few,
weak examples (e.g. mocking) and were better accounted for by other strategies
(e.g. criticism), and some, though described as trolling by users, had developed
into far more serious behaviours such as cyberharassment and cyberstalking (e.g.
doxing, blackmailing, stalking). Given their problematic nature, and the intended
scope of this article, these strategies are not pursued further.
2.2 Counting trolling
It is vital to begin this section by emphasising two points. Firstly, given how endlessly
creative individuals and groups can be, this taxonomic short-list of (perceived)
strategies should not be considered close to exhaustive. It should not even
be taken to represent all of Usenet, nor even all of equestrian or football groups
on Usenet. Data from different sources, times, and cultures will almost certainly
reveal strategies that have not been captured here. As such, this paper's findings
should be considered purely as preliminary.
Secondly, as mentioned above, in keeping with the first-order approach, I have
striven to emphasise what the user has emphasised, but there are inevitably problems
with this. Different users may cite different grievances for the same individual,
or one user may cite multiple possible grievances that they cannot choose
among (e.g. is the accused lying about mistreating animals just to upset the group?
Or is she telling the truth and therefore morally offensive?) And users do not always
make the reasons for their accusations clear. H may post, "Ignore A, she's
trolling", without explaining why he thinks A is a troller. For the purposes of this
paper, I have used examples where H indicates a reason for mentioning trolling,
but I do so in the knowledge that this is an imperfect solution to the problem.
A fundamental consideration with attempting to quantify subjective features
such as these ties in specifically with our inability to know, for certain, whether a
user is trolling. Since all the examples were retrieved by searching for instances
of troll*, and since, of these, nearly all involve Hs discussing their perspectives
on trolling, any statistical analysis of the data would be biased towards features
that Hs interpret as trolling, rather than those that definitely are trolling. An additional
complication to quantifying features is that many posts and/or threads cited
by users as trolling typically contain multiple potentially antagonistic strategies
(Culpeper, Bousfield, and Wichmann 2003, 1,561), and it is not always possible to
identify which one triggered H to suspect that he was being trolled.
The combination of these factors, namely (1) the potential co-occurrence of several
strategies, (2) the occasional absence of indication from H about which strategy
triggered his accusation, (3) the perceived (and therefore potentially incorrect) nature
of the strategies identified by H, and (4) our general inability to know intention,
has made statistical, quantitative analyses so highly qualified as to have
little value in this study. This section therefore presents a qualitative overview of
the major (perceived) strategies identified when analysing the data. (See §4.2 for
more on this.) In this case, "major (perceived) strategies" is used to mean strategies
that (1) are identified in an example explicitly by one or preferably more users,
(2) occur consistently, e.g. three times or more in at least one dataset, and that (3)
preferably occur across both datasets, though this was not obligatory.
2.3 Changes to the data
As far as possible, I avoided altering the data, and all spelling, grammar, and (non-
indenting) punctuation are original. However, some changes were necessary. (1)
9. I address this problem in future research where I investigate responses to (perceived) trolling.
To enhance anonymity, all names are replaced with letters. In each example, user-letters
re-start at A, so unless stated otherwise, A in Example (1) is not the same
person as A in Example (2). (2) I use italics in square brackets to indicate removed
information or glosses, e.g. [web address]. (3) I removed indenting punctuation
(e.g. ">"), since subsequent replies already present chronological threads for the
reader to follow. (4) For brevity and clarity, I removed unnecessary line-breaks.
(5) I use italics to highlight the parts of an example being analysed. (6) Whilst
Usenet now offers rich text, very few posts use this, so all are presented in standard
text. (7) Where applicable, I highlight the (alleged) troller by designating her A,
and underlining that user-letter. Where no troller appears, A is not used. (8) All
examples are excerpts from threads that are, in some cases, significantly longer.
Inevitably, this leads to orphaned deictic markers, users being addressed but not
appearing, missing contextual information, and so forth. To mitigate this, I use examples
that stand reasonably well alone and/or I summarise the relevant context.
(9) Sections deleted from long posts are indicated via […].
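The name-replacement convention in change (1) is mechanical enough to sketch. A minimal illustration, where the poster names are invented rather than taken from the data: each distinct name within one example is mapped to the next unused letter, and the mapping resets for every example.

```python
def anonymise(posts: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Replace each distinct poster name in one example with a letter,
    assigned in order of first appearance and restarting at 'A'."""
    letters: dict[str, str] = {}
    out = []
    for name, text in posts:
        if name not in letters:
            letters[name] = chr(ord("A") + len(letters))
        out.append((letters[name], text))
    return out

# Hypothetical three-post example (names are invented):
example = [("Gilly", "Ignore her."), ("Joe", "Why?"), ("Gilly", "Trolling.")]
print(anonymise(example))
```

Because the mapping restarts per example, the same letter can denote different people across examples, exactly as the caveat in change (1) warns.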
3. Trolling in action
For the purposes of this article, a strategy is conceptualised as a goal-driven behaviour,
and as such, it can occur over multiple posts. In fact, examples of (alleged)
trolling typically contain multiple potentially antagonistic strategies (Culpeper,
Bousfield, and Wichmann 2003, 1,561), and as already discussed, it is not always
possible to identify which specific one triggered H to suspect trolling. Whilst I
have many examples of some (perceived) strategies, for brevity I use those that
best demonstrate the concept under discussion.
3.1 Digress
The (perceived) strategy of digression-based trolling appears to most clearly exemplify
the findings of Herring et al. (2002, 372) and Turner et al. (2005), who
describe trolling as luring others into a discussion which, though not necessarily
overtly argumentative, frustrates users with its pointlessness and circularity.
Digression, rather as it suggests, involves straying from the purpose of the discussion
or forum. This can be achieved in many ways: by maliciously spamming
the group, by taking part in cascades, or by introducing tangential or entirely irrelevant
topics:
10. Cascading involves several users (re-)posting nonsense to the group, such as counting, reciting
the alphabet, answering other posts with rhymes, etc.
Example (1) [SF050410]
A Religion is but myth and superstition that hardens the hearts and enslaves
minds. End of story. Take your bible-thumping elsewhere gOD boy.
B What better place for bible thumping than a newsgroup called alt.religion.christian.biblestudy?
Cant more on topic than that.
C Whoever posted that was trying to cause a flame war between groups, in other
words, a troll. Just check what other groups it was crossposted to. You probably
didnt plan to post to or a bondage-related newsgroup, did you?
Example (1) demonstrates a classic method of (deliberately or otherwise) causing aggravation on Usenet, via a function known as crossposting. Rather like sending a single email to multiple people, crossposting is simply sending the same message to multiple groups. As with an email, the sender must choose each recipient, and in this case A chose …, hfx.general, alt.religion.christian.biblestudy, and soc.subculture.bondage-bdsm. The likelihood of accidentally choosing so many irrelevant groups, especially ones that might seem hostile towards each other, is extremely low. However, whilst one off-topic crosspost may be annoying, a greater potential for aggravation stems from careless replies. As with email, a user can reply just to the original individual/group, or to all original recipients (i.e. Reply versus Reply All). Careless use of the Reply All function can send a response back out to every group originally targeted, as B does, thereby spamming thousands of users and triggering numerous complaints in return. The aggravation typically springs from the degradation of the signal-to-noise ratio. The time-wasting noise of one troll-post is relatively easily ignored, but the noise of hundreds of replies to the troll-post, and complaints about those replies, can entirely drown out the worthwhile content, or signal (Wasko and Faraj 2005, 37).
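The Reply-All mechanics described above can be sketched in code. This is an illustrative toy model of Usenet header handling, not the software used in this study; the function and variable names are invented, and only the group names quoted in Example (1) come from the data.

```python
def reply(original_headers, follow_up_to=None):
    """Build headers for a reply. By default (like Reply All), the
    Newsgroups header is copied wholesale, so the reply lands in
    every group the original crosspost targeted."""
    groups = original_headers["Newsgroups"]
    if follow_up_to:  # a careful poster narrows the target group
        groups = follow_up_to
    return {
        "Newsgroups": groups,
        "Subject": "Re: " + original_headers["Subject"],
        "References": original_headers["Message-ID"],
    }

# A crosspost to several mutually hostile groups (as in Example (1)):
troll_post = {
    "Message-ID": "<abc123@example.invalid>",
    "Subject": "Religion is but myth",
    "Newsgroups": "alt.religion.christian.biblestudy,"
                  "soc.subculture.bondage-bdsm,hfx.general",
}

careless = reply(troll_post)  # spams every group A targeted
careful = reply(troll_post, follow_up_to="alt.religion.christian.biblestudy")

print(careless["Newsgroups"].count(",") + 1)  # 3 groups receive the reply
print(careful["Newsgroups"])                  # only the relevant group
```

The point of the sketch is that the default behaviour amplifies the troll-post: each careless reply re-broadcasts it to every originally targeted group, degrading the signal-to-noise ratio exactly as described.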
Interestingly, whilst any digression can annoy some groups, digression onto sensitive topics seemed to trigger the strongest reactions:
Example (2) [RE060812]
A joins RE to ask for help with an unscrupulous horse dealer who has supplied her with a six-month-old, untrained horse for her small daughter to ride. B immediately accuses A of trolling. A claims not to know what trolling is, and describes B as rude and full of contempt.
B I make no apology for being contemptuous or rude to someone who says they were going to put their 7 year old child on a horse when they self-professedly know nothing about horses and enters into a lease or contract knowing that nothing and then seeks advice over the internet. When I do something dumb and naive, I call myself on it, too. My advice is, get smart and get over it. You're welcome again.
A I don't believe I ever thanked you and I feel sorry for you you are probably very lonely. If this is how you treat people you don't even know how could you possibly ever make a friend. You should find Christ in your life.
C <plonk> We have enough religious and anit-religious trolls in this newsgroup I'm not going to feed this one.
Despite the fact that A only introduces religion into her post in passing, this element alone seems sufficiently off-topic and sensitive to convince C that she is trolling.
In RE and SF, digression seems to be judged in relation to the user's pre-existing status within the group. For instance, there were no examples of longer-term, established users being accused of trolling on the basis of digression alone, though there were examples of junior RE members being flamed if they failed to mark off-topic posts correctly (e.g. by putting OT in the subject-line). This suggests that, unless the digression is especially controversial or sensitive (see 3.5), a group may view it as a trolling strategy only when weighed up with other incriminating evidence, such as unpopularity, newness, or a history of trouble-causing.
3.2 (Hypo)criticise
This strategy simply involved criticising others, usually excessively. More antagonistically, the hypocriticism strategy involved criticising someone for an offence of which the critic was also guilty. (Hypocriticise has been derived from hypocrite, hypocritical, etc., and should be understood in the same way.) The examples of (perceived) (hypo)criticism in RE/SF tend to be pedantic, and a common target was proof-reading. The (alleged) troller would ignore a post's content and instead attack its grammar, spelling, or punctuation (criticism trolling). In some cases, the troller would even do so with a post which, itself, contained proof-reading errors (hypocriticism trolling), as in Example (3) below:
Example (3) [RE060304]
B […] The next two days, I drug myself out of bed, went to teach the classes I absolutely had to do (my brain was so fried I couldn't even contemplate what I had to do to set things up for a sub, it was just easier to go to work and be there), then went home early, took meds, went to bed. Drug myself through the weekend. Monday, I drug myself to the barn after work. […]
11. Interestingly, such is the prevalence of proof-reading hypocriticism that a humorous internet law has sprung up, known variously as Skitt's Law or Muphry's Law (spelling is correct!), which observes that any post criticising another's proof-reading will, itself, inevitably contain proof-reading errors.
A Uh. . . . .not to be nitpicky,,,,,but…the past tense of drag is dragged, not drug. Otherwise it was an interesting post.
C She's a teacher. I think she knows. Not to be nitpicky, but more than three dots is considered improper for ellipsis, and five commas in a row is a no-no. The rest of your post was … Sorry, I can't resist wiseassdom after 4 on Fridays :).
D Nor is uh a word in standard English.
E Four dots (aka periods) are used correctly when an ellipsis appears at the end of a complete sentence.
C I didn't know that. Thanks, I will take your word for it. But I note that our troll friend still used them incorrectly <g>
A [To C] If she is a teacher, I pity her students. To confirm my observation, simply use the dictionary.
F =;-D Good grief! Go get that dictionary, A, and look up wet blanket. Or stick in the mud. Crank would be a good one. While you're at it, look up colloquial. Sheesh.
A [To C] At least I can handle it when someone corrects me. Also, I am neither a troll, nor your friend.
C If you're not a troll, you have an unusual way of showing it. First post in some while, and not about horses but about grammar, insult and condescension. I was going to ask if you limit your instruction to grammar, or also teach riding with a stick up your ass. But that would be rude. My Ides of March resolution is to cut back on rudeness. Probably you were just having a bad day…. (1) Peace. (1) Note proper use of four-period ellipsis.
D You used to post meaningful articles about horses. Then you took several months off and returned to bring up this business about colloquial usage and pitying A's students. What happened?
In Example (3), A's hypocriticism provokes exasperated responses from C, D, E, and F, who take pains to explain to A all the ways in which she could follow her own advice. (Notably, C and D also both allude to A's digression from discussing horses. This reiterates three points: that digression may be used as supplementary evidence alongside other infringements to build a case for constructing certain behaviour as trolling; that one post may contain many strategies; and that those strategies may blur into each other.)
Proof-reading may be such a popular target because, for the lay online user, the written word is the primary instantiation of identity in this data. As such, it can be heavily invested with S's notion of face and identity, making it an easy and sensitive target for attack. Whatever linguists may think of this kind of prescriptivism, populist guides to grammar, spelling, and punctuation (Cook 2004; Rozakis 2003; Truss 2003) only support the idea that linguistic (in)correctness or (in)competence can, and even should, be used as a primary index for S's education, intelligence, maturity, trustworthiness, and habits.
3.3 Antipathise
When antipathising, the (alleged) troller proactively and usually covertly creates a
sensitive and/or antagonistic context to exploit by being deliberately controversial
or provocative. Tis is unlike the shock strategy discussed in 3.5, where the (al-
leged) troller reactively, and usually overtly exploits a ready-made, sensitive and/
or antagonistic context.
Antipathy trolling is heavily predicated on deceiving the group and covertly
manipulating face, ego, sensitivities, guilt, morals, and so forth, usually to trigger
emotional responses:
Example (4) [RE061120]
A OK, aside from those that replied with useless comments, it seems that everyone thinks I SHOULD get her a pony. Don't get me wrong. I think ponies are cute animals and I would not mind having one around, but I think my daughter will be the laughing stock of the town. I mean, who really has ponies or horses anymore, aside from those in the horse business. It's like ponies went out of fashion in the early part of the 1900s when people got cars. Today thinks like Ipods and video games are the rage. Why can't my daughter be normal like all the other kids? I don't know. I want her to be happy more than anything, but a pony? That's so stupid for a kid in 2006.
B OK; I'll assume for a moment that you're not just trolling. Firstly: plenty of people who are not in the horse business have horses. Kids, teenagers, adults. For pleasure riding, sport, just as a pet, you name it. Secondly: plenty of kids (girls especially) want a pony. It's definitely not just your daughter. She's perfectly normal. Thirdly: should your daughter really get a pony of her own, she'd be more like the envy of half her school (assuming everyone else haven't got ponies themselves already).
In Example (4), A sets a patronising tone by describing ponies as cute animals and equestrianism as an outdated, derisible pastime that her daughter is abnormal and stupid for taking an interest in. The offensiveness is arguably exacerbated when A demonstrates a high degree of general ignorance about horses, and seems more concerned with superficial issues such as her neighbours' opinions than with her daughter's happiness. In maligning equestrianism, A puts forward a controversial and, logic would suggest, minority opinion (in this forum) that is likely to meet with strong disagreement. In short, the success of this (perceived) strategy rests on aggrieving the group on a subject that they have invested face in, in a way that is plausibly deniable.
Further variations of antagonism involve posing as a user, usually a new or inexperienced individual, with a pseudo-naïve question or scenario that implies preventable or deliberate stupidity, cruelty, neglect, or similar:
Example (5) [RE050802]
A Vet came out today 2 days ago she was trotting in her bare small paddock, yesterday and today she is lame again. Vets says she has inflamation of the white line on both front feet, but couldn't see an abcess. She has 1 week to improve or she will be having an x-ray. Two shots will be taken, one for pedal bone rotation and one to see if she has a pedal bone fracture. In the meantime she is to saty on 1 sachet of bute per day and to keep treating her as a lamanatic.
B Delaying for a WEEK to take x-rays and start treatment for pedal bone rotation is a very very bad idea. If there is rotation, it needs to be diagnosed and treated immediately. The longer you wait the worse the damage becomes and the greater the chance that you will NOT solve the problem and the horse will be lame for life. If you suggested wait and see when the vet wanted to x-ray immediately, then you are failing in your obligations to this horse. If the vet suggested wait and see instead of let's take x-rays today, then you need a better vet.
C I'm sorry, but I'm beginning to think you are a troll? I mean… come on. How many times are you going to post *again* that you are waiting to take an xray? Just to get everyone's hackles up? If you're not a troll, x-rays aren't that expensive so call a vet with a brain in their head and get some taken. If you are a troll… I'm sure you'd never admit it. <sigh> If there even is a *real* pony in all of this, I feel very sorry for it.
A I am not a troll I am only doing what my vet has advised me to do.
D Keep on waiting and you're likely to kill the pony. Get your x-rays while you still have a chance.
In this case, A presents a scenario in which her horse is at high risk of suffering from a debilitating and potentially fatal lameness, yet despite the many exasperated users who are urging her to seek (better) medical care for her pony, A defends her inaction by abrogating responsibility onto her vet (Culpeper, Bousfield, and Wichmann 2003, 1,565).
The success of the pseudo-naïve scenario strategy rests on the difficulty of distinguishing genuine cries for help from sophisticated trollers, thereby placing conscientious members in a moral dilemma. If they ignore or rebuff posts asking for help, and S is sincere, the group may be (unintentionally) allowing a preventable problem to occur or continue, but if they assist and S is trolling, they may waste considerable time, effort, and emotional investment on an interaction that damages the group's harmony.
3.4 Endanger
The endangerment trolling strategy most clearly links with Donath's (1999, 30) and Utz's (2005, 50) pseudo-naïve troller. This (perceived) strategy may combine antipathy (see 3.3) with shock (see 3.5) via intentionally disseminating poor advice, presenting a dangerous example for others to follow, giving out incorrect information under the guise of being innocently unaware that the information is wrong, and so on. In other words, this (perceived) strategy is designed to masquerade as help or advice whilst actually causing harm and/or forcing others to respond to prevent harm:
Example (6) [RE061122]
B The real problem is that we have to keep an eye on these trolls and be sure to post the appropriate followups to their incredibly stupid activity suggestions to avoid leaving others with the impression that these activities aren't actually incredibly […]
C Last I heard, the proper response to a troll was to stop feeding it.
B That may work to stop the troll from posting but it does nothing to help stop others from mistakenly assuming that the advise is good advise, since it wasn't […]
As illustrated by the annoyance of the users in Example (6), this (perceived) strategy gives a troller a method of continually forcing the group to re-engage with her, since, when dealing with (perceived) endangerment strategies, members are caught between the personal convenience of ignoring or blocking pseudo-naïve posts, and the social responsibility and moral obligation of protecting inexperienced members who may follow the advice or example and hurt themselves or others.
3.5 Shock
A classic and even hackneyed (perceived) strategy was to be insensitive or explicit about a sensitive or taboo topic such as religion, death, politics, human rights, animal welfare, and so forth. Recently, online users and the media have coined terms such as RIP trolling and memorial trolling, which involve leaving offensive content, such as jokes, on a deceased person's memorial site (BBC 2010; Camber and Neville 2011).
In RE/SF, this (perceived) strategy occurred more broadly in that it could constitute an inappropriately thoughtless or hurtful response to any sensitive, upsetting, or emotional situation:
Example (7) [RE070320]
B On the last blustery cold snowy day of this month, all of my horses were tearing around the paddock in the snow.. underneath that snow was a fresh layer of ice which had formed due to a freeze thaw situation. All was fine for the AM feed everybody standing bright eyed and moving sound on four legs. When I went down the hill for the evening feed, I noticed Little Boy, my six year old arab stallion was standing on three legs. He had snapped his right hind leg mid shannon bone and it was blowing side to side in the stiff wind. I drove the truck at breakneck speed back up and told my husband to load up the rifle. Within fifteen minutes of my discovery, Little Boy's suffering was over. We put him down in the paddock with the other horses present and left him there overnight.
A So he had been left mortally injured and in shock all day then?
C [To A] Such a nasty little man.
D [To C] He really is a prick, isn't he?! Too bad he won't just go away.
E A!! I have never answered one of your posts, but that was totally uncalled for and the nastiest thing to say, regardless of how you may, or may not have been, treated by this group in the past/present. I was shocked to my bones by such a remark which I find completely out of place. It shows more of your personality than I ever care to know. An apology is in order.
A [To C] Bite me, Weeny girl. I am here to break a few mares.
F I'm glad he wrote that. Now the few who still thought he had some redeeming features, and were responding to him conversationally, can see what a disgusting person he really is. To take a swipe at someone who just lost her beloved horse is unconscionable. He has proven once and for all that he is nothing but a troll.
G [To A] You are hereby killfiled by me forever and ever Amen!!
H Yes. I hope J, K and others who have stood up for him can recognize him for the troll that he is, now, and ignore him.
B [To A] Twirp; actually nasty ex divorce-lawyer twirp. There: I said it: and I promised I wouldn't use foul language. Bang goes another resolution. Perhaps, if I only get down to your level, you will finally kill file me then we shall both live happier, more fulfilling lives, don't you think?
H As I said J, K? Ilya icta est.
As in memorial trolling proper, in Example (7), A responds to a distressing topic that typically expects tact by implying not only that B was negligent for having left her horse, but that she was also potentially deliberately cruel for having done so all day. This triggers angry responses from C, D, E, F, G, and H, who all come to B's defence. Of course, A could have been unaware of the group's relevant norms (Graham 2007, 743), or he may have misjudged how his post might be interpreted, but his next response is unrepentant.
Importantly for this paper, F states that A has proven once and for all that he is nothing but a troll. Likewise, H suggests that others can now recognize him for the troll that he is. In other words, both F and H suggest that this (perceived) trolling strategy has little plausible deniability. A would struggle to convince the group that he has been misunderstood, or that he was joking. This suggests, then, that shock trolling is more on-record than the (perceived) strategies dealt with above.
Logic suggests that if the (apparent) intent to troll is so obvious, it becomes self-defeating, and the group will deny the (alleged) troller the arguments he (supposedly) seeks. Despite this, however, shock trolling appears to succeed, primarily due to the strength of feeling provoked by the deliberately personal and extraordinarily hurtful nature of A's insensitivity. It seems that shock trolling can trigger a desire to retaliate that is stronger than the desire to deny S the satisfaction of a response.
3.6 Aggress
The final (perceived) strategy identified in RE/SF, and the one that is most typical of early media reports on trolling, involves openly and deliberately aggressing H, without any clear justification, and with the aim of antagonising H into retaliating:
Example (8) [RE 070324]
A I have plenty of manners, I just prefer to save them for use in plaes other than this NG [newsgroup]. It amuses me to piss you off, thus I forego the manners in favor of biting words.
B That would make you a troll. How special.
This (perceived) strategy has close ties with memorial or shock trolling (see 3.5) and most closely exemplifies the academic literature with regards to malicious im/politeness (Beebe 1995, 159; Culpeper, Bousfield, and Wichmann 2003, 1,546; Goffman 1967, 14). However, unlike the other (perceived) strategies dealt with above, this was a relatively unusual strategy in RE/SF. One possible explanation may be that it was typically unsuccessful in generating responses, perhaps because overt, unprompted, and unjustified aggression is difficult to characterise as anything other than trouble-causing. One cannot (easily) pass this behaviour off as a joke or a misunderstanding. Such an apparently on-record strategy makes it easy for Hs to interpret the behaviour with a high degree of confidence, and therefore deny the (alleged) troller the responses they think she seeks. Overt aggression also
12. I use aggress as defined by the OED (2012): Aggress (v.) 2. intr. To make an attack; to set upon; to commit the first act of violence; to begin the quarrel (Johnson).
lacks the deeply personal offensiveness (as in shock trolling) that might otherwise make someone respond despite knowing that this is what this strategy seeks.
4. Discussing trolling
The (perceived) strategies identified in RE/SF suggest a global-level cline, based primarily on deception of intention. This ranges from the (perceived) use of strategies that are as secretive and implicit as possible, where the would-be troller can plausibly deny the ascribed intent, through to the (perceived) use of strategies that are as obvious and explicit as possible, where the would-be troller cannot plausibly deny the ascribed intent. For ease, I have named each pole on this cline as covert and overt. As I have emphasised throughout and discuss further (see 4.2), however, these are all strategies that users have identified, and therefore may not fully reflect actual trolling strategies.
The more covert trolling strategies are least well described by existing impoliteness definitions, primarily due to the extensive use of deception in this behaviour, which extant impoliteness research mostly ignores (see Hardaker 2010 for more on this). With regards to the small body of trolling literature that does exist, however, the covert superstrategy is perhaps best accounted for by Donath's (1999) and Dahlberg's (2001) definitions, in which the individual masquerades as a sincere participant (Donath 1999, 45), and, once accepted within the group, uses off-record strategies to cause as much disruption as possible whilst trying to plausibly deny, or entirely conceal, her real intentions (Dahlberg 2001).
Covert strategies appear to be characterised by affect-, friendship-, and/or trust-based approaches. According to users, S wants H to believe that she is sincere, and does not want H to recognise that she intends to cause trouble. In other words, covert strategies are deception heavy, and the manipulation of the context or users is (or aims to be) carefully disguised. This manipulation might take the form of accidentally breaching group norms (see Example (1) and Example (2), 3.1), being (hypo)critical (see Example (3), 3.2), presenting an inflammatory or anxiety-inducing scenario (see Example (4) and Example (5), 3.3), or exploiting moral obligations (see Example (6), 3.4). At the most covert end of the cline, an (alleged) troller may simply appear to be an innocent bystander to, or even a victim of, arguments that she did not trigger (consider Examples 1–5). Strategies from the middle of the covert-overt cline may incorporate behaviour such as aggression, where this can be made to look justified, e.g. as a counter-attack (consider Example (6)).
Strategies from the overt end of the cline become more heavily characterised by aggression- and shock-based behaviours that are increasingly difficult for S to
plausibly deny as trolling. In other words, these strategies are deception light, though not necessarily deception-free. Similarly, the corresponding manipulation is likely to be obvious, and can take the form of causing outrage (see Example (7), 3.5), and being abusive or antagonistic (see Example (8), 3.6), without any attempt to justify the behaviour.
Overt trolling strategies are well captured by media accounts which describe this behaviour as ludicrous rants, inane threadjackings, personal insults, and abusive language (Naraine 2007, 146), as disrupting the group by insulting and provoking members (Brandel 2007, 32), and as a type of schadenfreude (Heffernan 2008). These same strategies are also reasonably well accounted for by existing academic definitions of genuine impoliteness, malicious impoliteness, strategic rudeness, and so forth, where S attacks H with the intention of not only hurting H, but of making H aware that she meant to hurt him (Beebe 1995, 159; Culpeper, Bousfield, and Wichmann 2003, 1,546; Goffman 1967, 14). Ultimately, however, wherever the strategy falls on the covert-overt cline, the perceived aim, as with all trolling, is to trigger or exacerbate disruption, annoyance, arguments, outrage, and so forth in the group. This therefore means, as discussed in 1.2, that my earlier definition of trolling (Hardaker 2010) needs amending.
4.1 Redefining trolling
The earlier definition that I proposed (see Hardaker 2010, 237–8) was insufficient to account for all of the (perceived) trolling strategies found in RE/SF. In particular, it did not capture the overt trolling strategies where the (perceived) intent was difficult to plausibly deny. I therefore propose the following fuller, though still working, definition of trolling, which is primarily based on H-perception of this behaviour:

Trolling is the deliberate (perceived) use of impoliteness/aggression, deception and/or manipulation in CMC to create a context conducive to triggering or antagonising conflict, typically for amusement's sake.

Trolling may be perceived as being more or less covert.

H may assess S's behaviour as covert trolling if he thinks that she is using covert strategies to construct an online identity that is (mostly) inconsistent with her desired outcome (e.g. she may use a surreptitiously manipulative, trust-based approach that leaves more reasonably defensible explanations for her behaviour than just an intention to troll).
H may assess S's behaviour as overt trolling if he thinks that she is using overt strategies to construct an online identity that is (mostly) consistent with her desired outcome (e.g. she may use an explicitly manipulative, aggression-based approach that leaves no other reasonably defensible explanation for her behaviour than an intention to troll).
This newer definition strives to account for the fact that (perceived) trolling strategies appear to fall along a cline from covertly antagonistic (as the earlier definition suggested) through to overtly antagonistic. These (perceived) strategies could be envisioned roughly as follows:

Covert <---------------------------------------------------> Overt
    Digress          Antipathise          Shock
        (Hypo)criticise        Endanger          Aggress

Figure 1. Scale of (perceived) trolling strategies
As Figure 1 shows, along this cline, specific strategies include: (1) digressing from the topic at hand, especially onto sensitive topics; (2) criticising, especially for a fault that the critic then displays herself; (3) antipathising, by taking up an alienating position, asking pseudo-naïve questions, etc.; (4) endangering others by giving dangerous advice, encouraging risky behaviour, etc.; (5) shocking others by being insensitive about sensitive topics, explicit about taboo topics, etc.; and (6) aggressing others by insulting, threatening, or otherwise plainly attacking them without (adequate) provocation.
This cline should not be taken as an absolute representation of precisely how these actual strategies occur, but instead as a general depiction of how reasonably defensible these strategies appear to be against accusations of trolling. In other words, as throughout this whole paper, this cline does not depict what S actually intends or does. Instead, it focuses on what H claims to think S intended, and how incontrovertibly H can prove S's (supposed) guilt. For example, at the covert end of the cline, all else being equal, defending behaviour such as digression as something other than trolling should be relatively easy: one can claim many reasons for not staying on-topic. At the overt end of the cline, however, it is far more difficult to defend unprovoked aggression as something other than trolling.
Of course, as a general representation of strategies, this cline cannot take into account specific contextual nuances, exceptions, and individual variation, but that is not its point. Moreover, whilst the amended definition and taxonomy of strategies attempt to provide a fuller account of trolling, there is still a long way to go. A few of the issues are summarised below.
4.2 Future developments
Perhaps the first and most serious consideration is that, due to a combination of the methodological basis and the phenomenon under study, this work is primarily based on examples that Hs claim to perceive as trolling. However, those Hs could be lying about their interpretations (e.g. H might accuse someone he dislikes of trolling simply to get her banned from the group). Alternatively, he might wrongly accuse one member of trolling, and fail to recognise that another member has been carrying out highly skilled, covert trolling for years. In short, Hs may unwittingly attack the innocent and defend the guilty.
H interpretation is certainly important. Indeed, it is used as an implicit measure in some UK linguistic aggression statutes (e.g. the Communications Act 2003; the Defamation Act 1996; the Malicious Communications Act 1988; and the Protection From Harassment Act 1997). However, as demonstrated at length, interpretation alone cannot constitute the only proof that someone is trolling.
A second consideration, which is partly affected by the first, is the value of quantitatively processing these strategies. As already discussed, H can profess mistaken or dishonest interpretations. Additionally, some examples combined multiple strategies. For instance, Example (1) arguably demonstrates digression, criticism, antipathy, an attempt at shocking, and aggression; Example (3) contains digression and hypocriticism; and Example (4) combines criticism with antipathy. Users do not always identify every specific grievance. In fact, on occasion, they do not identify any grievance at all, leaving the analyst to try to decide what might have triggered H's accusation of trolling.
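The multi-label problem described above can be made concrete with a small sketch. This is an invented illustration, not the study's coding scheme: the annotations below simply mirror the strategy combinations just listed for Examples (1), (3), and (4), and show why a naive frequency count is misleading when one accusation carries several labels.

```python
from collections import Counter

# Hypothetical multi-label annotations, mirroring the strategy
# combinations the paper identifies in three of its examples.
examples = {
    "Example 1": {"digress", "criticise", "antipathise", "shock", "aggress"},
    "Example 3": {"digress", "hypocriticise"},
    "Example 4": {"criticise", "antipathise"},
}

# Tally every strategy label across all annotated examples.
tally = Counter(label for labels in examples.values() for label in labels)

# Labels outnumber examples (9 vs. 3), so per-strategy frequencies
# cannot simply be read as counts of trolling incidents.
print(sum(tally.values()), len(examples))  # 9 3
```

The design point is that any quantitative treatment must decide in advance whether the unit of analysis is the accusation or the strategy, since the two counts diverge as soon as strategies co-occur.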
Thirdly, trolling as a term is as contextually bound and relative as a term such as impoliteness. Just as when discussing impoliteness, we cannot discount the issue of group norms of acceptable behaviour (Hetcher 2004; Opp 1982; 2001). The same behaviour may be deemed in one group as hurtful, malicious trolling, and in another as in-group entertainment (Donath 1999, 47). This takes us into the issue, which I touch on only briefly in this paper, of the co-construction of identities and the effects that this has on interaction (Haugh 2010). In particular, as stressed throughout this work, H may perceive trolling where, in fact, none was intended. Should we still treat this as a case of trolling? My view is that we should not, and
13. Notably, the Communications Act (2003), which deals with CMC in §127, was enacted prior to the peak of major social networks. Further, these Acts typically work on the basis of a reasonable person: a common law concept of a decontextualised, normative, objective fiction whose knowledge, behaviours, and beliefs represent an idealised standard against which others are measured. Much research already shows, however, that assessments of behaviour are highly contextually dependent. What may be admirable in one context may be highly offensive in another, even if carried out by the same person, in the same place, before the same company.
that a fuller definition should strive to incorporate how both H and the (alleged) troller jointly construct, challenge, and negotiate their own, and each other's, identities.
Fourthly, as an extension of the above, the strategies presented here do not begin to constitute an exhaustive list. Indeed, Usenet newsgroups even on the same topic can contrast markedly in their norms and limits, so it is dangerous to assume that all interaction within even one type of CMC will be homogenous. CMC, by its very nature, is enormous, complex, varied, and quickly evolving. As such, it is more likely than not that analysis of further data will reveal more strategies than are presented here. These few examples alone, therefore, should not be taken to represent all (perceived) trolling strategies, or all types of CMC.
4.3 The great imponderable
In this data, at the macro-level, trolling principally comprises three ingredients: deception, manipulation, and aggression. Throughout this analysis, deception was an almost ubiquitous, defining ingredient of trolling, involving false identities, disingenuous intentions, and outright lies. Aggression was also important, but, unlike deception, it was not necessarily always produced by the (alleged) troller. Instead, she might use deception to covertly manipulate others into being aggressive. Only more rarely would the (alleged) troller use overt aggression to manipulate others into retaliating.
Aggression, impoliteness, conflict, and so forth are now all receiving increasing amounts of attention within impoliteness research; however, linguistic manifestations of manipulation, and more especially of deception, are drastically under-researched. Far more investigation is yet needed, both in the broader areas of linguistic manipulation and deception in general, and in more specific behaviours such as (cyber)bullying, trolling, (online) hoaxing, and so forth. I hope that this paper has taken a very small step in that direction.
References

Andersson, Lynne, and Christine Pearson. 1999. Tit for Tat? The Spiraling Effect of Incivility in the Workplace. Academy of Management Review 24 (3): 452–471.
Aronson, Karin, and Bengt Rundström. 1989. Cats, Dogs, and Sweets in the Clinical Negotiation of Reality: On Politeness and Coherence in Pediatric Discourse. Language in Society 18:
Arundale, Robert B. 2008. Against (Gricean) Intentions at the Heart of Human Interaction. Intercultural Pragmatics 5 (2): 229–258.
Avgerinakou, Anthi. 2008. Contextual Factors of Flaming in Computer-Mediated Communication. Heriot-Watt University.
Baker, Paul. 2001. Moral Panic and Alternative Identity Construction in Usenet. Journal of Computer-Mediated Communication 7 (1): ( html 08/12/09).
Barron, Anne. 2006. Understanding Spam: A Macro-Textual Analysis. Journal of Pragmatics 38: 880–904.
BBC. 2010. Jade Goody Website Troll from Manchester Jailed. BBC News October 29th (http:// 12/11/10).
Beebe, Leslie M. 1995. Polite Fictions: Instrumental Rudeness as Pragmatic Competence. In Linguistics and the Education of Language Teachers: Ethnolinguistic, Psycholinguistic and Sociolinguistic Aspects. Georgetown University Round Table on Languages and Linguistics, ed. by James E. E. Alatis, Carolyn A. Straehle, Brent Gallenberger and Maggie Ronkin, 154–168. Georgetown: Georgetown University Press.
Bernstein, Michael S., Andrés Monroy-Hernández, Drew Harry, Paul André, Katrina Panovich, and Greg Vargas. 2011. 4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community. Association for the Advancement of Artificial Intelligence: 1–8.
Binns, Amy. 2011. Don't Feed the Trolls: Managing Troublemakers in Magazines' Online Communities. Mapping the Magazine 3 (Cardiff University).
Black, Lisa. 2006. It's a Troll's Life for Some: Online Games Raise Addiction Concerns. Chicago Tribune November (30th): 1.
Bocij, Paul. 2004. Cyberstalking: Harassment in the Internet Age and How to Protect Your Family. Westport, USA: Praeger.
Bousfield, Derek. 2008. Impoliteness in Interaction. Philadelphia and Amsterdam: John Benjamins.
Brandel, Mary. 2007. Blog Trolls and Cyberstalkers: How to Beat Them. Computerworld May 28: 32.
Brown, Penelope, and Stephen C. Levinson. 1987. Politeness: Some Universals in Language Use. Cambridge: Cambridge University Press. Original edition, 1978.
Camber, Rebecca, and Simon Neville. 2011. Sick Internet Troll Who Posted Vile Messages and Videos Taunting the Death of Teenagers Is Jailed for 18 Weeks. Daily Mail September 14th (internet-troll-Sean-Duffy-jailed.html 02/12/11).
Cook, Vivian J. 2004. Accomodating Brocolli in the Cemetary: Or Why Can't Anybody Spell? London: Profile.
Cox, Ana Marie. 2006. Making Mischief on the Web. Time December 16th (,9171,1570701,00.html 03/02/10).
Culpeper, Jonathan. 1996. Towards an Anatomy of Impoliteness. Journal of Pragmatics 25:
Culpeper, Jonathan, Derek Bousfield, and Anne Wichmann. 2003. Impoliteness Revisited: With Special Reference to Dynamic and Prosodic Aspects. Journal of Pragmatics 35: 1545–1579.
Dahlberg, Lincoln. 2001. Computer-Mediated Communication and the Public Sphere: A Critical Analysis. Journal of Computer-Mediated Communication 7 (1): ( 22/08/10).
December, John. 1997. Notes on Defining Computer-Mediated Communication. CMC Magazine January ( 19/07/08).
Donath, Judith S. 1999. Identity and Deception in the Virtual Community. In Communities in Cyberspace, ed. by Marc A. Smith and Peter Kollock, 29–59. London: Routledge.
Douglas, Karen M., and Craig McGarty. 2001. Identifiability and Self-Presentation: Computer-Mediated Communication and Intergroup Interaction. British Journal of Social Psychology 40: 399–416.
Doyle, Jennifer. 1989. Re: <Hick!>. alt.callahans 14th December:
Dwyer, Catherine, Starr Roxanne Hiltz, and Katia Passerini. 2007. Trust and Privacy Concern within Social Networking Sites: A Comparison of Facebook and MySpace. Proceedings of the Thirteenth Americas Conference on Information Systems Keystone, Colorado: August
Facebook. 2010. Statement of Rights and Responsibilities. Facebook ( 13/05/10).
Felson, Richard B. 1982. Impression Management and the Escalation of Aggression and Violence. Social Psychology Quarterly 45 (4): 245–254.
Ferris, Pixy. 1997. What Is CMC? An Overview of Scholarly Definitions. CMC Magazine January (Retrieved 19th July 2008):
Gibbs, Raymond W. 2001. Intentions as Emergent Products of Social Interactions. In Intentions and Intentionality, ed. by Bertram Malle, Louis Moses and Dare Baldwin, 105–122. Cambridge, Mass: MIT Press.
Goffman, Erving. 1967. Interaction Ritual: Essays on Face-to-Face Behavior. Allen Lane: The Penguin Press.
Golder, Scott A., and Judith S. Donath. 2004. Social Roles in Electronic Communities. In Association of Internet Researchers (AoIR) Conference: Internet Research 5.0, 1–25. Brighton, England: 19–22 September.
Graham, Sage Lambert. 2007. Disagreeing to Agree: Conflict, (Im)Politeness and Identity in a Computer-Mediated Community. Journal of Pragmatics 39: 742–759.
Graham, Sage Lambert. 2008. A Manual for (Im)Politeness?: The Impact of the FAQ in Electronic Communities of Practice. In Impoliteness in Language: Studies on Its Interplay with Power in Theory and Practice, ed. by Derek Bousfield and Miriam A. Locher, 324–352. Berlin and New York: Mouton de Gruyter.
Grimshaw, Allen D., ed. 1990. Conflict Talk: Sociolinguistic Investigations of Arguments in Conversations. Cambridge: Cambridge University Press.
Hardaker, Claire. 2010. Trolling in Asynchronous Computer-Mediated Communication: From User Discussions to Academic Definitions. Journal of Politeness Research. Language, Behaviour, Culture 6 (2): 215–242.
Haugh, Michael. 2008. Intention in Pragmatics. Intercultural Pragmatics 5 (2): 99–110.
Haugh, Michael. 2010. When Is an Email Really Offensive?: Argumentativity and Variability in Evaluations of Impoliteness. Journal of Politeness Research 6: 7–31.
Heffernan, Virginia. 2008. Trolling for Ethics: Mattathias Schwartz's Awesome Piece on Internet Poltergeists. New York Times July 31st.
Herring, Susan C. 1994. Politeness in Computer Culture: Why Women Thank and Men Flame. Paper read at Cultural Performances: Proceedings of The First Berkeley Women and Language Conference, at California.
Herring, Susan C. 2003. Computer-Mediated Discourse. In The Handbook of Discourse Analysis, ed. by Deborah Schiffrin, Deborah Tannen and Heidi E. Hamilton. Oxford: Blackwell.
Herring, Susan C., Kirk Job-Sluder, Rebecca Scheckler, and Sasha Barab. 2002. Searching for Safety Online: Managing Trolling in a Feminist Forum. The Information Society 18:
Hetcher, Steven A. 2004. Norms in a Wired World. Cambridge: Cambridge University Press.
Johnson, Norman, Randolph Cooper, and Wynne Chin. 2008. The Effect of Flaming on Computer-Mediated Negotiations. European Journal of Information Systems 17 (4): 417
Lampe, Cliff, and Paul Resnick. 2004. Slash(Dot) and Burn: Distributed Moderation in a Large Online Conversation Space. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves New York, USA: ACM: 543–550.
Locher, Miriam A. 2004. Power and Politeness in Action: Disagreements in Oral Communication. Edited by Monica Heller and Richard J. Watts, Language, Power and Social Processes. Berlin and New York: Mouton de Gruyter.
Maddox, Thomas. 1989. Re: Cyberspace Conference. alt.cyberpunk 22nd October: https://
Mauney, Jon. 1982. Second Verse, Same as the First. net.nlang 05th July:
McLaughlin, Margaret L., Kerry K. Osborne, and Christine B. Smith. 1995. Standards of Conduct on Usenet. In Cybersociety: Computer-Mediated Communication and Community, ed. by Steve Jones, 90–111. Thousand Oaks, CA; London: Sage Publications.
Miller, Mark. 1990. Foadtad. alt.flame 8th February:
Mills, Sara. 2003. Gender and Politeness. Cambridge: Cambridge University Press.
Mooney, Annabelle. 2004. Co-Operation, Violations and Making Sense. Journal of Pragmatics 33: 1601–1623.
Morris, Steve. 2011. Internet Troll Jailed after Mocking Deaths of Teenagers. Guardian September 13th (
Moulitsas, Markos. 2008. Ignore Concern Trolls. The Hill January 09th ( kos-moulitsas/dems-ignore-concern-trolls-2008-01-09.html 06/06/09).
Naraine, Ryan. 2007. The 10 Biggest Web Annoyances. December: 141–148.
O'Keefe, Barbara J. 1989. Communication Theory and Practical Knowledge. In Rethinking Communication, ed. by Brenda Dervin, 197–215. Newbury Park: Sage.
Opp, Karl-Dieter. 1982. The Evolutionary Emergence of Norms. British Journal of Social Psychology 21: 139–149.
Opp, Karl-Dieter. 2001. How Do Social Norms Emerge? An Outline of a Theory. Mind and Society 2: 101–128.
Plato. 2007. The Republic. (3rd edn.) London: Penguin Classics.
Rozakis, Laurie E. 2003. The Complete Idiot's Guide to Grammar and Style. (2nd edn.). US: Alpha.
Schwartz, Mattathias. 2008. Malwebolence: The World of Web Trolling, the Trolls among Us. The New York Times August (03rd): MM24.
Scott, Mike. 2009. WordSmith Tools. Liverpool: Lexical Analysis Software.
Shachaf, Pnina, and Noriko Hara. 2010. Beyond Vandalism: Wikipedia Trolls. Journal of Information Science 36 (3): 357–370.
Shea, Virginia. 1994. Netiquette. San Francisco, CA: Albion Books.
Shin, Jiwon. 2008. Morality and Internet Behavior: A Study of the Internet Troll and Its Relation with Morality on the Internet. In Proceedings of Society for Information Technology and Teacher Education International Conference 2008, ed. by Karen McFerrin, Roberta Weber, Roger Carlsen and Dee Anna Willis, 2834–2840. Chesapeake, VA: AACE.
Sia, Choon-Ling, Bernard C. Y. Tan, and Kwok-Kee Wei. 2002. Group Polarization and Computer-Mediated Communication: Effects of Communication Cues, Social Presence, and Anonymity. Information Systems Research 13 (1): 70–90.
Siegel, Jane, Vitaly J. Dubrovsky, Sara Kiesler, and Timothy W. McGuire. 1986. Group Processes in Computer-Mediated Communication. Organizational Behaviour and Human Decision Processes 37: 157–187.
Stivale, Charles J. 1997. Spam: Heteroglossia and Harassment in Cyberspace. In Internet Culture, ed. by David Porter, 133–144. New York: Hampton Press.
Strom, Paris S., and Robert D. Strom. 2005. When Teens Turn Cyberbullies. The Education Digest December: 35–41.
Suler, John. 2004. The Online Disinhibition Effect. CyberPsychology & Behavior 7 (3): 321–326.
Tepper, Michele. 1997. Usenet Communities and the Cultural Politics of Information. In Internet Culture, ed. by David Porter, 39–54. New York: Routledge.
Thompson, Clive. 2009. Clive Thompson on the Taming of Comment Trolls. Wired Magazine March 23rd (
Topçu, Çiğdem, Özgür Erdur-Baker, and Yeşim Çapa-Aydın. 2008. Examination of Cyberbullying Experiences among Turkish Students from Different School Types. CyberPsychology & Behavior 11 (6).
Truss, Lynne. 2003. Eats, Shoots and Leaves: The Zero Tolerance Approach to Punctuation. Profile:
Turner, Tammara Combs, Marc A. Smith, Danyel Fisher, and Howard T. Welser. 2005. Picturing Usenet: Mapping Computer-Mediated Collective Action. Journal of Computer-Mediated Communication 10 (4): ( 05/01/11).
Unknown. 1996. The Trollers FAQ. (
Utz, Sonja. 2005. Types of Deception and Underlying Motivation: What People Think. Social Science Computer Review 23 (1): 49–56.
Wasko, Molly McClure, and Samer Faraj. 2005. Why Should I Share? Examining Social Capital and Knowledge Contribution in Electronic Networks of Practice. MIS Quarterly 29 (1):
Watts, Richard J. 2003. Politeness. Cambridge: Cambridge University Press.