
COMM 3715 - Internet Policy

Essay 1
Student I.D. -200727571
November 2015
Word count: 2,550
To what extent is freedom of speech regulated online and do such current
measures serve in the best interests of society?

According to Article 19 of the Universal Declaration of Human Rights, everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers (United Nations, 1948). With this in mind, the aim of this essay is to critically evaluate the current regulatory responses to concerns raised over our right to freedom of speech within the networked realm. By focusing on the country-specific difficulties in addressing acts of speech, the role of intermediaries in the governance of the internet, and the increasing pressures placed upon those intermediaries by states and questionable organisations, this essay concludes that such measures pose a particularly negative outlook for an individual's right to freedom of speech, and it therefore suggests the need for revision of such regulatory responses.
In ascertaining whether current regulatory measures concerning our right to freedom of speech truly serve the best interests of society, it is important to recognise that hate speech poses a considerable dilemma when dealing with content in Europe and the United States. Hate speech is defined as any message that promotes hatred on the basis of race, religion, gender, sexuality, ethnicity, or national origin (Boyle, 2001:53).
In most western European countries, the adoption of hate speech laws is commonplace, and the logic behind doing so lies in the intention of protecting the dignity of inhabitants, ensuring that their identity, self-worth, and social and political inclusion are upheld. Governments that punish those who bring undue harm onto others are committed to the ideals of egalitarianism (Delgado and Stefancic, 1997), and in one sense can be congratulated for their efforts to ensure that the voices of those most susceptible to being targets of hate speech are not silenced and that their participation in broader public debates remains intact (Vick, 2005). Germany's Constitutional Court, for instance, and the UK under the Racial and Religious Hatred Act 2006, prosecute those found guilty of intent to incite racial hatred. While such measures are justifiable, they also come directly at a cost to an individual's right to freedom of speech; so much so that when hate speech laws adopted by national governments have been challenged on free speech grounds in international tribunals, these challenges have usually failed (Vick, 2005:45). The efficacy of hate speech laws imposed by governments is also drawn into question: although by design they exist to protect the most marginalised citizens, it is often the case that those they serve to protect are the most likely to be drawn to extremist ideologies (Vick, 2005). Most alarming, perhaps, is the chilling effect hate speech laws might pose for all forms of controversial expression, begging the question of when and where we draw the line, and at what point a government goes too far in the censorship of material, mirroring something more like an authoritarian state. On the contrary, there is
weight in the argument that policy concerning acts of free speech in the United States might be viewed as more beneficial to citizens and to society at large. For instance, online speech that advocates racial violence but does not actively incite it will be protected in the US, unlike in the UK. This is because of the paramount importance of the First Amendment of the Constitution in protecting a US citizen's autonomous right to free expression, even in cases deemed particularly controversial by the global community. Through this laissez-faire approach to acts of speech seen as contrary to public opinion, the USA affords an equal space in which reason and truth can emerge triumphant through a free trade in ideas, and in which statements regarded as false can be vigorously rebutted instead of sanctioned by the state (Schauer, 1982; Vick, 2005). In this way, citizens are granted equality regardless of how obscene or immoral their words might sound. Consider the use of an online public forum such as Reddit, wherein a user may post homophobic comments about another person. Public denunciation of such hateful speech serves a societal interest, as it allows the public to hear all voices in the debate, which is necessary in order to make an informed decision on what constitutes truth. If we were to convict those who challenge the norm, it may in effect only lead to intensified feelings of resentment and provide a cloak of martyrdom for those expressing pernicious views (Vick, 2005:51). Moreover, as one BBC article puts it succinctly, in the act of shutting down free discussion, we will never know who is right: the heretics, or those who try to silence them (BBC, 2015). Comparing legislation between the United States and Europe would in this case point to the US as the more favourable system when thinking of our right to freedom of speech.
While one might regard certain countries as politically, socially and culturally similar, this, as has been noted, is not always the case when determining the legal implications of disputes over particular instances of free speech. Nonetheless, there are common features involved in the process of content regulation that cross borders and should subsequently be taken into consideration when understanding how speech is regulated online and whether it operates in a way most beneficial to society. With this in mind, it is imperative to discuss the role of intermediaries in the regulation of content, specifically identifying the use of blocking and addressing the concerns that arise with it. Due to the decentralised and international nature of the internet, the process of regulation often lies with private intermediaries (Adler, 2011; Boyle, 1997; Swire, 1998), and in light of this, such intermediaries have taken steps to adequately meet the needs of internet users. One method of doing so is filtering or blocking, which refers to technologies which provide an automatic means of preventing access to or restricting distribution of particular information (McIntyre and Scott, 2008:109). Filtering out content may in one sense be seen as contributing to societal interest if we consider material such as child pornography, or indecent content that may fall into the hands of minors, neither of which has a place in society. Similarly, filtering mechanisms have proven beneficial in the curtailment of email spam as well as the prevention of copyright infringement. But the arguments weighted against filtering seem, by contrast, tenfold. The automatic nature of the technology used by Internet Service Providers (ISPs) to carry out filtering can be very troubling for notions of free speech
on the grounds that whole websites or domains can be blocked simply because a key word or phrase has been detected on a single page (McIntyre and Scott, 2008). Often in such cases we end up with a systematic process of over-blocking, where valuable and unrelated content is effectively no longer accessible, and this is alarmingly common (Villeneuve). In places such as the classroom, teachers have felt the brunt of over-blocking within schools, whereby access to learning resources has been hindered, raising questions over whether filtering actually impedes youth safety online rather than ensuring it (Adler, 2011). In essence, the problem with blocking by intermediaries for the notion of free speech is that the software is a very efficient mechanism for implementing rules, but not so good when it comes to standards (Grimmelmann 2005, quoted in McIntyre and Scott, 2008:116); meaning that it fails to distinguish where content is and is not permissible, such as what does and does not constitute fair use in copyright. Likewise, it is important to add that on top of issues surrounding the automaticity of the blocking technology, users lack any opportunity to give credible feedback or to challenge what has been blocked. Often users remain in the dark over why they cannot gain access to certain websites or why their voices have been silenced if, say, their personal blog has been restricted. In some instances, operators of content seen by the public as socially desirable, yet restricted by ISPs, find loopholes to maintain their presence on the internet. A good example here is the website The Pirate Bay, which frequently changes its domain name in order to evade the law. Similarly, academics have taken to Twitter as a means of obtaining research papers they would otherwise have been restricted from accessing. By posting the hashtag #icanhazPDF along with the research article or paper they require, they can invoke the help of other members of the community to send them what they need (BBC, 2015). Naturally, the sharing of knowledge for purposes of progress and advancement is of paramount importance on a global scale, as it affects everybody, and thus any measures that may seek to inhibit it pose a troubling thought and one that needs to be addressed. We might do well to remember the tireless efforts of Aaron Swartz, the American computer programmer who perhaps above all saw that knowledge should not be held in the hands of a few powerful corporations but belonged to everyone. His suicide in 2013 shook the cyber-world, was in many ways a great loss to the voices of free speech, and echoes the continued concerns of internet regulation today (Wikipedia, 2015).
Not only does the automatic nature of filtering mechanisms create problems for speech made online, but there has also been cause for concern over the lack of transparency from intermediaries employing filtering as a means of regulation. On the one hand, an absence of transparency may be deemed necessary in order to maintain a safe and protected society. For example, revealing the list of blocked child pornography sites censored by British Telecom's Cleanfeed system would simply advertise them further (McIntyre and Scott, 2008:119). But that is precisely the point: a lack of transparency may mean that internet users are unaware of what is being blocked (Lessig, 2006) and are therefore at the mercy of their internet service providers, who determine what is and is not seen. Arguably this is unjust, in that the user is denied any say in the matter and cannot consent to it. Intermediaries engage in this kind of behaviour because ISPs put their own commercial interests before the free speech rights of their users. Fear of costly fines for failing to comply with content takedown requests on time drives private organisations to value money over speech. Building on this, some ISPs have been known to sue those who attempt to make hidden lists of blocked websites public knowledge, again signifying an economic motive over one that might benefit notions of free speech. Moreover, according to Stone and Helft, social media sites may start restricting memberships for users in developing nations because they are finding it difficult to target ads to these users (Stone and Helft, 2009, cited in Zuckerman, 2010:81). What is also somewhat troubling is the fact that in both Europe and, under Section 230 of the Communications Decency Act 1996, in the United States, intermediaries are granted immunity from prosecution where illegal content is concerned, and yet they continue to voluntarily police the internet. Subsequently, the opaque nature of current methods of filtering remains a concern for a user's right to free speech and suggests revision is crucially needed.
As pointed out earlier, individual acts of speech can and will be subject to the law of the land, and although time has shown us a shift towards greater systems of self-regulation, a fully functioning common cyberlaw has yet to emerge. Until that time, it can be assured that governmental influence over the functionality of the internet will persist in one form or another. If the issues surrounding the motivations of private service providers are disconcerting enough, it must also be understood that these issues are extended further by the governmental pressures put on ISPs. One such case, highlighted by Adler (2011), is that of the Craigslist Killer which, through the intensified demands of Richard Blumenthal and a coalition of state Attorneys General (AGs), saw success in getting the website Craigslist to remove its erotic services category, over the belief that related advertising content was the means by which the killer was able to find his victim, a masseuse, and should therefore be removed since it would likely facilitate similar threats to others. In this instance, the operators of Craigslist succumbed to the weight placed upon them by government at the expense of those who rely on the site as a tool for making a living, who were ultimately silenced. So not only do intermediaries attempt to regulate content for their own private pursuits, but also to mitigate the risk of other potentially unforeseeable consequences that might occur further down the line. Some might argue that the greatest cause for concern lies where government influence over intermediaries is at its most ambiguous. For instance, if an intermediary takes it upon itself to regulate content yet adheres to the whispers of government unbeknownst to the general public, the legality of subsequent measures can be called into question. It can be seen presently that the effect of UK policy is to put in place a system of censorship of internet content, without any legislative underpinning, which would appear to be effectively insulated from judicial review (Akdeniz, 2004, cited in McIntyre and Scott, 2008:122). It is worth noting further the Internet Watch Foundation (IWF), an independently funded hotline for the removal of obscene and inappropriate content and URLs. With no clear legal status, it only raises questions over its accountability and authority to do what it does. Undeniably, a system of either unchecked government influence on intermediaries or the unsolicited potential for intermediaries to regulate the internet as they see fit is fundamentally counterproductive to the well-being of unsuspecting internet users and society at large.
As has been specified throughout this essay, it would be wrong to say that free speech online enjoys an obstacle-free environment in which to blossom; the opposite is arguably much closer to the truth. That is not to say that governments, intermediaries and charitable organisations do not do their best in the upkeep of this human right - it is simply overlooked, undercut or sidelined in favour of other rights and motives far too often, which by and large makes the current regulatory responses unsatisfactory. In order to serve society's right to freedom of speech, there must exist greater transparency in the cooperation between states and intermediaries, a relaxation of laws that might threaten to hinder speech, such as hate speech laws, and increased efforts to widen access to the internet as a communicative medium, so that the right to freedom of speech might be exercised to its fullest potential.
Bibliography
Adler, J. 2011. The Public's Burden in a Digital Age: Pressures on Intermediaries and the Privatization of Internet Censorship. Journal of Law and Policy. 20(1), pp. 231-265.
BBC, 2015. A Point of View: Why we should defend the right to be offensive. [Online]. [Accessed 9 November 2015]. Available from: http://www.bbc.co.uk/news/magazine-34613855
BBC, 2015. The scientists encouraging online privacy with a secret codeword. [Online]. [Accessed 9 November 2015]. Available from: http://www.bbc.co.uk/news/blogs-trending-34572462
Boyle, J. 1997. Foucault in Cyberspace: Surveillance, Sovereignty, and Hardwired Censors. University of Cincinnati Law Review. 66, p. 177.
Boyle, K. 2001. Hate Speech - the United States Versus the Rest of the World? Maine Law Review. 53, p. 487.
Delgado, R. and Stefancic, J. 1997. Must We Defend Nazis? New York: New York University Press.
Lessig, L. 2006. Code and Other Laws of Cyberspace. Cambridge, MA.
McIntyre, T.J. and Scott, C. 2008. Internet Filtering: Rhetoric, Legitimacy, Accountability, and Responsibility. In: Brownsword, R. and Yeung, K. (eds.). Regulating Technologies. Oxford: Hart Publishing, pp. 109-124.
Schauer, F. 1982. Free Speech. Cambridge: CUP, pp. 15-34; Marshall, W. 1995. In Defense of the Search for Truth as a First Amendment Justification. Georgia Law Review. 30, p. 1.
Swire, P. 1998. Of Elephants, Mice and Privacy: International Choice of Law and the Internet. [Online]. [Accessed 10 November 2015]. Available from: http://ssrn.com/abstract=121277
United Nations, 1948. Article 19 of the Universal Declaration of Human Rights. [Online]. [Accessed 7 November 2015]. Available from: http://www.un.org/en/universal-declaration-human-rights/
Vick, D. 2005. Regulating Hatred. In: Klang, M. and Murray, A. (eds.). Human Rights in the Digital Age. London: Glass House Press, pp. 41-53.
Wikipedia, 2015. Aaron Swartz. [Online]. [Accessed 9 November 2015]. Available from: https://en.wikipedia.org/wiki/Aaron_Swartz
Zuckerman, E. 2010. Intermediary Censorship. In: Deibert, R., Palfrey, J., Rohozinski, R. and Zittrain, J. (eds.). Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace. Cambridge, MA: MIT Press, pp. 71-85.
