Hate Speech, Censorship, and Freedom of Speech: The Changing Policies of Reddit

This paper examines the shift in content policies and user attitudes on the social media platform Reddit. We do this by focusing on comments from general Reddit users on five posts made by Reddit admins announcing updates to the Reddit Content Policy. All five concern what kind of content is allowed to be posted on Reddit, and which measures will be taken against content that violates these policies. We use topic modeling to probe how the general discourse of Redditors has changed around limitations on content and, later, limitations on hate speech, or speech that incites violence against a particular group. We show that there is a clear shift in both the contents and the user attitudes that can be linked to contemporary societal upheaval as well as newly passed laws and regulations, and we contribute to the wider discussion on hate speech moderation.


I. Introduction
The peculiarities of online hate speech as compared to traditional, offline hate speech have made it necessary to rapidly develop and implement laws regulating offensive speech online (Brown, 2018). The characteristics of online communities mean that it is much easier for fringe ideals to gain a foothold and for members of such communities to spend their online lives in echo chambers of hate (Shaw, 2011), often targeting a specific group of people based on some real or perceived characteristic such as race, sex, sexuality, religion, or political beliefs. The EU's 2016 Code on Countering Illegal Hate Speech Online (the Code) introduced voluntary legislation that saw Twitter, Google, Facebook, and Microsoft commit to regulating hateful speech, or speech that incites violence, on their platforms. By 2020, each platform had stringent guidelines and flagging systems for removing reported hate speech (Aswad, 2016). Anu Bradford (2019) would attribute this shift in practices to what she defined as "the Brussels Effect": the regulatory power of the EU to change standards in certain markets through its institutional capacity, its willingness to create stringent rules, the relative size of the European market, and its focus as a multilateral organization on inelastic targets and non-divisible processes; that is, its regulations target aspects of the market that do not relocate, unlike capital or profit, and that cannot easily be divided into separate processes to meet different standards of production.
In terms of the digital economy, Bradford (2019) identified two areas where the Brussels Effect was in full force: data protection and hate speech. She was able to identify both the de facto and de jure aspects of the Brussels Effect in data protection, as the EU had passed binding legislation in 2016, the General Data Protection Regulation (GDPR), that forced a change in tech companies' practices around the collection of user data. The de facto and de jure aspects of the Brussels Effect on hate speech remained unexplored, as the EU had not passed any binding legislation on hate speech regulation in the digital economy and only had the voluntary Code in effect at the time she wrote her book; since then, a binding counterpart to the Code, the Digital Services Act (DSA), has been adopted and is in the process of being fully implemented.
Although the Code has been criticized by several scholars (see e.g. Aswad, 2016; Portaru, 2017; Alkiviadou, 2019), both for privatizing hate speech regulation, making multinational private companies the arbiters of what constitutes legal and illegal hate speech, and for vague outcome reports (Portaru, 2017; for the outcome report itself see Jourova, 2016), it has clearly facilitated change in how social media platforms deal with the datafication of hate (see e.g. Aswad, 2016; Laaksonen et al., 2020). As there is no universally accepted definition of hate speech (Nemes, 2010; Laaksonen, 2020), private actors have little choice but to come up with their own definitions, leading to vastly different policies on different social media platforms (Alkiviadou, 2019). Nonetheless, it is undeniable that the Code has greatly affected the content policies of all major social media platforms.
The overall effectiveness of the Code helps in understanding the EU's regulatory power over online hate speech through the lens of the Brussels Effect. Curiously, however, major American technology companies outside the scope of the Code have displayed a distinct shift in philosophy surrounding their content policies and overall stance on hate speech since the adoption of the Code in 2016 (see e.g. Aswad, 2016; Bradford, 2019). This offers the opportunity to argue that the EU's impact on hate speech practices is a result not only of its regulatory power, but of its normative power as well.
To highlight this phenomenon, one need only look at the Content Policy updates of the American online forum Reddit to see how this shift has occurred. In 2015, Reddit expressed ambivalence towards the topic of hate speech, and instead demonstrated its commitment to libertarianism where violent speech was concerned. This is reflective of the general American political philosophy concerning hate speech, which ultimately enshrines offensive language in the First Amendment right to freedom of speech in the American Constitution. Regarding offensive content, Reddit admins state specifically: "It's ok to say 'I don't like this group of people.' It's not ok to say, 'I'm going to kill this group of people'" (Reddit - [u/spez], 2015a). They also cited their commitment to preventing "the speech police knocking down their door" (Reddit - [u/spez], 2015a). By 2020, Reddit's stance on hate speech would transform completely: "Everyone has a right to use Reddit free of harassment, bullying, and threats of violence. Communities and people that incite violence or that promote hate based on identity or vulnerability will be banned" (Reddit - [u/spez], 2020).
The change in the Reddit Content Policy represents a shift in practices by Reddit as a company, and this change raises the question of whether general Reddit users have shifted their philosophy surrounding hate speech in line with the company. For this reason, this paper applies topic modeling to comments made by general Reddit users on the admin posts informing them of Content Policy updates, in order to determine the general topics brought up in reaction to changing standards of what can and cannot be posted on the website. This is done to determine if and how the general philosophy of Reddit users has shifted on the topics of censorship and freedom of speech. We first explain the context of the five Content Policy updates used for the analysis in this paper before presenting the data and methods used to conduct it. We then explore the results, before commenting on the change in the general lexicon of Reddit users and what it means for the future of social media platforms and further regulation of violent speech online.

II. Background
For a more comprehensive investigation of the Reddit data, we here qualitatively outline the content of each Reddit update utilized in the analysis for this paper. The Content Policy updates used in this paper begin in 2015 and end in 2020. The first update was made by the Reddit admin u/spez 1 in 2015, entitled "Let's Talk Content. AMA 2." and posted in r/announcements 3 (Reddit - [u/spez], 2015a). This is the first time Reddit admins posted about content restrictions on the website. An excerpt from this post reads:

As Reddit has grown, we've seen additional examples of how unfettered free speech can make Reddit a less enjoyable place to visit, and can even cause people harm outside of Reddit. Earlier this year, Reddit took a stand and banned non-consensual pornography. This was largely accepted by the community, and the world is a better place as a result (Google and Twitter have followed suit). Part of the reason this went over so well was because there was a very clear line of what was unacceptable.

Reddit - [u/spez], 2015a
1 When referring to users on Reddit, the username is traditionally preceded by u/.
2 Ask Me Anything.
3 Communities on Reddit are known as subreddits and are denoted with an r/ preceding the community name.

This post displayed ambivalence towards hate speech and focused more on content of a sexual nature, perhaps due to the rise of the #metoo movement, which gained traction in 2013 and influenced several influential social media platforms' content policies (see e.g. Klonick, 2021).
Twitter and Google, along with Facebook, had at the time been under immense pressure from German government officials to improve their standards around the content posted on their sites (see e.g. Alkiviadou, 2019). Notably, these companies would later become voluntary signatories to the EU's Code. This Reddit post was more of an open dialogue between Reddit users and Reddit admins in preparation for the new Content Policy. Several researchers have pointed out the "laxness" of Reddit's content policies (Massanari, 2017; Gaudette et al., 2020). Incidentally, the #metoo movement and the subsequent steps taken by social media platforms have had no impact on the prevalence of sexism in general (Archer et al., 2020), something that is echoed in many of the steps taken to combat hate speech online (Portaru, 2017).
The second post utilized for analysis in this paper is also from 2015, made by the Reddit admin u/spez in the r/announcements subreddit 4, where they introduced a "quarantine" function, which would essentially prevent subreddits that violated the new Content Policy from growing. An excerpt from the update reads:

One new concept is Quarantining a community, which entails applying a set of restrictions to a community so its content will only be viewable to those who explicitly opt in. We will Quarantine communities whose content would be considered extremely offensive to the average redditor… Our most important policy over the last ten years has been to allow just about anything so long as it does not prevent others from enjoying Reddit for what it is: the best place online to have truly authentic conversations.

Reddit - [u/spez], 2015b
Again, it is possible to see the general ambivalence around the concept of hate speech. There is a focus on "offensive content", but at this point in time Reddit admins mostly concern themselves with the general enjoyment of the website for users. The third post extracted for the analysis in this paper is from 2017, made by u/landoflobsters in r/modnews and entitled "Update on site-wide rules regarding violent content". It is important to note that this post was made one year after the introduction of the EU's Code. An excerpt from this post reads:

In particular, we found that the policy regarding "inciting" violence was too vague, and so we have made an effort to adjust it to be more clear and comprehensive. Going forward, we will take action against any content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people; likewise, we will also take action against content that glorifies or encourages the abuse of animals.
This applies to ALL content on Reddit, including memes, CSS/community styling, flair, subreddit names, and usernames.

Reddit - [u/landoflobsters], 2017
This post is the first time Reddit addresses violent speech, or in other words, hate speech. Specifically, they begin to update their restrictions around speech that incites violence against a particular individual or group of people.
The fourth post employed for analysis in this paper comes a year later, in 2018, with the title "Revamping the quarantine function", made by u/landoflobsters in r/announcements. An excerpt from this post reads:

On a platform as open and diverse as Reddit, there will sometimes be communities that, while not prohibited by the Content Policy, average redditors may nevertheless find highly offensive or upsetting. In other cases, communities may be dedicated to promoting hoaxes (yes we used that word) that warrant additional scrutiny, as there are some things that are either verifiable or falsifiable and not seriously up for debate (eg, the Holocaust did happen and the number of people who died is well documented).

Reddit - [u/landoflobsters], 2018
While they do not mention hate speech in this update, the concept of removing hate speech can be identified in the announcement that subreddits dedicated to Holocaust denial or other "hoaxes" are subject to quarantine.
As for the effectiveness of quarantining Reddit communities, Chandrasekharan et al. (2021) examined two quarantined subreddits: The Red Pill 5 (r/theredpill, a misogynist community) and The Donald (r/the_donald, a racist community encouraging anti-Muslim content in particular).

5 Red pills are a reference to the movie "The Matrix" (1999), where the protagonist is given a choice between a blue pill and a red pill. The red pill will make him see the world for what it is, and the blue pill will allow him to keep his delusions. "Incels" believe that they have been given the proverbial red pill and can see women for "what they really are". They are male supremacists with extreme resentment towards women, often advocating sexual violence.
They found that although the influx of new users decreased significantly, offensive content remained at similar levels, despite the condition for exiting quarantine being a significant reduction in sexist/misogynist and racist content. Quarantining therefore worked to keep the hate from spreading, but it did not alter the behavior of those who were already members of these communities, nor did it make them flee to other platforms (Chandrasekharan et al., 2021) or keep them from brigading (Gaudette et al., 2020). Brigading is a practice where members of a specific Reddit community are encouraged to upvote specific, often offensive, content outside of the community itself to make it more visible to other users.
The final post utilized in this analysis announces Reddit's current Content Policy, made by u/spez in r/announcements in 2020. An excerpt from this post states:

From our conversations with mods and outside experts, it's clear that while we've gotten better in some areas-like actioning violations at the community level, scaling enforcement efforts, measurably reducing hateful experiences like harassment year over year-we still have a long way to go to address the gaps in our policies and enforcement to date. These include addressing questions our policies have left unanswered (like whether hate speech is allowed or even protected on Reddit), aspects of our product and mod tools that are still too easy for individual bad actors to abuse (inboxes, chats, modmail), and areas where we can do better to partner with our mods and communities who want to combat the same hateful conduct we do.

Reddit - [u/spez], 2020
Here, Reddit clearly takes a stance on hate speech and declares its commitment to the regulation and removal of hate speech on its platform. Four years after the adoption of the EU's Code, the general sentiment of Reddit admins concerning hate speech has demonstrated a clear shift in philosophy. The next step is to see how Reddit users have responded to these posts in the comments, and what these responses mean in terms of the philosophy of the general Reddit user.

III. Data and Methods
Using the Python Reddit API Wrapper (PRAW 6), all comments on each of the aforementioned five posts were scraped. The information we gained contained the username of the poster of each comment, the body of the comment, and the comment's upvote count.
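This scraping step can be sketched as follows. This is a minimal, illustrative sketch: the credential values and the post URL are placeholders rather than the ones used in the study, and error handling is omitted.

```python
def comment_to_row(comment):
    """Flatten one PRAW Comment into the three fields used in this paper."""
    return {"author": str(comment.author), "body": comment.body, "score": comment.score}

def scrape_post_comments(reddit, url):
    """Return every comment on one announcement post, top-level and nested."""
    submission = reddit.submission(url=url)
    submission.comments.replace_more(limit=None)  # expand all "load more comments" stubs
    return [comment_to_row(c) for c in submission.comments.list()]

# Usage (requires Reddit API credentials):
#   import praw
#   reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="...")
#   rows = scrape_post_comments(reddit, "<announcement post URL>")
```

Calling `replace_more(limit=None)` before `comments.list()` matters here: without it, PRAW returns only the first page of the comment tree, which would bias the corpora toward early, highly upvoted comments.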
Using structural topic modeling (STM) (Roberts et al., 2019), we examined the comments on each of these posts. For STM we determined that the best compromise between semantic coherence, exclusivity, and held-out likelihood was around 50 topics for the 2015 data, 27 for the 2018 data (see Figure 1), and 75 for the 2020 data. We used comment upvote scores as covariates. The STM package for R includes built-in pre-processing tools covering lower-casing, removal of punctuation, stopwords, and numbers, as well as stemming. Rare tokens are also removed: for the 2015 data, for example, 9,364 of the originally 17,988 unique terms appeared only once in the data and were removed, corresponding to 9,364 of 381,422 tokens. The sizes of the three corpora examined using topic modeling after STM's pre-processing are given in Table 1.

Among the topics of the 2018 quarantine update are the logistics of quarantined subreddits (topic 23). The conversation on Reddit as a corporation is also still active (topics 8, 9), as is the issue of brigading (topic 10). There are also many references to politics and political actors (topics 4, 5, 6, 12, 19) and racism (topics 15, 16, 17, 22). A meta-discussion relating to both free speech and censorship as well as propaganda can be inferred from topic 20. The 2020 Content Policy update's most salient topics express the collective desire of Redditors to create safety for marginalized groups (topics 11, 17, 19, 29, 32, 34, 35, 37, 40), taking into account the power of Reddit as a tool for discrimination (topics 2, 16, 17, 22, 39, 63). There is also very specific talk around racism and, to a lesser extent, sexism (topics 46, 48, 68, 74) and rule enforcement (topic 27). Some old topics re-emerge, like the discussion on banning subreddits (topics 12, 15, 52) as well as censorship in general (topics 36, 61). A new but very prevalent topic is that of Chinese propaganda on the platform, often specifically regarding the subreddit r/Sino (topics 5, 6, 7, 50).
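The pre-processing steps applied before topic modeling (lower-casing, removal of punctuation, numbers, and stopwords, stemming, and dropping tokens that appear only once) can be sketched in Python. The tiny stopword list and crude suffix stemmer below are illustrative stand-ins only; the R stm package relies on the tm toolkit's stopword list and stemmer instead.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "it", "to", "of", "and"}  # tiny illustrative list

def stem(token):
    # Crude suffix stripping; a stand-in for a proper Porter/Snowball stemmer.
    for suffix in ("ing", "ment", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(comments, min_count=2):
    """Tokenize comments; drop punctuation, numbers, stopwords, and rare tokens."""
    docs = []
    for text in comments:
        tokens = re.findall(r"[a-z]+", text.lower())  # lower-case; keep letters only
        docs.append([stem(t) for t in tokens if t not in STOPWORDS])
    # min_count=2 mirrors the removal of tokens that appear only once in the data.
    counts = Counter(t for doc in docs for t in doc)
    return [[t for t in doc if counts[t] >= min_count] for doc in docs]
```

The rare-token threshold is applied over the whole corpus, not per document, which is why a word surviving in one comment depends on its occurrence elsewhere.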

V. Analysis
The top salient topics from both Content Policy posts in 2015 reflect an openness of Reddit users to rules around banning certain kinds of topics, or subreddits dedicated to "brigading", a form of harassment on the website where users from one subreddit enter another to flood the comment and post sections with harassment or other irrelevant content. The general openness of users to more rules supports the assertion Anu Bradford (2019) makes about hate speech regulation and the EU in the digital economy: that the EU has the strictest rules to which the platforms respond, unlike other strict regulators such as Russia and China, whose rules seem to repel the most popular platforms. One year after the major signatories adopted the EU's Code, Reddit would also find itself clarifying what violent speech meant in terms of acceptable content on the site.
A look at the update on rules around violent content in 2017 suggests that when the topic of violent speech is introduced by the admins, general users are able to engage with it and provide examples of a real problem within the Reddit community concerning violent speech; in this case, violent speech directed towards women by incels, who blame women and society for their inability to find a sexual partner. Violent speech would remain a problem that necessitated a solution, without much controversy, although the regulation of some other types of hate speech met some resistance. It seems the "incel problem" was less controversial in 2017, as incels were fringe private citizens with generally distasteful views. This would change for general Redditors once politics entered the discussion.
The most notable shift is from the top salient topics in the post regarding quarantines in 2018 to the top salient topics in the post announcing Reddit's current hate speech policy in 2020. 48.93% of Reddit users are American (Clement, 2021), and 2018 marked the second year of the 45th President Donald Trump's time in office. The post from 2018 remarked upon "hoax" subreddits and directly mentioned subreddits dedicated to Holocaust denial (topic 7). r/The_Donald was at that time a controversial subreddit dedicated to Donald Trump that fostered far-right discourse and conspiracy theories (topics 5, 17). Holocaust denial is only one of many conspiracies parroted by the far right in America (Southern Poverty Law Center, 2021). The subreddit would later be quarantined in 2019 (Haskins, 2019), then fully banned in 2020. A Reddit spokesperson told Vice News that "we are sensitive to what could be considered political speech, however, recent behaviors including threats against the police and public figures is content that is prohibited by our violence policy. As a result, we have… quarantined the subreddit" (Haskins, 2019). The backlash against the initial post about revamping the quarantine system and applying it more liberally can be understood as an initial reaction against the perceived suppression of political views. With the largest share of Reddit users being American, this would reflect a distaste for anything that could potentially violate the freedom of speech enshrined in the First Amendment of the American Constitution. Ultimately, the subreddit dedicated to Donald Trump would in fact stay unquarantined for a full year before finally receiving consequences.
In the 2018 update there is an ongoing discussion in which a minority, but a significant minority, of Reddit users seem willing to ban violent speech even if it means censorship. This minority would become the majority by 2020. It was not until 2020, when the US experienced the largest civil rights movement in its history, Black Lives Matter, a movement against the asymmetrical treatment and killings of Black Americans by the police, that Reddit moved to protect groups of people in its formal content policy (visible in topics 43 and 46 in particular). At this point in time, Twitter had begun to flag then-President Donald Trump's tweets for incitement of violence, as he responded to the civil rights movement with a call for retaliatory violence against protestors who had looted and rioted in certain areas (Twitter, 2021). It is arguable that at this point social media platforms themselves began to label the philosophy of Donald Trump as intrinsically violent. In 2018, users may not have agreed, but by 2020, as shown by the topics brought up by Reddit users, they too had come to the same conclusion.

VI. Discussion
How the EU has been able to impact normative behavior surrounding hate speech online is insufficiently explained through the Brussels Effect in the digital economy. While Bradford (2019) makes a compelling argument for the regulatory power of the EU in the online realm, the changes observed in the attitudes of Reddit's moderation and admin teams toward the regulation of offensive content suggest a deeper, perhaps unintentional consequence of the interaction of differing political actors in the internet realm. The US Supreme Court views hate speech as a fundamental aspect of the democratic right to freedom of speech. In 2017, the US Supreme Court ruled on Matal v. Tam, a case concerning the trademark registration of the band name "The Slants", stating that the "reclaiming" of the derogatory term for Asians was protected under free speech:

The Government has an interest in preventing speech expressing ideas that offend. And, as we have explained, that idea strikes at the heart of the First Amendment. Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express "the thought that we hate."

Matal v. Tam, 2017

The freedom to express hateful opinions is part of a larger American philosophy that the US Supreme Court previously discussed in the 1974 case of Miami Herald Publishing Co. v. Tornillo, with the ruling that "the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public".
The philosophy of Reddit's aforementioned content policy update in 2015 mirrors the general sentiments of the Court on the topic of hate speech. The Court refers to this philosophy as the "freedom to express 'the thought that we hate'", while Reddit states that "people have more open and authentic discussions when they aren't worried about the speech police knocking down their door" (Reddit, 2015a). This philosophy is also mirrored in the discourse of general Reddit users in the replies to this update; topic 3 highlights this concern in its highest-probability and FREX terms: "discuss, place, open, reddit, plan, polic(y/ies), restrict" / "authent, conflict, worri, plan, discuss, door, restrict". The EU takes an opposite approach to hate speech, viewing violent language as a threat to democratic freedom as a whole, as evidenced by the ECHR's ruling in Refah Partisi (The Welfare Party) and Others v. Turkey, where it was asserted that it was allowable for there to exist "some compromise between the requirements of defending democratic society and individual rights" (2003). Reddit's most recent content policy update, in 2020, reflects the European philosophy of compromise as opposed to the American philosophy of unregulated discussion: "it's our responsibility to support our communities by taking stronger action against those who try to weaponize parts of Reddit against other people" (Reddit, 2020).
Reddit is notable in the discussion of the EU's ability to impact norms in the online sphere because Reddit is currently outside the scope of EU legislation, binding and non-binding. The differing philosophies of the US and the EU on hate speech center on the meaning of individual rights in the context of a free and democratic society. Thus, this conflict can be understood as a tension between the EU and the US around the meaning of human rights, with the EU's introduction of hate speech regulation online a distinct effort to influence human rights norms in American tech companies that operate in an environment holding contrary opinions.
Risse and Sikkink (1999) identify the processes through which human rights norms ultimately progress from "commitment to compliance". Because political actors ultimately rely upon recognition from, and interaction with, other political actors, it becomes possible for different norms of human rights to be socialized into other regions. This socialization embodies three separate processes. The first is "instrumental adaptation", where actors, under mounting pressure from other actors, make "tactical concessions" in the human rights realm, whether by making formal commitments or signing binding treaties. The second involves "argumentative discourses" that include "shaming and denunciations, not aimed at producing changing minds with logic, but on changing minds by isolating or embarrassing the target" (Risse and Sikkink, 1999). The third and final process concerns "institutionalization and habitualization", in which the norms being diffused come to be viewed by the actor as the "normal thing to do".
The evolution of Reddit's content policy is evidence of the third process in the EU's concentrated, and remarkably successful, effort to socialize American tech platforms to its own conception of human rights and hate speech. While Reddit and other tech platforms are not distinct political actors as originally envisioned by Risse and Sikkink (1999), the two could hardly have anticipated the prevalence of social media in daily life in 1999, and today several arguments have been made for tech platforms to be considered distinct political entities (Helberger, 2020; Gilardi, 2021).
Responding to the banning of Donald Trump's accounts by Reddit and several other platforms, a spokesperson for Angela Merkel, chancellor of Germany at the time, stated that the fundamental right of freedom of expression "can be interfered with, but along the lines of the law and within the framework defined by the lawmakers. Not according to the decision of the management of social media platforms" (Bermingham, 2021).
While the debate continues surrounding which actor has the right to remove content from platforms, Merkel's comment and the results of this analysis make one thing clear: hate speech removal is now standard practice, in the minds of moderators and policymakers and in the minds of users as well.

VII. Future Work
Scraping general user data and topic modeling what users say about content policy updates, freedom of speech, and censorship can be extended to other social media platforms, both within and outside the scope of the EU, as has been done here. Furthermore, sentiment analysis could be utilized not only to discover what topics are brought up by users in response to hate speech regulation, but also to analyze the sentiments attached to each topic.
Future work would include sentiment analysis of the body of the comments on each Reddit post and using these sentiments over time as covariates in the STM.We would also like to explore statements, their topics and sentiments, at different points in time on other social media platforms.

Figure 1 - Coherence measures for the Content Policy 2018 STM.

Figure 3. Content Policy 2018 post topic proportions.

Figure 4. Expected topic proportions of the 2020 Content Policy update.

Table 1. Size of final corpora

IV. Results
The most salient topics for each of the three STMs are examined in this section. All topics and their FREX-weighted 8 words can be viewed in the appendix. Here we only list the most representative topics and group them by common themes. We present these in chronological order, starting with the initial AMA on Content Policy in 2015, continuing to the Content Policy update in 2018, and finishing with the Content Policy update in 2020.

The topics that emerge from the 2015 AMA (Reddit - [u/spez], 2015a) revolve around the need for more transparency around bans and rules (see topics 5, 18, 30, 31, 34, 40, 48), the need for rules around brigading subreddits (see topics 1, 13, 14, 24, 25, 40), and separating valid criticism from real threats (see topics 4, 16, 21, 47, plus doxxing: topic 9). There is also a secondary discussion that deals with censorship in general (topics 4, 8, 10, 11, 27) and with Reddit as a corporation and how potential revenue influences decisions on content policy (topics 36, 39, 45).

Topic 34 Top Words:
Highest Prob: will, content, offens, sens, violat, see, defin
FREX: violat, decenc, content, sens, will, list, common
Lift: silo, decenc, login, usefulbot, checkbox, futhermor, advisori
Score: content, will, decenc, violat, sens, nsfw, common

Topic 24 Top Words:
Highest Prob: harass, group, anyth, peopl, bulli, individu, other
FREX: intimid, bulli, silenc, behavior, individu, group, harass
Lift: <URL>, <URL>, <URL>, oklet, intimid, hive-mind
Score: harass, bulli, group, intimid, individu, silenc, abus

Topic 27 Top Words:
Highest Prob: speech, free, freedom, express, protect, allow, say
FREX: speech, freedom, free, express, consequ, unfett, principl
Lift: farewel, fetter, <URL>, <URL>, <URL>, <URL>, triggeredteehe
Score: speech, free, freedom, express, protect, unfett, consequ

Topic 30 Top Words:
Highest Prob: subreddit, rule, allow, encourag, enforc, mani, break
FREX: rule, subreddit, encourag, break, enforc, warn, guidelin
Lift: banana, haikus, hiaku, ponzi, removedban, throwingpotatosatclown, doot
Score: subreddit, rule, encourag, enforc, break, allow, guidelin

Highest Prob: sub, quarantin, can, communiti, user, content, list
FREX: quarantin, nsfw, accident, revenu, sub, heads-, search
Lift: "ban", aboutjson, accident, accross, blank, canari, commentspost
Score: quarantin, sub, content, user, communiti, offens, list

Topic 3 Top Words:
Highest Prob: ban, speech, hate, free, read, first, exact
FREX: voat, hate, echochamb, ban, hill, speech, infest
Lift: brightest, durr, lib, alan, articul, banning", blackpeopletwitt
Score: ban, speech, hate, free, read, voat, first

Topic 23 Top Words:

One could see the first processes of argumentation, specifically naming and shaming, when German Justice Minister Heiko Maas threatened Facebook with regulation on a parliamentary level if the company neglected to address the growing prevalence of hate speech on the platform (DW, 2015; Oltermann, 2016). Instrumental adaptation would begin for American tech platforms in the dialogue between the EU and US social media sites that followed Maas' public callout and eventually birthed the 2016 Code on Countering Illegal Hate Speech Online (Oltermann, 2016), which made the removal of hate speech standard practice. It can easily be argued that the EU was effective in socializing platforms to hate speech norms, as nearly every major American tech platform now holds some level of restriction on offensive language. However, scrutinizing the effectiveness of the Code in socializing the users of these platforms to European norms around hate speech is difficult, as platforms like Facebook, Twitter, and Google do not offer users the opportunity to directly interact with changing content policy. By allowing users to directly interact with changing policy, Reddit creates a unique opportunity to examine the shifting dialogue within and between users as hate speech removal becomes standard in American tech companies.
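For reference, the FREX weighting used to rank topic words balances a word's frequency within a topic against its exclusivity to that topic; Roberts et al. (2019) define it as a weighted harmonic mean of the two quantities' ECDF ranks. The sketch below is a simplified Python rendering of that idea, not the stm package's exact implementation; the weight w = 0.5 and the within-topic ECDF are assumptions made for illustration.

```python
def ecdf_rank(values):
    # Empirical CDF value of each entry within its own list (ties share ranks).
    n = len(values)
    return [sum(1 for u in values if u <= v) / n for v in values]

def frex(beta, w=0.5):
    """FREX scores per topic: weighted harmonic mean of the ECDF of a word's
    within-topic frequency and of its exclusivity (the share of the word's
    total probability mass held by this topic).

    beta: list of topics, each a list of P(word | topic) over the same vocabulary.
    """
    n_words = len(beta[0])
    word_mass = [sum(topic[v] for topic in beta) for v in range(n_words)]
    scores = []
    for topic in beta:
        exclusivity = [topic[v] / word_mass[v] for v in range(n_words)]
        freq_cdf = ecdf_rank(topic)
        excl_cdf = ecdf_rank(exclusivity)
        scores.append([1.0 / (w / e + (1 - w) / f)
                       for e, f in zip(excl_cdf, freq_cdf)])
    return scores
```

A word that is both frequent within a topic and rarely used by the other topics receives a FREX score near 1, which is why FREX lists read as more topic-specific than the raw highest-probability lists above.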
When considering the prevalence of an expressed need to protect marginalized groups in the top salient topics from 2020, while the debate in general may not fully reflect a complete commitment to European standards over American ones, it is clear that the ideas that motivated the EU's Code in the first place have entered the consciousness of internet users. In other words, the standard of hate speech removal on platforms, first enshrined in the Code, has been accepted by a good portion of users as normal, offering evidence not only of the EU's regulatory power in this realm, but of its power to shape the norms of the internet to its conception of individual rights.