Reddit CEO Steve Huffman has once again found himself embroiled in a controversy over his site's moderation policy. In a Reddit thread announcing the findings of the platform's 2017 transparency report, in which Reddit identified and listed roughly 1,000 suspected Russia-linked propaganda accounts that have since been banned, Huffman responded to a direct question about the company's hate speech rules.
"I need clarification on something: is obvious open racism, including slurs, against Reddit's rules or not?" asked Reddit user chlomyster. "It is not," replied Huffman, who posts on Reddit under his longtime handle "spez."
Huffman elaborated on his point, adding:

"On Reddit, the way we think about speech is to separate behavior from beliefs. This means Reddit will have people with beliefs different from your own, sometimes extremely so. When users' actions conflict with our content policies, we take action.

"Our approach to governance is that communities can set appropriate standards around speech for themselves. Many communities have rules around speech that are more restrictive than ours, and we fully support those rules."
It's a controversial approach, to say the least, and it has left many Reddit users outraged that communities like the pro-Trump subreddit r/The_Donald can repeatedly dance along the line of open racism without any action from the site. Many Reddit users responded to Huffman by noting that hate speech is itself a kind of behavior, and that communities like r/The_Donald participated directly in the discussion and organization of events like the white supremacist rally in Charlottesville, Virginia, which resulted in the death of Heather Heyer. This conversation has been simmering for quite some time, boiling over most recently last month when the company discussed its approach to Russian propaganda.
Huffman's position here has evolved. Almost a decade ago, Huffman's approach to hate speech mirrored that of other major social media platforms today, which is to ban it, except in extremely narrow or circumstantially unique situations. For example, Facebook's policies on hate speech are well documented, and saying something racist will typically lead to some kind of disciplinary action. Other platforms such as Twitter, YouTube, and Instagram have hate speech policies that can likewise lead to suspensions or bans.
Huffman's approach to hate speech has evolved over the past 10 years
"I'm a little late to the party, but banned. We rarely ban non-spam accounts, but this kind of hate speech is not something we tolerate," Huffman wrote in a thread nine years ago about banning a user for hate speech. "This isn't a policy change: we have always banned hate speech, and we always will. It's not up for debate. You can whine and moan all you want, but my team and I are not going to be responsible for encouraging behaviors that lead to hate," Huffman wrote in response to another user in the same thread.
However, when Huffman took over from interim CEO Ellen Pao, who was pushed out of her position in part because of the platform's vehement and toxic opposition to her leadership, his approach to hate speech changed. "While my personal views of bigotry haven't changed, my opinion of what Reddit should do about it has," Huffman wrote on the subject almost three years ago, a few weeks after returning to lead the company. "I don't think we should silence people just because their viewpoints are something we disagree with. There is value in the conversation, and we as a society have to confront these issues. This is an incredibly complex topic, and I'm sure our thinking will continue to evolve."
Reddit still takes hard-line positions on calls to violence, threats, doxxing, and other activities that can lead to real-world harm. But Huffman has often been wishy-washy in moderating the murkier gray areas between innocuous content and those extreme examples. That is where hate speech, which is not illegal in the US thanks to First Amendment protections, generally falls. For example, in 2015 Reddit banned the fat-shaming community r/fatpeoplehate and the openly racist community r/CoonTown. Infamous situations before that included bans on a community sharing stolen nude celebrity photos and a community dedicated to sharing so-called "creepshots" of underage girls.
Reddit takes action against some communities only when it seems it has to
More recently, Reddit took action against r/deepfakes, a community devoted to AI-generated fake pornography, as well as a handful of alt-right subreddits and Nazi boards. But each time it does this, Reddit cites a specific rule, such as the use of violent language, doxxing, or the sharing of nonconsensual pornography.
In terms of pure speech, however, Huffman appears to be more permissive, which contrasts sharply with other platforms in the technology industry, almost all of which are grappling with difficult questions about moderation these days. This week, Facebook CEO Mark Zuckerberg was questioned by Congress about the ongoing Cambridge Analytica data privacy scandal, and he fielded numerous questions about how the company plans to handle hate speech on its platform. The issue is especially pressing for Facebook, as ethnic violence in Myanmar has erupted thanks in part to organizing and propaganda spread on the social network.
Facebook's approach appears to center mostly on artificial intelligence. Zuckerberg says his company is increasingly looking to automated algorithms that analyze text, photos, and video to do the job that even tens of thousands of human moderators cannot: deciding whether a given piece of content breaks the company's policies around fake news, hate speech, obscenity, and other inadmissible forms of content.
Huffman believes that banning speech will not make it disappear
Reddit's approach, on the other hand, seems to focus less on blanket rules and more on case-by-case evaluations. That won't do much to calm critics who want communities like r/The_Donald banned, or who want the use of racial slurs to be a punishable offense. Huffman seems content either to take the free speech absolutist approach and let sunlight disinfect the world of extremist views and bigotry, or to offload the work to the site's subreddit administrators and moderators when appropriate.
However, that approach falls apart when it becomes inconvenient for Reddit as a company, as when a legal and public relations nightmare results from letting neo-Nazis or illegal pornography run rampant on the site. As many Reddit users pointed out to Huffman in replies to the thread, a study published last year on the bans of r/fatpeoplehate and r/CoonTown, titled "You Can't Stay Here: The Efficacy of Reddit's 2015 Ban," showed clearly positive effects of banning hate communities.
"Many more accounts than expected discontinued their use of the site, and, among those that remained active, there was a drastic decrease (of at least 80 percent) in their use of hate speech," the study's authors concluded. "Though many subreddits saw an influx of r/fatpeoplehate and r/CoonTown 'migrants,' those subreddits saw no significant changes in hate speech usage. In other words, other subreddits did not inherit the problem."
Banning noxious groups helps clean up communication platforms
Whatever Huffman's evolving approach to the issue, Reddit users seem to be the ones most directly affected by the proliferation of hate speech on the platform.
"Spez, what qualifies as bannable hate speech to you? Because I genuinely wonder if you could justify in front of Congress allowing some of the things on your platform that you allow," wrote user PostimusMaximus. "Zuckerberg is sitting here being questioned for not removing hate speech fast enough due to the limitations of AI, and yet here you are dismissing hate speech as fine because you either don't think it's dangerous to allow on your platform, or you expect t_d [r/The_Donald] to self-moderate and, hopefully, if they troll enough, they'll die off on their own."
"I think, Russian interference aside, you need to give an actual answer explaining the logic here," the user added, linking to specific Reddit threads full of anti-Muslim hate speech on the Trump-focused subreddit. "Users are literally allowed to spread hate speech and pretend it's political under some weird guise of free speech, as if that's okay and nothing bad comes of it."