Given the struggles of Facebook, Twitter, Discord, and other social media platforms to combat online harassment, MIT's Computer Science and Artificial Intelligence Laboratory has developed a new tool that could help. It proposes that, instead of depending on platform moderators, people turn to their friends.
The tool is called Squadbox, and it enlists "squads" of friends as moderators to filter messages and support people who are being harassed online. For example, a blogger who wants a public email address to receive tips, but also wants to avoid hate mail from strangers, could set up a Squadbox account and enlist two co-workers as moderators, as the MIT researchers propose.
Your "squad" of two will split the work between them and make sure your inbox stays clean. They also have tools at their disposal: the ability to create whitelists of pre-approved email senders, and blacklists of senders whose messages should be rejected automatically. Squadbox also scores the toxicity level of each message to help moderators review emails. "This line of work helps provide a roadmap to a hybrid solution to bullying that augments human support with tools in a meaningful way," said University of Michigan information professor Clifford Lampe in a press release. Squadbox currently only works with email, but the team behind it hopes to eventually expand to other social media platforms.
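The filtering pipeline described above can be sketched roughly as follows. This is a minimal illustration, not Squadbox's actual code: the sender lists, the keyword-based scorer, and all function names are hypothetical stand-ins (a real system would use a trained toxicity classifier).

```python
# Hypothetical sketch of the routing logic described above: whitelisted
# senders are delivered directly, blacklisted senders are rejected
# automatically, and everything else goes to a moderator's queue along
# with a toxicity score. All names here are illustrative.

WHITELIST = {"editor@example.com"}   # pre-approved senders
BLACKLIST = {"troll@example.com"}    # senders rejected automatically

# Toy stand-in for a real toxicity classifier.
TOXIC_WORDS = {"hate", "stupid", "ugly"}

def toxicity_score(body: str) -> float:
    """Fraction of words flagged as toxic (a crude placeholder metric)."""
    words = body.lower().split()
    if not words:
        return 0.0
    flagged = sum(w.strip(".,!?") in TOXIC_WORDS for w in words)
    return flagged / len(words)

def route_message(sender: str, body: str) -> tuple[str, float]:
    """Return (decision, score): 'deliver', 'reject', or 'moderate'."""
    if sender in WHITELIST:
        return "deliver", 0.0
    if sender in BLACKLIST:
        return "reject", 0.0
    # Unknown sender: queue for a squad member, with a score to
    # help them triage the message.
    return "moderate", toxicity_score(body)

print(route_message("editor@example.com", "Great post!"))        # delivered
print(route_message("stranger@example.com", "you are stupid"))   # moderated
```

The key design point is that the blacklist and whitelist remove work from the moderators entirely, while the score only prioritizes what remains in their queue.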
Moderators who read harassment could potentially face "psychological risks"
While the service could help Squadbox account holders, it is far from a perfect solution. MIT noted that "the use of friends as moderators simplifies issues related to privacy and personalization, but also presents challenges for the maintenance of relationships." Email account owners also felt guilty about burdening their friends and became reluctant to ask for further favors.
More importantly, moderation puts stress on volunteers. "Moderating is a lot of work," said an anonymous moderator in the study. The study also mentioned that moderators who read harassment could potentially face "psychological risks," raising questions about how feasible the solution is.
The MIT study also found that moderators eventually tired of checking other people's email, and their response times slowed. And while most emails ended up discarded as junk, few email addresses were blacklisted, probably because fully blacklisting a contact felt like too extreme a measure. The study concluded that far more precise tools are needed to fight online harassment.