Twitter has defended its handling of misinformation and abuse in the wake of this week's shooting at YouTube headquarters. In a blog post titled "Serving public conversation during breaking events," vice president of trust and safety Del Harvey explained how Twitter tried to provide "reliable and authentic" information about the attack, even as some users spread false accusations about the identity of the shooter.
Harvey writes that Twitter does not have a system to verify the accuracy of information, reiterating Twitter's position that it is not an "arbiter of truth." It does, however, monitor deliberate misinformation that violates its rules against harassment, hate speech, spam, or threats of violence. Harvey says that after the shooting, Twitter "suspended hundreds of accounts for harassing other people or deliberately manipulating conversations about the event" and deployed automated systems to prevent suspended users from creating new accounts.
Harvey also says that Twitter tried to promote reliable information by posting Twitter Moments about the shooting as soon as 10 minutes after tweets about it began to appear.
BuzzFeed complained after the shooting that Twitter was losing its usefulness as a source of credible news, counting 25 different people whom hoaxers falsely claimed were the shooter, including a BuzzFeed journalist who was debunking the hoaxes. A hacker also briefly took over the account of a YouTube employee who had tweeted about the shooting, spreading false information through the account. CEO Jack Dorsey said after the shooting that Twitter was "tracking, learning, and taking action" against misinformation and "working diligently on product solutions to help."
The post does not describe specific changes, but Harvey says that Twitter "continues to explore and invest in" possible solutions. These include making it harder for people to evade suspensions, improving Twitter's ability to identify automated accounts, and having team members respond more quickly to "ensure that a human review element remains present" in assessments.
Twitter is right that some of the problems we saw this week, such as users hijacking accounts or posting images of specific people to encourage harassment, can be handled by enforcing existing rules. And the improvements Harvey describes would help the platform generally, not just improve its usefulness during tragedies. At the same time, it is difficult to suppress a behavior while explicitly declining to ban it, and that is the balance Twitter is trying to strike here.