Yesterday, Twitter issued a call for proposals to measure its "contribution to the general health of public conversation" so that it can help "foster healthier debate, conversations, and critical thinking" and discourage "abuse, spam, and manipulation." It is the kind of high-minded request you would expect from a platform that presents itself as a powerful civic forum for the digital age. But measuring "public conversation" will not solve Twitter's central problem: there is a big difference between promoting a public conversation and creating a livable space, and Twitter may not be able to have both.
Twitter's interaction model is almost completely binary: you can take a vow of silence with protected tweets, or you can speak and be totally accessible. Beyond blocking individual users, Twitter's protections mainly involve muting other people's tweets, not controlling how people see and interact with your own.
Most people on Twitter are not public figures who need criticism
For Twitter the digital agora, this is invaluable. Politicians and companies cannot dodge comments from ordinary people, and strangers can trade jokes or discuss ideas without being invited to a private party. But most Twitter users are not agents of power who need to face criticism, and living in public has its costs. Maybe you're fine with someone retweeting an anecdote, but you don't want to see it embedded in a news story. Or you make a perfectly good joke that goes viral and burns you out as people pick apart your entire timeline. Or you tweet a question for people in your industry, but someone retweets it to a large general audience and you get a flood of useless answers. Or you just want to vent without receiving well-intentioned advice!
These tradeoffs are not fodder for the debate over digital privacy or the abstract ethics of reproducing someone's words. They are simply things that make Twitter a less pleasant place. They do not arise because Twitter failed to ban the right Nazis or purge the right bots, but because its interface gives people no control over their own words. Twitter could change this. For example, it could let users:
Disable tweet embedding outside the platform, making it harder for tweets to spread through news articles or blog posts
Block retweets or quote tweets, either for specific tweets or for an entire timeline, slowing the indignant quotes that fuel Twitter dogpiles
Disable replies to a specific tweet entirely, for when you literally mean "do not @ me"
Accept retweets and replies only from people you follow, instead of having to mute or filter notifications from random users
Selectively protect individual tweets
Protect all their tweets without losing their verification badge, if they have one
These changes would not stop trolls from making death threats or prevent a fake news account from tweeting propaganda. But a lot of Twitter's toxicity is not clear-cut, bannable awfulness. It is death by a thousand cuts: getting silly replies to a six-month-old tweet, finding anonymous insults under every new post, or having a fragment of a Twitter thread taken out of context and circulated far beyond your control. It is thoughtless behavior that would be possible without Twitter's nearly frictionless information-sharing, but is heavily encouraged by it.
And even if they did not end harassment, privacy tools could make it easier to maintain a professional Twitter profile with a few informative tweets, without prospective employers clicking through to find an avalanche of abusive replies. They would make it easier to carve out your own space on Twitter without having to jump into the larger fray.
The Twitter interface does not cause toxic behavior, but it could help mitigate it
The Twitter community has developed an arcane honor code to compensate for the lack of privacy controls. There are formulas for calculating when a larger account can put a smaller one on blast with a quote tweet, heated debates about when journalists should embed tweets in articles, and careful etiquette around when to refrain from @-mentioning someone. But why not let people set their own limits if they want to?
Well, to begin with, powerful users could abuse all these options to avoid hearing criticism. (Imagine if no one could quote Donald Trump's tweets except his biggest fans, or correct a news outlet when it got something wrong.) Most anti-harassment tools have rolled out to verified users first, but these controls would work better in the opposite direction: the more famous you are, the less control you should get. That is not exactly a winning business model for Twitter.
These tools would also break the flow of Twitter conversation. You could no longer count on being able to engage with any idea or join any exchange you see. It would be frustrating to click on a good tweet and realize you cannot retweet it or reply. Twitter's boundaries would become less porous and the service less predictable, especially if people could change a tweet's settings at any time, which would be by far the most versatile and useful approach.
Obviously, Twitter is not the only social network. If you want a smaller megaphone and more privacy, you can go to Facebook, or Mastodon, or a web forum of your choice.
But I like Twitter's format, and Nazis and bots aside, I like many of its users. It does not make me designate casual acquaintances as "friends" or pick a specific community of like-minded people to hang out with. I can get to know a colleague through their feed, or laugh at a retweeted joke from some stranger whose name I will never see again. I would rather have a better version of that flawed platform than a fresh start somewhere else.
People could still treat Twitter as all-or-nothing and ignore these nuanced privacy settings. But they could also choose to engage on their own terms, in a less exhausting way. Healthy debate is not the same as constant, no-holds-barred verbal combat with strangers. Critical thinking flourishes when people can take time to consider and change their opinions, rather than simply defending them to the death. And while I wait for Twitter to build a philosophically sound and practically workable model of collective social good, I am willing to settle for some experiments in user experience.
I would say "don't @ me," but honestly, I can't stop you.