Donald Trump’s return to Twitter couldn’t have come at a worse time.

Apparently “Vox Populi” has spoken, and they want former President Trump back on Twitter. A narrow majority of 51.8 percent of the 15 million respondents (which most likely included bots) voted for Trump’s return.

Since purchasing Twitter, Musk has moved at light speed without worrying about the consequences. Immediately after naming himself "Chief Twit," Musk pushed out chief legal officer Vijaya Gadde, the leader of all things trust and safety. Musk then fired 3,000 people who put Gadde's policies into practice: the contractors "behind the screen" who dealt with reports of hate speech, harassment, stalking, threats, non-consensual intimate images, spam, and other rule violations on Twitter. In one fell swoop, content moderation on Twitter was wiped off the map.

Before we dive into what it could mean if Trump starts tweeting again, it’s helpful to understand what Musk has taken apart and what he’ll likely try to put back together once the dangers become apparent and advertisers flee.

Over the years, Twitter invested significant resources in addressing online harm. That effort, however, started frustratingly slowly. In 2009, Twitter banned only spam, impersonation, and copyright violations. Then the lone safety officer, Del Harvey, recruited one of us (Citron) to write a memo about threats, cyberstalking, and the damage suffered by people under attack. Harvey wanted to address that damage, but the C-suite resisted in the name of being "the free speech wing of the free speech party."

Twitter largely stuck to this script until 2014, when cyber mobs began pushing women off the platform in a campaign known as Gamergate. At that point, advertisers decided they didn't want their products to appear alongside rape threats, death threats, and nonconsensual pornography. Gadde built an impressive trust and safety team, tripling the number of people on it. Harvey, Sarah Hoyle, and John Starr designed policies prohibiting cyberstalking, threats, hate speech, and nonconsensual pornography. On the product side, Michelle Haq put that policy into practice. Thousands of moderators were hired; product managers worked to make reporting processes more efficient and responsive.

That was just the beginning. In 2015, Gadde, Harvey, and Hoyle established a Trust and Safety Council, made up of global civil rights and civil liberties groups. (We have since served on that council on behalf of the Cyber Civil Rights Initiative, where we serve on the board of directors and in leadership positions.) That same year, Jack Dorsey returned as CEO, making trust and safety a priority. This was especially evident after the 2016 election. In response to the disinformation and hate speech that plagued the platform during election season, Dorsey and Gadde assembled a small kitchen cabinet (Citron, former New York Times editor Bill Keller, and the dean of the Berkeley Journalism School, Tom Goldstein) to chart a path forward, ensuring that the platform would amplify rather than destroy public discourse.

On December 2, 2016, Dorsey – along with Gadde and Harvey – sat down with this group to talk about how Twitter could best address disinformation that was leading to the crumbling of trust in democracies around the world. Those in attendance didn’t have all the answers, but it was clear that the company was on high alert and would commit resources to addressing destructive online behavior.

For the next two years, the council met to advise on new products and services. After Harvey and Hoyle left in 2018, Gadde brought Nick Pickles on board. That team tackled new problems, including deepfakes and other digitally manipulated images. It worked on the "Healthy Conversations" initiative, which sought feedback on promoting civil dialogue. Gadde's team updated the hate speech policy to ban "dehumanizing language." (This, of course, is a condensed history of Twitter's work on content moderation.)

Crucial to note is Twitter's blind spot when it came to one set of rule-breakers: public officials. Trump (and others) were given free rein to spew hate speech, harassment, election lies, and health disinformation in violation of company rules. Twitter and others held to the position that officials were "different," as opposed to our mantra that "with great power comes more — not less — responsibility."

On January 6, 2021, as a mob descended on the U.S. Capitol, many called for Trump's long-overdue removal, after which Gadde persuaded Dorsey, and Trump's account was temporarily suspended.

On February 6, 2021, we wrote together to advocate for Trump's permanent social media ban. In our view, enough was enough: Trump had used his social media presence to downplay and politicize the deadly pandemic, and he had used it to incite a mob that left five people dead and countless others seriously injured, shocking the nation and the world. Better late than never, Twitter and other social media companies took Trump's online megaphone away.

Now Musk has invited him back, but the former president has suggested he's not interested. He has a new place to post where the rules are literally made for him: in February 2022, Trump founded Truth Social. With fewer than 2 million users, his reach there is anemic, but that hasn't stopped him from using it to spread conspiracy theories, election lies, hate, and anti-Semitic tropes.

Despite his protests, Trump will surely be tempted to return to Twitter to reconnect with his 86 million followers. But the platform he may return to is very different from the one he left in January 2021, when we opposed his return. That would have been bad enough. Now Trump would be returning to a platform with a decimated trust and safety team. What could go wrong?

Since taking over, Musk has been bulldozing and backtracking on content moderation in real time. He adopted a verification scheme without committing to its original goal: protecting from impersonation those most likely to face it. Now he's trying to fix that blunder. He will soon discover that tearing down an entire edifice of trust and safety causes real harm to users and scares off (more) advertisers. Unlike some of his previous mistakes, however, he will also learn that it won't be easy to recreate a team that took more than a decade to build.

Musk apparently interprets "Vox populi, vox Dei" to mean that the people are always right, but one of the earliest references to the phrase comes in a letter from Alcuin to Charlemagne in 798: "And those people should not be listened to who keep saying that the voice of the people is the voice of God, since the riotousness of the crowd is always very close to madness."

Now that the guardrails have been removed, those words resound loudly.

Future Tense is a partnership of Slate, New America and Arizona State University that examines emerging technologies, public policy and society.
