
Twitter users -- particularly women and minorities -- have long complained about the platform's lax rules against open hate speech and harassment, but the massively popular social media site has been slow to change its ways.

It is now trying to make up for its past mistakes by beefing up its rules and expanding its hateful conduct policy, announcing on Tuesday that it would create new regulations to stop what it called "dehumanizing speech."

Critics have harshly assailed Twitter for years over its rule that a comment must be directed at a specific person to fall under the hateful conduct policy. Wired put it succinctly, writing that a Twitter user could previously post something like "all women are scum and should die" because it was not directed at a specific woman.

Twitter's Vice President of Trust and Safety Del Harvey and Vijaya Gadde, a member of the company's legal team, wrote in a blog post that the new dehumanizing speech policy is an effort to address comments just like this and make people feel safer using the platform.


"Language that makes someone less than human can have repercussions off the service, including normalizing serious violence," the company wrote. "Some of this content falls within our hateful conduct policy...but there are still Tweets many people consider to be abusive, even when they do not break our rules."

Harvey spoke candidly with Wired about the social media site's failure to be more open about its more controversial policies, and said she hoped this new effort would be a step in the right direction. Twitter worked with its Trust and Safety Council -- a group of NGOs and organizations that provide input on Twitter's policies -- as well as its development teams to come up with the new rules. It is now asking the public to comment on them and share suggestions on how Twitter can better curtail hate speech.

"For the last three months, we have been developing a new policy to address dehumanizing language on Twitter. Better addressing this gap is part of our work to serve a healthy public conversation," Harvey and Gadde wrote.

"With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target."

The management of hate speech has been a particularly thorny issue for Twitter, whose CEO, Jack Dorsey, has repeatedly decided against more stringent rules, wary of doing anything that would even hint at limiting free speech. Just last month, he was forced to permanently ban conspiracy theorist Alex Jones after YouTube and Facebook had banned him for his targeted harassment of parents who lost children in school shootings.

At first, Twitter spent weeks refusing to ban Jones from its site, claiming he had not broken any rules. Dorsey openly defended the move, writing on Twitter that the company would not be cowed by public pressure and wouldn't take "one-off actions to make us feel good in the short term." He caused further uproar by suggesting it was the responsibility of journalists -- and not the websites giving Jones a platform -- to combat the hate speech Jones openly trafficked in.

But after multiple media outlets pointed out dozens of Jones' Tweets that directly contradicted Twitter's rules, the social media site was forced to relent and remove his account. The debacle prompted Harvey to send a confusing email to Twitter's staff claiming the company had simply followed its own rule book, first by keeping Jones and then by suspending him. After defending those moves, she said the company planned to move quicker than expected on the dehumanizing speech effort and other initiatives addressing similar issues.

Twitter is giving users until October 9 to respond to a survey about the new rules on dehumanization, and will review the comments as it tries to implement the new policies. Users of the site have been skeptical of the rule changes because many of the regulations already on the books are applied haphazardly, with seemingly little reason behind which Tweets get removed and which are allowed to stay.

The blog post from Twitter goes on to reference studies from researchers showing the negative effects of widespread hate speech on populations and the need for large platforms like Twitter to be aware of how their site is used by hate groups.

Social media sites across the world continue to struggle with balancing free speech against hate speech. Facebook has faced an avalanche of criticism after it was implicated by the United Nations in its report on the Rohingya genocide in Myanmar. Human rights workers said Facebook allowed senior generals of Myanmar's army to use the platform not only to spread racist, misogynistic misinformation about the Rohingya ethnic group but also to stoke racial animus toward the group, giving the army further justification for its harsh -- and, according to the UN, illegal -- actions. Last month, Facebook removed the accounts of 20 officials in the Myanmar government for their actions during the genocide.

"Although improved in recent months, Facebook's response has been slow and ineffective. The extent to which Facebook posts and messages have led to real-world discrimination and violence must be independently and thoroughly examined," the UN panel told Reuters late last month.


Takeaways

  1. Twitter is aiming to make its website safer by banning dehumanizing speech targeting people "based on their membership in an identifiable group."
  2. Twitter is letting users comment on the changes in an online survey until October 9 before conducting a more thorough review of the rules and releasing them.


Jonathan is a Contributing Writer for CNET's Download.com. He's a freelance journalist based in New York City. He recently returned to the United States after reporting from South Africa, Jordan, and Cambodia since 2013.