Twitter has announced an update to its “hateful conduct” policy to cover content that “dehumanises others” based on their membership in an identifiable group, even when the material does not directly target an individual.
For the last three months, the micro-blogging platform has been developing a new policy to address dehumanising language, which treats people as less than human, Twitter said late on Tuesday.
“Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, religious affiliation, age, disability or serious disease).
“But there are still Tweets many people consider to be abusive, even when they do not break our rules.
“Better addressing this gap is part of our work to serve a healthy public conversation,” Vijaya Gadde and Del Harvey from Twitter said in a blog post.
Many scholars have already examined the relationship between dehumanisation and violence.
Twitter is now asking its 336 million users for feedback to understand how the policy may affect different communities and cultures.
“For languages not represented on our platform, our policy team is working closely with local non-governmental organisations and policy makers to ensure their perspectives are captured,” said the blog post.
Users have until October 9 to provide Twitter with feedback on the new policy.
“We’re experimenting with a new way to write and roll out policy and rules. Let us know what you think,” tweeted CEO Jack Dorsey.