The social media platform Twitter creates rules to keep users safe on its platform. These rules continuously evolve to reflect the realities of the world in which the company operates. A central focus of Twitter is addressing the risk of offline harm, and research shows that dehumanizing language can increase that risk.
As mentioned, the company continually develops the Twitter Rules to respond to ever-changing challenges and behavior while serving the public conversation. It also understands the importance of considering a global perspective and of identifying the impact its rules may have on various communities and cultures. Over the past year, Twitter has prioritized feedback from its users, external experts, and its own teams to inform the ongoing development of the platform’s hateful conduct policy.
Expanding Twitter’s Hateful Conduct Policy
Twitter encourages people to express themselves freely on the platform. Still, harassment, abuse, and hateful conduct continue to have no place there. In July 2019, the company expanded its rules against hateful conduct to include language that dehumanizes other people on the basis of religion or caste. Then, in March of this year, it expanded the rule further to cover language that dehumanizes people based on ethnicity, race, or national origin.
In addition, tweets that degrade other people must be removed from the platform once other users report them. The company will continue to surface potentially violative content through proactive detection and automation. If an account repeatedly breaks the Twitter Rules, the platform may temporarily suspend or block the account.
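The enforcement flow described above (remove a reported tweet that violates the policy, and escalate repeat offenders to suspension) can be sketched as a simple strike counter. This is a hypothetical illustration only; the thresholds and function names are invented, and Twitter’s actual enforcement system is not public:

```python
from collections import defaultdict

# Hypothetical thresholds -- not Twitter's real values.
SUSPEND_THRESHOLD = 3   # temporary suspension after 3 confirmed violations
BAN_THRESHOLD = 5       # stronger action after 5

_strikes = defaultdict(int)  # confirmed violations per account

def handle_report(account: str, tweet_violates: bool) -> str:
    """Process a user report: remove the tweet if it violates the
    policy, and escalate accounts that repeatedly break the rules."""
    if not tweet_violates:
        return "no_action"
    _strikes[account] += 1  # tweet is removed; record a strike
    if _strikes[account] >= BAN_THRESHOLD:
        return "tweet_removed;account_banned"
    if _strikes[account] >= SUSPEND_THRESHOLD:
        return "tweet_removed;account_suspended"
    return "tweet_removed"
```

The design point the article makes is that removal is report-driven while repeat behavior, not a single tweet, triggers account-level action; the strike counter captures that separation.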
Twitter’s Approach to Addressing Hateful Conduct on Its Platform
The Twitter Rules set expectations for everyone who uses the platform. They are updated to keep pace with the ever-evolving online speech, behaviors, and experiences that Twitter observes. The company applies an iterative, research-driven approach when expanding its rules, and it reviews and incorporates public feedback to ensure that a wide range of perspectives is considered.
With every update to the policy, Twitter has sought to deepen its understanding of cultural nuances and to enforce its rules consistently. The company has also benefited from feedback from the many communities and cultures that use Twitter, wherever they are.
Below are the consistent themes in the feedback Twitter receives:
Narrowing Down What Is Considered
Respondents said the term “identifiable groups” was too broad. They felt users should still be allowed to engage with hate groups, political groups, and other non-marginalized groups using this type of language.
Many individuals wanted to be able to call out hate groups in whatever language they chose. There are also instances in which people address friends, fans, and followers with endearing terms such as “monsters” and “kittens.”
In terms of language, Twitter users believed the proposed change could be improved with more detail, examples of the violations covered, and an explanation of when and how context is considered. Twitter incorporated this feedback when refining the rule, and it made sure to provide additional detail and clarity across all the rules it publishes.
Consistent Enforcement
Many Twitter users raised concerns about the platform’s ability to enforce its rules consistently and fairly. In response, the company developed a longer, more in-depth training process for its teams to ensure they are better prepared when reviewing reports.
That said, despite these improvements, Twitter recognizes that it will still make mistakes. It is committed to continuing to strengthen both its appeals process and its enforcement process, aiming to correct its mistakes and prevent similar ones going forward.
Twitter’s Trusted Partners
Twitter is aware that it does not have all the answers. For this reason, beyond public feedback, the company works in partnership with its Trust & Safety Council and with other organizations around the globe that have deep subject matter expertise in this area.
As part of the update, Twitter has also convened a global working group of third-party experts. They will advise Twitter on how to appropriately address dehumanizing speech that revolves around the complicated categories of ethnicity, race, and national origin. These experts have also helped Twitter better understand the challenges it faces, and they will help the company answer questions such as:
1. How Twitter protects conversations that people have within marginalized groups, such as those that use reclaimed terminology.
2. How the company can factor into its evaluation of the severity of harm whether a given protected group has been historically marginalized or is currently being targeted.
3. How it ensures that its range of enforcement actions takes context fully into account.
4. How Twitter accounts for power dynamics that can come into play in various groups.
Conclusion: It’s Getting Better!
Twitter says it will continue to build the platform for the global community it serves and will ensure that people’s voices help shape its rules and how the company works.
The company also said it continues to look for opportunities to expand and evolve its policies so that it can better handle the challenges it currently faces. It will keep users updated on what it learns and how it plans to respond, and it will continue to provide regular updates on all the other work it is doing to make the platform a safer place for users.