Years back, most people had no idea what an algorithm was. Is it a musical instrument or a piece of construction equipment, they might have asked. Of course, the answer is no.
An algorithm is a finite sequence of well-defined, computer-implementable instructions that solves a problem. If that sounded like gibberish to you, here is an easier take: algorithms are the step-by-step rules computers follow to solve a problem or complete a task.
The thing is, the word is no longer as uncommon as it used to be, thanks to the rise of social media networks. They are always talking about how their algorithms enhance the user’s experience and the like. So, people have gained a general idea of what algorithms are. However, in a large share of the articles that mention algorithms, they are blamed for causing problems.
Negative Impacts Of Algorithms
Social media networks want us to keep scrolling our feeds for hours on end. To do so, they use algorithms that identify trending topics and surface them to us. Various studies have found that posts generating strong emotional responses are the most engaging. Unfortunately, anger is one of those responses. As a result, we often see triggering content that makes our blood boil, and it gets worse when we read the disagreements in the comments.

So, knowingly or unknowingly, social media algorithms divide us. It happens so much that some people have left social media for good, considering them “very toxic” websites. Their sentiment is understandable: social media platforms are supposed to be tools that connect people, yet they often do the opposite. Much of the blame falls on the algorithms.
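To make that dynamic concrete, here is a minimal, entirely hypothetical sketch of an engagement-weighted feed ranker. The `Post` fields and the weights are invented for illustration; this is not Twitter’s or any platform’s actual model, just the general shape of the incentive problem:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    angry_reactions: int  # outrage is engagement too

def engagement_score(post: Post) -> float:
    # Toy weights: every interaction counts, including anger.
    # A ranker optimizing raw engagement has no reason to
    # down-weight outrage -- that is the core problem.
    return 1.0 * post.likes + 2.0 * post.shares + 3.0 * post.angry_reactions

def rank_feed(posts: list[Post]) -> list[Post]:
    # Most-engaging posts first, regardless of the emotion driving them.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("cute cat photo", likes=120, shares=5, angry_reactions=0),
    Post("divisive hot take", likes=40, shares=30, angry_reactions=80),
])
print(feed[0].text)  # -> divisive hot take (score 340 vs 130)
```

Under these toy weights, the divisive post outranks the far more liked cat photo, which is exactly the failure mode the article describes.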
Keeping us angry all the time is not the platforms’ intention; they want us to enjoy our time on their apps and websites. Their current algorithms may have backfired and done the opposite, but that does not mean they cannot fix it. The largest social media platforms at least have the decency to admit that their algorithms are fuelling divisions, and for that, they apologize. But words are not enough, are they?
Social media giants are now investigating the negative effects their algorithms cause and may cause in the future. In Twitter’s case, it is launching its latest algorithmic research effort, called the ‘Responsible Machine Learning Initiative.’
Responsible Machine Learning Initiative
Twitter’s initiative will monitor the impact of shifts in its algorithms. It aims to prevent its machine learning systems from causing harms such as bias.
The Responsible Machine Learning Initiative will address four key pillars:
- Taking responsibility for our algorithmic decisions
- Equity and fairness of outcomes
- Transparency about our decisions and how we arrived at them
- Enabling agency and algorithmic choice
Twitter explains that its machine learning systems impact hundreds of millions of Tweets per day. The problem is that these systems can sometimes behave differently than intended. And even finding the root of a problem does not mean they can fix it immediately: they cannot simply modify the algorithms, because even subtle shifts can impact the people using Twitter. So, the developers need to study the changes and use their findings to build a better product.
The Algorithms Can’t Help But Learn To Divide Followers
By addressing the key pillars, Twitter hopes to find algorithms that pose fewer societal harms while still maximizing engagement. Admittedly, finding that middle ground will be difficult; the two goals often conflict. Still, that is a journey Twitter seems ready to take. The social media giant plans to tackle it by instituting more specific guidelines on how ML is applied, hoping that by doing so, it can build a more beneficial, inclusive platform.
Twitter’s META team will lead the Responsible Machine Learning Initiative. META stands for ML Ethics, Transparency, and Accountability; it is a dedicated group of engineers, researchers, and data scientists created by Twitter. These individuals will collaborate across the company to assess current or downstream unintentional harms in the algorithms Twitter uses. Furthermore, the META team is tasked with identifying which issues need to be prioritized.
The following are the first three analyses Twitter will conduct; the results will be made public in the coming months.

First, Twitter will do a gender and racial bias analysis of its image cropping (saliency) algorithm. Next is a fairness assessment of its Home timeline recommendations across racial subgroups. Last, Twitter will analyze content recommendations for different political ideologies across seven countries.
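As an illustration of what such a fairness assessment can look like, here is a toy demographic-parity check in Python. The subgroup names, data, and metric choice are assumptions for the sketch, not Twitter’s published methodology:

```python
from collections import defaultdict

def recommendation_rates(impressions):
    """impressions: list of (subgroup, was_recommended) pairs.
    Returns each subgroup's share of impressions that were recommended."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, recommended in impressions:
        total[group] += 1
        shown[group] += int(recommended)
    return {g: shown[g] / total[g] for g in total}

def parity_gap(rates):
    # Demographic-parity gap: difference between the most- and
    # least-recommended subgroups. 0.0 would mean equal treatment.
    return max(rates.values()) - min(rates.values())

rates = recommendation_rates([
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
])
print(rates)              # {'group_a': 0.75, 'group_b': 0.25}
print(parity_gap(rates))  # 0.5
```

A large gap would flag the recommender for closer inspection; real audits use richer metrics, but the basic question, does the system treat subgroups comparably, is the same.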
Explainable ML and BlueSky Initiative
Twitter is also introducing explainable machine learning solutions. These will help users have a better understanding of Twitter’s algorithms, what informs them, and how they impact what they see on the platform.
Responsible ML will also tie into Twitter’s ambitious BlueSky initiative, its effort to decentralize social networking. Basically, BlueSky will allow users to create their own social media platforms.
In tandem with Responsible ML, BlueSky will let users select which algorithms they want applied to their accounts, as opposed to being governed by an overarching set of platform-wide rules. With algorithmic choice, Twitter says, people will have more input and control in shaping what they want Twitter to be for them.
While these projects from Twitter look promising, their success is not guaranteed without user feedback. So, Twitter is asking its user base to send feedback, which it says will help it make more informed decisions and build a better product.
At the end of the day, we, the users, are the ones who will benefit the most from these shifts and changes. So it is just right that we give our support.