Alphabet CEO Sundar Pichai, Facebook CEO Mark Zuckerberg, and Twitter CEO Jack Dorsey attended a congressional hearing held on March 25 about social media’s role in today’s society and how to regulate the platforms better. Today, Twitter announced a new CEO, Parag Agrawal. Agrawal had been Twitter’s CTO for years, and one of his semi-secret projects is called Project Bluesky.
In this hearing, both Democrats and Republicans tried to pin down the chief executives of the tech giants, seeking to hold them accountable for recent events.
Each lawmaker was given a strict five-minute window. To maximize those five minutes, they adopted a strategy that prevented the chief executives from stalling: limiting their answers to yes or no and cutting them off if they tried to elaborate. The lawmakers also tackled as many topics as they could, from coronavirus and climate-change misinformation to child exploitation and extremism. With this more direct method, they were able to get more answers than usual. Even so, the hearing still lasted about five hours.
Here are some of the issues raised against each chief executive.
Twitter’s CEO Ran Project Bluesky – A Revolutionary Way to Handle the Spread of Information to and from Twitter Followers
There are many potential outcomes and goals for this semi-secret Project Bluesky, and some are more universally agreeable than others. Given Twitter’s checkered past, one of the things the project focuses on is controlling the dissemination of false information. Depending on your perspective on corporate news, that description could easily fit your opponent’s major news networks or your own.
Like many efforts before it, Project Bluesky aims to solve the age-old dilemma of questionable content by forming relationships with teams and cultures across the decentralized web, from communities such as Mastodon to open protocols such as ActivityPub, to build a cohesive, open-source pathway for auditing information in real time. While this may prove impossible on its face, one very narrow niche idea may prove especially helpful. As the project’s Wikipedia entry puts it, “preventing virality algorithms from reinforcing controversy and moral outrage” is an issue they want to help manage.
It’s pretty interesting to note that this one goal alone would find approval on both sides of the over-politicized aisle, since both see this aspect of virality as a critical fuel source for incorrect information spreading across the web. The problem isn’t merely that the information is wrong; it’s that it is believable information designed to trigger deliberate emotional cues in the reader, pushing them to engage with the content and spread it like a virus across their social network.
The very act of sharing on emotion isn’t necessarily about attacking the information or spreading it, but about demonstrating your own value as an opponent of the content and context of the article. You are standing up and warning your audience and your followers that this is a real risk to them, and that they should know about this very real danger to their lives and their loved ones.
Virality Algorithms Reinforcing Controversy and Moral Outrage Is Happening on a Social Level for All Twitter Followers
Every day, articles compete for the main headline that caters to these highly sought-after, over-sharing members of society, both on Twitter and off. These followers love making new and alarming information viral, sharing it with their Twitter followers, YouTube subscribers, and blog readers. Moral outrage is the modus operandi, and their own virtue is what they are really trying to share. The result is a self-sustaining ecosystem: just by playing the game, you convert your raw time into a Twitter currency, functionally buying yourself Twitter followers who you know will be highly engaged with your content.
Even if sharers knew the information was outright wrong, the fallout for them would be minimal, because it’s not about the information; it’s about demonstrating that you are a good person in a morally questionable modern world. The moral outrage rarely comes with the behavior one imagines from “morally outrageous” activities, such as yelling, screaming, or kicking. People like Alex Jones, who have made entire careers off being morally outraged while acting it out, show just how rare that match is: the behavior doesn’t have to fit the supposed reaction. After all, the goal is really just to convince a few more people than the original reader to move their mouse ever so slightly and click the share button. What events might fit this new category of information, for better or for worse?
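To make the idea concrete, here is a minimal, purely hypothetical sketch of what “preventing virality algorithms from reinforcing moral outrage” could look like in a feed-ranking score. Every name, field, and weight below is an illustrative assumption, not Twitter’s or Bluesky’s actual algorithm: a predicted-engagement signal is discounted by a predicted-outrage signal, so a post cannot climb the feed on outrage-driven engagement alone.

```python
# Hypothetical sketch only: not any real platform's ranking code.
from dataclasses import dataclass


@dataclass
class Post:
    engagement: float  # assumed predicted probability of a share/reply (0..1)
    outrage: float     # assumed predicted moral-outrage intensity (0..1)


def rank_score(post: Post, outrage_penalty: float = 0.7) -> float:
    """Discount engagement that is driven by outrage.

    With outrage_penalty=0.7 (an arbitrary illustrative weight), a post
    whose outrage signal is high keeps only a fraction of its raw
    engagement score when ranked.
    """
    return post.engagement * (1.0 - outrage_penalty * post.outrage)


calm = Post(engagement=0.6, outrage=0.1)
inflammatory = Post(engagement=0.9, outrage=0.9)
# The inflammatory post is more "engaging" in raw terms,
# yet the penalty ranks the calm post above it.
```

The design choice this sketch illustrates is the one the article describes: treating outrage not as a bonus to engagement but as a tax on it, so believable-but-inflammatory content loses its structural advantage in the feed.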
Riot at the Capitol
One of the main topics of the hearing was the attack on the U.S. Capitol on January 6. Lawmakers wanted the social media executives to admit they were responsible for the organizing of the riot. Dorsey was the only one to say his company was partly responsible for the attack, though he called it unforeseen, stating that the company saw no signs of violence in the days before the attack.
Lawmakers were also suspicious as to why Donald Trump’s Twitter account was permanently banned. Dorsey clarified that the action was taken because the company believed Trump’s tweets were harmful and could lead to more violence.
During the hearing, Dorsey was also asked why racist hashtags are not banned from the network. The issue was raised because of the amount of hate Asians are receiving these days: anti-Asian movements have been spreading hate on Twitter with hashtags like #chinesevirus, and lawmakers wondered why such hashtags are still allowed. Dorsey responded that those who fight against racism can also use the same hashtags to their advantage.
Aside from the riot at the Capitol, another big topic at the hearing was the tech industry’s role in spreading false information. Pichai tried to establish his company’s innocence right off the bat. In his opening testimony, he claimed that Google had done a lot to limit misinformation, especially about COVID-19, and listed the ways the company is doing so.
For instance, when people search for information about the coronavirus, Google does not use its regular ranking algorithms. Instead, it provides links to the World Health Organization’s website as the most reliable source of information and displays data on how the virus is spreading at the top of the results. Furthermore, Google shows nearby locations where a user searching about vaccines can get a shot.
In addition to these efforts, Pichai revealed how the company removed misleading content: YouTube took down 850,000 videos, and Google blocked 100 million ads related to the coronavirus.
Exploitation of Children
Democrats claimed tech companies are not doing enough about cyberbullying and social media addiction among children and teens, while Republicans were concerned about how Google and Facebook make money by advertising to children. Pichai responded that children under 13 are not allowed on the site and that no money comes from them.
However, contrary to what he said, there is an ongoing case against YouTube Kids. YouTube, a Google subsidiary, faces allegations of collecting personal data from children without parental consent and using that information to send them targeted advertisements, both of which are prohibited by law. This led legislators to question why the platform even exists; they claimed the service aims to hook children on social media and exploit them for profit.
The law has long prohibited unauthorized drug sales on online platforms, and sellers caught doing so are naturally penalized accordingly. Still, such posts keep popping up. Rep. David B. McKinley (R-W.Va.) raised this subject with Mark Zuckerberg and asked why he should not be held liable as well.
The chief executive responded by insisting that drug sales were not happening on his network, but evidence of Ritalin, Xanax, and Adderall sales presented by lawmakers suggested otherwise.
Facebook said it is doing its best to remove this kind of content from its systems.
Facebook was also asked why it is not doing anything about misinformation concerning climate change. Climate change is one of the biggest problems of this generation, so spreading awareness is very important.
Rep. A. Donald McEachin (D-Va.) acknowledged Facebook’s efforts in fighting misinformation on this topic but was dissatisfied with how they were executed. He wanted to know why Facebook is not being as strict here as it is in its campaign against coronavirus misinformation.
Zuckerberg answered that Facebook understands climate change is a huge threat to all of us, but unlike the coronavirus, misinformation about it will not cause imminent physical harm to anyone. That distinction reflects how Facebook approaches misinformation: the company classifies content by how harmful it is to a person, then decides how tight regulation should be on that basis.
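The tiered approach described above can be sketched as a simple lookup from harm level to moderation action. The tier names and actions here are illustrative assumptions drawn from the article’s summary, not Facebook’s actual policy taxonomy: content judged capable of imminent physical harm gets the strictest treatment, while lower-harm misinformation gets a lighter touch.

```python
# Hypothetical sketch of a harm-tiered moderation policy; tier names
# and actions are illustrative assumptions, not Facebook's real rules.
HARM_TIERS = {
    "imminent_physical_harm": "remove",   # e.g. dangerous fake COVID-19 cures
    "long_term_societal_harm": "label",   # e.g. climate-change misinformation
    "low_harm": "allow",
}


def moderation_action(harm_level: str) -> str:
    """Map an assessed harm level to a moderation action.

    Unrecognized levels fall back to human review rather than a
    default of allowing or removing.
    """
    return HARM_TIERS.get(harm_level, "review")
```

The point the sketch captures is McEachin’s complaint in reverse: under this scheme, coronavirus and climate misinformation land in different tiers by design, so they receive different strictness even though both are misinformation.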
Calls to Make Changes in Section 230
Section 230 of the Communications Decency Act is a law that protects tech companies from being held responsible for their users’ posts. Lawmakers from both parties are petitioning for changes so that social media and the tech industry can be regulated better, though they disagree on how it should be executed. Democrats want companies to be stricter with the content uploaded to their websites: once a post violating the law is found, the company would also be held accountable. Republicans, on the other hand, want companies to loosen their content moderation, believing it takes away the freedom of speech of their users. Facebook has also offered suggestions, with which the Twitter and Alphabet CEOs agreed.