This combination of 2018-2020 photos shows, from left, Twitter CEO Jack Dorsey, Google CEO Sundar Pichai and Facebook CEO Mark Zuckerberg. (AP Photo/Jose Luis Magana, LM Otero, Jens Meyer)

Congressional lawmakers on both sides of the aisle seem to agree on at least one thing: Something needs to change in the world of social media. And that “something” is a key legal protection shielding platforms from liability over user content.

One effort is even bipartisan. Sen. Brian Schatz (D-Hawaii) and Sen. John Thune (R-S.D.) joined forces on the PACT Act, a bill that would force large technology platforms to remove unlawful content within four days of a court order.

But ask the leaders of Facebook, Twitter and Google parent company Alphabet how things should change, and the answers vary — from support for reform of the law, Section 230 of the Communications Decency Act, to a desire to keep it intact.

The three tech executives made their opinions known through written testimony ahead of a hearing by the U.S. House Committee on Energy and Commerce on Thursday, which will target the misinformation that fueled the Jan. 6 insurrection in the Capitol.

Considering the vast terrain of Facebook, Twitter and Google — which also extend to Instagram and YouTube — and their place in a broader sector that includes Pinterest, Snapchat, TikTok and many others, any repeal or reform of the law would be far-reaching in scope.

Here’s where the tech giants stand:

Facebook: Chief executive officer Mark Zuckerberg favors reforming the rule in a “thoughtful” way that makes liability protection conditional on whether companies try in good faith to moderate their own networks. Those that do shouldn’t be held liable “if a particular piece of content evades its detection,” he said, calling that standard “impractical for platforms with billions of posts per day.”

As for who should have the responsibility of evaluating a company’s efforts, it shouldn’t be the company itself. “Definitions of an adequate system could be proportionate to platform size and set by a third-party,” he said. Lawmakers should also “act to bring more transparency, accountability and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal.”

Twitter: Jack Dorsey believes Twitter has taken decisive action against vaccine and election misinformation, labeling problematic posts and banning repeat offenders, most famously former President Donald Trump. (Notably, Facebook and YouTube also suspended or shut down Trump’s accounts.) The Twitter CEO explained that moderating content “in isolation is not scalable,” and the company is testing ways to crowdsource the effort through a tool dubbed “Birdwatch.”

What goes a long way there, he acknowledged, is trust that the platform will handle content appropriately, and earning that trust demands transparency. “We believe that people should have transparency or meaningful control over the algorithms that affect them,” Dorsey wrote. “We recognize that we can do more to provide algorithmic transparency, fair machine learning, and controls that empower people.”

Google: Suggesting that no changes should come to Section 230, Alphabet’s Sundar Pichai pointed to “unintended consequences” of messing with the status quo. That could include potential impact on free speech, he noted, as well as on “the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.”

Still, Pichai seems to know that some sort of action is necessary, even if not legally mandated. Like Dorsey, he brought up transparency, along with better policies that notify users when their content is targeted for removal and allow those decisions to be appealed.

Across the board, the executives talked about transparency and the need to take other proactive measures. But only Zuckerberg seems open to, even supportive of, legislating that.

To understand why, it’s important to note that Facebook has made a big show of its efforts to address misinformation. The company now has 35,000 employees and multiple tools and programs dedicated to the work, with changes to its algorithms contributing to the removal of 1.3 billion fake accounts between October and December 2020.

Organizations like the online advocacy group Avaaz criticized how long those changes took, estimating that Facebook missed 10.1 billion views of toxic, misleading content ahead of the insurrection. Still, Facebook’s investment has reached a scale that few other platforms could match.

If Facebook is calling for reform that makes Section 230 protections contingent on, at a minimum, requirements like those it already meets, that would likely suit the social giant just fine. It has cleared that threshold, and it can put more resources toward compliance if need be. For other platforms, meanwhile, the law could turn into something else entirely, going from a shield to a weapon pointed in their direction.
