The Congressional probe into last year’s Capitol riot in Washington, D.C., landed on Silicon Valley’s front stoop on Thursday, with subpoenas from the House January 6 panel demanding answers from Facebook parent Meta, Twitter, Reddit and YouTube about misinformation in their networks.
While Congress remains divided on most matters, it has been united in its scrutiny of how tech platforms operate. The reasons vary, from accusations of censorship of conservative voices to letting false information proliferate. But the opacity cloaking these networks has frustrated both sides of the aisle, compelling them to haul in tech leaders to testify numerous times over the years.
The latest subpoenas mark a new level of pressure. Trying to understand the abstract issue of how social media shapes public discourse is one thing, but connecting the platforms to a concrete attack on the seat of democracy in a historic House investigation is another.
“Two key questions for the Select Committee are how the spread of misinformation and violent extremism contributed to the violent attack on our democracy, and what steps — if any — social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence,” Committee chair Bennie Thompson said, in prepared remarks.
“It’s disappointing that after months of engagement, we still do not have the documents and information necessary to answer those basic questions.”
An earlier request in August asked more than a dozen websites and tech companies to preserve records related to the riot. Specifically, the panel sought any analysis of 2020 election misinformation and extremism, material provided to law enforcement, and other relevant internal communications.
The responses, such as they were, proved underwhelming, prompting the subpoenas.
From here, Committee members may find that the “basic questions” are not so basic after all, because the way misinformation spreads on social media platforms resembles a feature more than a bug.
The same algorithms that infer user preferences to target ads or suggest products also recommend other forms of content. Such systems usually prioritize engagement, so people who like, share or comment on certain types of posts tend to see more of the same — welcome to the echo chamber effect.
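The dynamic can be illustrated with a toy ranking function. This is purely a sketch: the weights, field names and two-post feed are invented for the example, and real platform ranking systems are far more elaborate and not publicly documented. The point is only that when heavier interactions (comments, shares) carry more weight, content that provokes reaction rises to the top.

```python
# Toy illustration of engagement-weighted feed ranking.
# All weights and data below are invented for this sketch; actual
# platform ranking systems are proprietary and far more complex.

def engagement_score(post):
    """Score a post by weighted interaction counts."""
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]   # comments signal stronger engagement
            + 5.0 * post["shares"])    # shares spread content furthest

def rank_feed(posts):
    """Order a feed so the most-engaged-with posts come first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm_news",    "likes": 120, "comments": 4,  "shares": 2},
    {"id": "outrage_bait", "likes": 60,  "comments": 80, "shares": 40},
]

# The polarizing post wins despite fewer likes, because comments and
# shares are weighted more heavily than passive approval.
print([p["id"] for p in rank_feed(posts)])  # → ['outrage_bait', 'calm_news']
```

Under this scoring, the provocative post scores 500 against the calm one’s 142, so a feed optimizing pure engagement keeps surfacing it.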
On their own, these types of information bubbles can divide people, leading to societal or cultural fractures. Pour misinformation and disinformation into the mix, and violence starts to look inevitable. Add the perfect conditions to maximize its spread, and a recipe for widespread radicalization comes into view.
Those conditions aren’t easy to change.
Consider the context: For large, publicly traded tech companies already scaled to massive proportions, showing further growth — a key metric for investors — poses a significant challenge. Slightly better numbers aren’t enough. They need enough gains to prove they aren’t stagnating.
Meanwhile, they run automated systems designed to spur user activity, and therefore ad dollars and sign-ups. Platforms built this way favor content that stokes engagement, and few things incite human reaction more than fear and anger.
Of course, intentions vary from one company to the next, and so does their tech. That’s why painting all social networks with the same brush doesn’t necessarily offer an accurate view. A more nuanced assessment has been impossible, however, because their inner workings are closely guarded secrets.
But one example has emerged. Armed with thousands of pages of company documents, Facebook whistleblower Frances Haugen revealed how the platform morphed into an “angrier place” in 2018, following a major update to the recommendations algorithm that elevated polarizing content. At the time, chief executive officer Mark Zuckerberg said the change would promote “meaningful social interactions.”
It has. Some of those interactions have become so meaningful, they’ve grabbed the House Committee’s attention. But at least business has been brisk.
Year-over-year, Facebook’s first quarter revenue went from $11.97 billion in 2018, when Zuckerberg announced the change, to $15.08 billion in 2019 and $17.74 billion in 2020. Last year, revenue soared to $26.17 billion, due at least in part to surging online activity across the board during the pandemic. That amounted to a whopping 48 percent year-over-year growth, followed by a second quarter that clocked an even larger 56 percent increase, at $29.08 billion.
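The 48 percent figure follows directly from the two first-quarter numbers cited above; a quick calculation confirms it:

```python
# Year-over-year growth check for the quarterly revenue figures cited
# above (in billions of dollars).

def yoy_growth(prev, curr):
    """Percentage change from one period to the same period a year later."""
    return (curr - prev) / prev * 100

q1_2020 = 17.74
q1_2021 = 26.17

print(round(yoy_growth(q1_2020, q1_2021)))  # → 48
```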
Revenue then edged down to $29.01 billion in the third quarter of 2021, as the company battled an onslaught of bad PR over Haugen’s revelations. Long term, Facebook parent Meta sees growth in pursuing the virtual, 3D internet, or metaverse. In the meantime, it has to find a way to expand its already enormous footprint.
Now the House panel investigation demands answers from the company and its peers, with questions that cut to the heart of how these types of platforms work. In doing so, it also sends the unmistakable message that they must stop the spread of false information. But that may be far more difficult than lawmakers imagine.