Social media sites like Facebook and YouTube are rife with misinformation and violence.

The debate around one of the few laws governing the Internet has heated up in recent years, but a new report from New York University lays out why the law needs an update and an expansion, not a repeal.

Under Section 230 of the Communications Decency Act, social media companies are shielded from liability for whatever their users post, no matter how false the content or how harmful, whether to another person or to something broader, like a presidential election. The law was enacted in 1996, two years before Google was founded, eight years before Facebook, nine years before YouTube, 10 years before Twitter and 14 years before Instagram.

President Trump — vying for reelection in November amid an ongoing pandemic, an economic recession and several revealing books by journalists and former confidants — again on Tuesday tweeted his position that Section 230 should be repealed in its entirety, echoing unfounded claims of conservative bias online. “Stop biased Big Tech before they stop you!” he wrote.

But scholars at NYU’s Stern Center for Business and Human Rights think Section 230 is still important, if severely outdated.

“The law is still useful, but it could be more useful if it were revamped, if it didn’t just grant immunity to social media platforms and other commercial Internet entities and instead made that protection conditional,” Paul Barrett, the center’s deputy director and author of the report, said.

Under the center’s recommendation, Section 230 would remain essentially intact, but companies like Facebook, Twitter and YouTube would have to work a little to keep its protection. They would have to take “reasonable steps” to monitor and reduce the amount of harmful content on their platforms and remove content that is demonstrably false. Companies could opt out, but would then be open to litigation over user content.

“There are people who have different wish lists of what they want to see the platforms doing, but the basic theme we heard is increased transparency and accountability,” Barrett said.

But rules also need enforcement. So Barrett recommends that a “digital regulatory agency” be formed to oversee the commercial Internet and ensure that Section 230, and the suggested updates to it, are adhered to.

“It is an anomaly that the commercial Internet doesn’t have any kind of sustained oversight from Washington, D.C., when every other industry does,” Barrett said. “It would be helpful to have something like the FDA for the Internet, and I don’t see any logical reason at this stage for why that doesn’t exist.”

Such a regulatory body could require all Internet companies to disclose their algorithms, shedding light on exactly how and how fast misinformation spreads online — and how much such companies gain from its spread. It would also be allowed to sanction companies when they fail in combating the proliferation of harmful content.

Outside of momentary dips in stock prices and an occasional advertiser boycott, Internet leaders face almost no consequences for trading in misinformation, aggressive hate speech and sometimes extreme violence.

Facebook, for instance, has only recently started experimenting with labeling news and videos that are intentionally fake or misleading, but it has repeatedly declined to remove such content. When a gunman in New Zealand last year livestreamed on the platform his killing of dozens of people in two mosques, it reportedly took Facebook an hour to remove the video, and only after local police reported it. By then it had been viewed thousands of times and copied. Versions of the video remain on the site today, as well as on YouTube, which is owned by Google.

Instead of facing a dedicated regulatory body, leaders of Internet platforms, namely Mark Zuckerberg of Facebook and Jack Dorsey of Twitter, have been called before House and Senate committees to be publicly grilled on internal policies and decision-making. Democratic lawmakers have tended toward questions about misinformation and government findings of Russian use of social media to influence U.S. elections. Republicans have leaned toward complaints of alleged bias against conservative users, a line of thought Trump openly supports.

It’s this lack of forward momentum in politics that has Barrett expecting little in the way of big changes to Internet law anytime soon, and certainly not before the November presidential election. It’s not the companies standing in the way.

“The companies would be willing to go along with certain types of regulation,” Barrett said. “But the partisan politics and the paralysis we’ve seen in Washington in recent years are far more daunting obstacles than getting the companies on board.”

And given the current Trump and Republican hold on what legislation gets through, Barrett doesn’t foresee the partisanship around Internet regulation changing unless government leadership does.

“We won’t see serious reform in this area,” he said, “until and unless there is Democratic control of both Congress and the White House.”

For More, See:

Facebook’s Civil Rights Auditor Calls Recent Decisions ‘Painful’

Facebook Tells Advertisers There’s No Perfect ‘Fix’ for Hate Speech

Who Will Regulate the Influencer Industry?
