Facebook is still working to take user privacy more seriously, and its newest changes are aimed directly at advertisers.
As of July 2, the platform will require advertisers to observe additional safeguards around the user data that forms the basis of targeted ads, what Facebook calls "custom audiences," including disclosing where a given brand or marketer obtained the information it's using for targeting.
Sometimes the origin of a targeted user's information is an opt-in source, such as a newsletter sign-up, but often it is purchased from third-party firms that do little more than gather and sell people's online data. When the latter is the source of an advertiser's targeting list, Facebook said, "We want to make that clearer to people."
So now advertisers will have to "specify the origin of the audience's information," be it customers themselves, data brokers or a mix of the two, and Facebook will surface that information to users who click "Why am I seeing this ad?"
Beyond that, Facebook will also require any business or brand using the platform to collect user data to have "any necessary permissions" from users before sharing it with another party, such as an advertising firm.
These changes are unlikely to excite brands, advertisers and third-party data brokers, but they look to be something of a compromise. In the immediate aftermath of the Cambridge Analytica scandal, Facebook is said to have floated banning data from third-party brokers entirely, a move that created "a lot of pushback and concerns" from advertisers, an industry executive told WWD last month.
But this is unlikely to be the last change Facebook makes to its data practices as scrutiny of the issue continues. The platform is also still dealing with fallout from its role in the advent of "fake news." It remains in something of a war of words with major news outlets over its response, which has included allowing its algorithm to lump political stories in with political ads, along with plans to introduce a "trustworthiness" barometer for news items distributed on Facebook.

Meanwhile, the rise of fake news has more people turning to the messaging app WhatsApp to share and discuss news, according to new research from the Reuters Institute. Since Facebook owns WhatsApp, that is less a blow to the platform than a notable shift in how people use social media to discover and share news. Driven partly by Facebook's decision to downplay news in users' feeds, use of social media to get news dropped 6 percent over the last year, after nearly a decade of "relentless growth."
Reuters Institute's Nic Newman said "change is in the air" in media, as more news outlets focus on higher-quality content and paid subscriptions, rather than clickbait created mainly to pitch advertisers, while the platforms look to incorporate "trust" into their algorithms in light of politically driven manipulation.
Even so, Newman said, "trust in news remains worryingly low in most countries," including the U.S.
“On the business side, pain has intensified for many traditional media companies in the last year with any rise in reader revenue often offset by continuing falls in print and digital advertising,” Newman added. “Part of the digital-born news sector is being hit by Facebook’s decision to downgrade news and the continuing hold platforms have over online advertising.”