
Fake Russian accounts bought targeted Facebook ads during 2016 campaign

This sign, posted outside Facebook headquarters in 2011, depicts the symbol with which users give their approval to each other's posts. Now Facebook has disclosed that a Russian company linked to a Kremlin intelligence operation bought sponsored ads targeting voters during the 2016 presidential campaign. AP

Facebook representatives told House and Senate investigators Wednesday that a Russian company linked to a Kremlin intelligence operation used fake accounts to buy about $150,000 in ad posts targeting voters during the 2016 presidential campaign, people familiar with the matter said.

But a Facebook official told McClatchy that its search was intended to serve only as a starting point and was limited to accounts that could easily be traced to Russian actors — for example, if they were written in Russian or had a Russian Internet address.

The discovery, revealed to investigators for the congressional intelligence committees, marked the first confirmation that Facebook was at least an oblique tool of Russia’s election meddling campaign aimed at planting Donald Trump in the White House.

Facebook’s chief security officer said few of the roughly 3,000 ads found in its initial review, purchased over a two-year period beginning in June 2015, referenced the presidential campaign and only about 25 percent were geographically targeted. The ads focused “on amplifying divisive social and political messages across the ideological spectrum,” including gun rights and immigration, Alex Stamos said in a Facebook news post.

The disclosure, first reported by the Washington Post, is sure to fuel calls for a deeper review by Facebook into whether Russia also may have used other front companies or nonprofit groups to conceal the purchase of additional sponsored ads carrying harshly critical or fake news about Hillary Clinton.

“These disclosures may be the first layer in unraveling Russian efforts to utilize Facebook’s platform to influence voting behavior,” said Jonathan Albright, a Columbia University researcher who focuses on the digital spread of misinformation.

A Facebook official who declined to be identified said the ad purchases were traced to a maze of fake accounts emanating from a single company connected to a Russian “troll farm” in St. Petersburg that U.S. intelligence agencies have accused of circulating false information or propaganda that tended to benefit Trump.

The officials who described the disclosures said they lacked authorization to speak on the record.

Facebook, the popular social media network used daily by more than 1.3 billion people worldwide, has come under intense public pressure to take steps to ensure that it does not become an easy vehicle for spreading falsehoods about political candidates.

The congressional committees and a Justice Department special counsel, Robert Mueller, are investigating whether Trump’s presidential campaign may have colluded with Russia in its massive cyberattack on the election.

Virginia Sen. Mark Warner, the ranking Democrat on the Senate Intelligence Committee, said in a brief interview with McClatchy Tuesday that the panel was looking hard at Facebook’s role.

“Clearly, if you look at Facebook’s response or nonresponse around our election versus how Facebook dealt with similar attacks around fake news in the French election, there were very different results,” he said. “You know, part of that may be they’ve been learning as well. But obviously, we’ve got lots of questions to ask.”

Warner has previously raised questions about how Russia might have learned to target voters on Facebook’s network, suggesting that such targeting would have required help from a Republican operative or another American.

Facebook took aggressive steps in advance of France’s elections last spring to intercept and take down spam accounts that were spreading fake news. Russia gave financial backing to unsuccessful right-wing candidate Marine Le Pen.

Warner said Facebook can determine who bought sponsored ads related to an election, which buyers were political campaigns and which were “third parties” that might have concealed Russian involvement.

The company, which has been a staunch guardian of its clients’ privacy, has yet to say whether it would identify any such third parties to Congress or the special counsel’s office. A Facebook spokesman said only: “We are cooperating with the investigations.”

While much has been written about Russia’s use of automated accounts, known as “bots,” to spread fake news about Clinton via Twitter, Facebook’s network isn’t so easily penetrated. To reach most people’s Facebook pages, a nonpaying sender must have been accepted as a “friend” by the would-be recipient.

But advertisers routinely buy sponsored ads that arrive near the top of people’s private news feeds. Presidential campaigns have adopted this route as a way to target key subsets of the electorate.

In his news post, Facebook’s Stamos said the social media giant was able to trace the ads to 470 inauthentic Facebook accounts and pages created “in violation of our policies.”

“Our analysis suggests these accounts and pages were affiliated with one another and likely operated out of Russia,” he wrote.

Stamos noted that earlier this year, Facebook announced improved technology for detecting fake accounts, as well as other actions to curb the flow of misinformation over its network.

In a white paper last spring, Facebook said it intended to expand its security focus from traditional abuses such as hacking and financial scams to “more subtle and insidious forms of misuse, including attempts to manipulate civil discourse and deceive people.”

The company recently announced, Stamos said, that “we will no longer allow pages that repeatedly share false news to advertise on Facebook.” It is exploring several new automated methods for policing such content, he said.

California Rep. Adam Schiff, ranking Democrat on the House Intelligence Committee, called Facebook’s revelation “deeply disturbing” and said it raises questions “whether other platforms were similarly the subject of paid Russian interference, and whether geographic or other targeting reflects any potential coordination with the Trump campaign or other U.S. persons.”

Peter Stone is a McClatchy special correspondent.

Greg Gordon: 202-383-6152, @greggordon2
