I’m becoming increasingly interested in bad journalism.1 I wouldn’t go so far as to say that it’s dangerous or even especially intentional, if only to avoid the hyperbole for which I may one day directly criticize the industry.

It’s also important to avoid treating the industry as a monolith. Some outlets publish fewer stories that raise my eyebrows, just as there are some journalists who do quality work.

I also don’t want to pretend that bad journalism is a new thing. Sure, there are new pressures exerted by the Internet, but hoaxes and other false stories date back to the birth of the printing press, and beyond.

What I’m really concerned with is the ethical implications of some of the stuff that I come across. I recently wrote an article outlining how a great number of news outlets, some of them major, repeated a “fact” that appears to be false, something I discovered by applying a small amount of common sense and research.

So now, I’m going to talk about Russia, which in the current political climate is like sticking a fork into an electrically charged ant pile and hoping I don’t get shocked or stung. But, I’m after truth, and in this instance, it appears to be at the bottom of the ant pile, so in I go!

At this point, I think it’s hard to deny that Russia tried to have some sort of influence over America and continues to try to interfere in American politics. Discord in America is good for Russia’s geopolitical aims, so it would be strange if they didn’t.

It’s important to note that America has a long history of interfering in foreign elections. We don’t exactly have the moral high ground here, but that doesn’t mean that election meddling is right, and it doesn’t mean that we should be blind to its effects.

In fact, we should be doubly sure we understand what that meddling actually means.

We can’t know for certain what outcome was most desired by the Russians when they undertook their actions, primarily because we’re talking about intent, which we can’t know unless they come out and tell us directly, and they’re never going to do that.

The best we can do is look at their actions and try to extrapolate their intent. A good place to start is the ads that Russia ran on Facebook. Intriguingly, it wasn’t all pro-Trump, anti-Hillary material. There were pro-Bernie ads, which could fit the previous mold, but there were also anti-Trump ads in the mix. There were pro-Black Panther ads, pro-cop ads, anti-immigrant ads, and “Woke Blacks” ads, all topics that relate less directly to the election.

And then there’s the interesting claim by Rob Goldman, a Facebook executive, that Russia didn’t want to influence the election’s outcome so much as sow discord, which fits the pattern of the ads we’ve seen.

Furthermore, there’s strong evidence that Russia used Facebook to arrange a pro-Islam rally and an anti-Islam rally at the same time, in the same place, on the same day, in Texas. Something the Russians seem to understand is that discord requires two sides to fight, and they appear more than willing to provide both sides with incentive and support.

Additionally, the Russians didn’t create the claims that they ended up amplifying. Nate Silver writes, “Would Clinton still have been ‘Crooked Hillary’ even without the Russians? Almost certainly. But the Russians were at least adding fuel to the right fire — the one that wound up consuming Clinton’s campaign.”

On the other hand, 56% of ad buys occurred after the election, which leads me to believe that influencing the election’s outcome was either not their primary goal or merely one goal among many.

So why is it that the people who track bots on Twitter are reporting that they’re almost exclusively spreading right-leaning messages? For instance, after last week’s shooting in Florida, multiple outlets reported that “Russian bots exploited Parkland shooting to spread pro-gun propaganda.”

The pattern would suggest that Russia would be promoting both sides—why aren’t they doing so here?

Maybe they are.

Is it possible that the bot-trackers are only tracking part of the network? Have they set their criteria wrong, and are they consequently feeding bad info to reporters?

The primary source reporters appear to be using is Hamilton 68. Right off the bat, I have several problems with it.

First, the data provided is an abstraction/summary based on the monitored accounts, which means reporters are forced to use the website’s conclusions. The selection process for the monitored accounts is also somewhat subjective, listed on the site as the following:

1) identified as participating in specific disinformation campaigns synchronized with Russia Today and Sputnik News,

2) meaningfully linked to users who self-identified as promoting pro-Russian viewpoints, and

3) bots that provide support to members of the first two categories.

While the first point is objective, the second and especially the third selection criteria require at least some level of human interpretation to ascertain intent. Subjectivity clearly plays a role.

Second, the number of monitored accounts is shockingly low. As of today, Hamilton 68 is aggregating data from 600 accounts—barely more than 1% of the 50,000 that Twitter claims are active. That alone suggests their data isn’t representative.

Third, their FAQ page is written so poorly that one might suspect they are Russian bots themselves. I jest, but there’s a general wonkiness that suggests an engineer wrote the page rather than someone who knows how to write.2

Fourth, their criteria for selecting bots explicitly exclude “domestic political content,” which was a major factor in the Facebook ad spending. While it is possible that the Russians are using a different strategy on Twitter, the onus falls on Hamilton 68 to explain why that is. Could a significant piece of the picture be missing because they chose to ignore it at the outset?

Fifth, Hamilton 68 doesn’t publicly list which accounts they are monitoring. They claim that this is done to prevent a sort of arms race: public identification might spur the Russians to change what they are doing to obscure their efforts even further. The problem is that keeping this information hidden makes Hamilton 68’s claims unverifiable.

Sixth, there are conflicts of interest within the organization. Hamilton 68 is financed by the Alliance for Securing Democracy. Its director is Laura Rosenberger, who previously served as a foreign policy advisor for Hillary for America, an organization that raised and spent more than $586 million on Clinton’s behalf during the 2016 presidential campaign. Another staffer, Brittany Beaulieu, worked for at least three Democrats as an advisor on foreign relations.

Also on the team is Jamie Fly, who previously worked for Marco Rubio, a noted neo-conservative, and at times a never-Trumper. The other two staff members are David Salvo and Bret Schafer, whose political alignments I’ve been unable to discover.

As it stands, three-fifths of the primary team have worked in capacities that suggest they would reasonably hold anti-Trump or pro-Hillary bias, which is concerning.

Their advisory board is far worse. No fewer than four of the members3 worked in the Obama administration, for Joe Biden, or on Hillary Clinton’s campaign.

Another of the members, retired Admiral James Stavridis, was vetted to become Hillary Clinton’s running mate.

Michael Morell, former acting CIA director, wrote an op-ed endorsing Clinton which was published in the New York Times.

Mike Rogers hosts a show on CNN, was part of the committee that cleared Hillary Clinton of wrongdoing on Benghazi, and has accused the President of peddling conspiracy theories. It would be hard to say he’s providing any kind of objective advice.

Another Republican on the board, Michael Chertoff, called Trump “hysterical,” and said he would vote for Hillary Clinton. Similarly, Politico writes that Bill Kristol and Donald Trump “have emerged as rivals of almost cartoonish contrasts.”

Kori Schake self-identifies as a never-Trumper, and also signed a document stating that the signees would “commit [themselves] to working energetically to prevent the election of someone so utterly unfitted to the office.”4

All told, ten of the twelve board members have either worked for Trump’s opposition or made public comments in opposition to him.5

You can’t make this kind of stuff up. Like, really?

Is it any wonder that when the organization searches Twitter for Russian propaganda, it only seems to turn up right-leaning content?

This is especially concerning given that subjectivity is built into its selection criteria, and the staff is full of people who have expressed distaste for the current conservative president.

I believe that the idea of tracking what the Russians are trying to do over social media is important. But we need a truly bipartisan group overseeing the project. As it currently stands, at least 13 out of 17 key members of Hamilton 68 are strongly biased in just one direction, and it seems to show in the results.

Journalists need to stop citing Hamilton 68 as a major source because they simply can’t be trusted—remember that they won’t reveal which accounts they are monitoring. There’s no way to fact-check them. We shouldn’t trust them just because they say they are trustworthy.

It’s important to note that they refer to both their organization and their advisory council as “bipartisan” and “transatlantic,” which is strange given its composition: mostly American Democrats and Never-Trumpers.6 If they can’t be trusted to correctly self-label, can their results be trusted?

Beyond the general questions about its credibility, Hamilton 68 admits some things about itself that make me wonder why the media treats it as a good source of information on what the Russian government is doing to influence America.

Some relevant quotes from Hamilton 68’s “How to Interpret the Hamilton 68 Dashboard” page:

“[I]t is neither our assertion nor belief that all messaging on the dashboard is created or approved by the Russian government. It would, therefore, be inaccurate to describe the network as a hierarchical, centrally controlled operation, nor would it be correct to assume that all activity within the network is the product of a synchronized disinformation campaign.”

“We also do not believe the influence network monitored by the dashboard is the only Russian-linked influence network employed by the Kremlin and its proxies. This is but one sample in a wide-ranging population of Kremlin-oriented accounts that pursue audience infiltration and manipulation in many countries . . .”

“The presence of a hashtag on the Hamilton 68 dashboard also does not necessarily suggest that the success of a hashtag on Twitter is the result of Russian influence operations, nor does it mean that the hashtag originated within the network we monitor. These networks often participate in trending hashtags, topics, and URLs that they did not originate, and that would likely promulgate without their support. Because of the points above, we emphasize that it is INCORRECT to describe content linked-to by this network as Russian propaganda. Rather, content linked-to by this network is RELEVANT to Russian messaging themes, and is used for purposes of both insinuation and influence.”

Intriguingly, they seem to be stating that the dashboard has limited use: it’s not terribly comprehensive, and the network it tracks produces little of its own material, instead amplifying pre-existing trends.

So, it’s really concerning when CNN reports that “Russian bots promote pro-gun messages after Florida school shooting,” and other organizations like Wired and Vanity Fair follow suit. Vanity Fair goes as far as to state that “[t]hese troll and bot armies seem to follow a specific strategy for injecting hashtags, memes, and conspiracies into the mainstream,” which stands in stark contrast to what Hamilton 68 says is going on.

Furthermore, Forbes published an article which claims that the bots are not, in fact, pushing pro-gun messages,7 and extreme misinterpretations of the Hamilton 68 Dashboard seem to be at fault.

Hamilton 68 isn’t doing much to correct the confusion. The aforementioned article from Wired reports that Bret Schafer, a staff member at Hamilton 68, “says the spike in shooting-related posts from Russia-linked bots is in line with what his group observed after last year’s shootings in Las Vegas and Texas.” Even if they don’t have final say in what the articles say, they’re actively participating in their production, which makes me think that they have to know that journalists are misinterpreting what they’re trying to say.8

You can’t trust Hamilton 68, and you should take any article that bases its conclusions on the site with a grain of salt. After all, it appears journalists are willing to bend the stats to fit a certain agenda, one they likely share with those who run Hamilton 68.

If Hamilton 68 wants to be taken seriously, it needs to come up with good answers to these questions:

  1. Why is it that you call your organization bipartisan when the vast majority of your staff openly identify as far-left, left, or moderate?
  2. What steps are you going to take to help prevent misinterpretation of your Dashboard?
  3. Why is your data so different from that released by Facebook?
  4. How are you going to improve your bot tracking in order to have a more representative sample?

If you liked this article be sure to check out these:
California 3-Pack
Decertified: The JCPOA Story
Diplomat, Go Home!

  1. Sorry about not posting yesterday. I was working on this and it turned out to be way more interesting than I initially thought, so you’re getting a massive double-post today.
  2. The use of the word “amongst” in particular bothers me.
  3. Specifically, Nicole Wong, Jake Sullivan, Julianne Smith, and Mike McFaul.
  4. They’re referring to Trump if that’s unclear.
  5. I also have my doubts about the other two. David Kramer worked for never-Trumper John McCain and was subpoenaed in the Steele Dossier investigation. Toomas Ilves wrote an op-ed in The Washington Post advising Trump on how to respond to Russian meddling. Both are concerning, but the evidence against them is not as strong as against the other people, which is why I’m listing them here, and not in the body of the piece.
  6. Even if we count the Never-Trumpers as Republicans, they are still greatly outnumbered by Democrats.
  7. A quote from the article: “Finally, there’s the issue of claiming that the messages being pushed are “pro-gun.” Just looking at the topics and hashtags used does not tell us what the content of the tweets contains and certainly does not reveal that they are mostly “pro-gun.” Some of them, like #gunreformnow and #guncontrol, which were most popular, seem more likely to have been anti-gun.”
  8. Or, more sinisterly, they’re okay with the misinterpretations being published.