I was determined not to talk about fake news again this week. I'd had in mind to do something about how the law affects the internet, but there were just too many good stories this week, especially the absolutely excellent writeup by Recorded Future about Chinese activity in influence operations.
As the article says, there's a decided difference between an influence operation and an information manipulation operation. States have for centuries conducted various forms of propaganda operations, both abroad and, often, on their own citizens. We have a built-in assumption that western democratic states don't lie to their citizens, but TV shows and great films have been made about the art of spin, and the ability of the media to highlight certain facts and hide others.
We view information manipulation as harmful distortion of facts, and that's a good line to draw in the sand. Unfortunately, while the focus is on hostile states conducting such operations, whether in the UK, the US or elsewhere, western democracy is quite capable of protecting, and even encouraging, its own citizens' use of free speech to spread harmful misinformation. That tension between the freedom to speak one's mind against the state and the ability to spread harmful misinformation, such as the claimed link between vaccines and autism, is a tough one for a democratic society to live with. Is the casual spread of disinformation the cost of having free speech?
As with the reasons bridges collapse, I think a low level of misinformation is probably an acceptable cost. What has changed is the sudden creation of amplifiers: social networks and group chats let you spread your views much further and wider than was possible even just a decade ago. We've created harmful feedback mechanisms that simply exacerbate the damage, and we don't yet have good dampeners to reduce that feedback cycle.
Accompanying the article was an edited stock image of a generic millennial chap in plaid shirt and standard-issue beanie, or "trendy winter attire", as Getty put it.
The MIT journal's editor-in-chief, Gideon Lichfield, took to Twitter to tell a "cautionary tale" about what followed the article going live:
"We promptly got a furious email from a man who said he was the guy in the photo that ran with the story. He accused us of slandering him, presumably by implying he was a hipster, and of using the pic without his permission. (He wasn't too complimentary about the story, either.)"
Lichfield pointed out that he didn't think calling someone a hipster was "unflattering or unduly controversial" but contacted Getty to be safe.
The stock photo giant checked the model release and lo! The guy in the image wasn't even the same dude who was complaining. "He'd misidentified himself," Lichfield said.
"All of which just proves the story we ran: hipsters look so much alike that they can’t even tell themselves apart from each other."
I have no cyber angle for this, it just amused me a lot this week.
Have I Been Pwned is an aggregator that was started by security expert Troy Hunt to help people find out if their email or personal data has shown up in any prominent data breaches. One service it offers is a password search that allows you to check if your password has shown up in any data breaches that are on the radar of the security community. In this case, "ji32k7au4a83" has been seen by HIBP in 141 breaches.
Several of Ou's followers quickly figured out the solution to his riddle. The password comes from the Zhuyin Fuhao system for transliterating Mandarin. The reason it shows up so often in breach repositories is that "ji32k7au4a83" translates to English as "my password."
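For the curious, the decoding can be sketched in a few lines. This is a minimal sketch assuming the standard Daqian (Zhuyin) keyboard layout; the map below covers only the keys that appear in this one password, not the full layout.

```python
# Partial map from QWERTY keys to Zhuyin (bopomofo) symbols on the
# standard Daqian keyboard layout -- only the keys used in this example.
DAQIAN = {
    "j": "ㄨ", "i": "ㄛ", "k": "ㄜ", "u": "ㄧ", "8": "ㄚ",
    "2": "ㄉ", "a": "ㄇ",
    "3": "ˇ", "4": "ˋ", "7": "˙",  # tone keys
}

def to_bopomofo(keys: str) -> str:
    """Replay a string of QWERTY keystrokes as bopomofo symbols."""
    return "".join(DAQIAN[c] for c in keys)

print(to_bopomofo("ji32k7au4a83"))
# -> ㄨㄛˇㄉㄜ˙ㄇㄧˋㄇㄚˇ, i.e. 我的密碼, "my password"
```

Each group of keys ending in a tone key is one syllable, so what looks like a random alphanumeric string is really four ordinary keystroke sequences.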
In yet another "Today I Learned", some fairly complex-looking English passwords are actually simple passwords in other languages.
Luckily, Have I Been Pwned publishes a list of the most common passwords, allowing you to check your users' passwords against it and ensure they aren't using weak passwords that are commonly breached, without needing to know what those passwords mean or why they are popular.
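If you'd rather check programmatically, HIBP's Pwned Passwords range API lets you do this without ever sending the password itself: you send only the first five characters of its SHA-1 hash and match the remainder locally (a k-anonymity scheme). A minimal sketch using only the Python standard library:

```python
import hashlib
import urllib.request

def hash_split(password: str):
    """SHA-1 the password and split it into the 5-char prefix sent to
    the API and the 35-char suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def match_count(api_body: str, suffix: str) -> int:
    """Parse the 'SUFFIX:COUNT' lines returned for a prefix and return
    the breach count for our suffix (0 if absent)."""
    for line in api_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

def pwned_count(password: str) -> int:
    """How many times HIBP has seen this password in breaches."""
    prefix, suffix = hash_split(password)
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        return match_count(resp.read().decode("utf-8"), suffix)
```

A non-zero count is a strong signal to reject the password at registration time; the password itself never leaves your machine.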
The result of the commands is then posted to a private Slack channel in a particular workspace using the embedded tokens.
Note that a side effect of this particular setup is that the attacker has no way to issue commands to a specific target. Each infected computer will execute the commands that are enabled in the gist snippet upon checking it.
The attackers also appear to be professionals, based on the way they handled their attack. They used only public third-party services, and therefore never needed to register domains or anything else that could leave a trail. The few email addresses we found during the investigation also used throwaway email services, leaving the attackers a clean footprint. Finally, the watering hole the attackers chose would appeal to those who follow political activities, which might give a glimpse into the nature of the groups and individuals being targeted.
This is an interesting attack in that it uses newer techniques. With companies increasingly turning to Slack and GitHub, traffic to those services is almost certainly going to be allowed, and it is hard to inspect as part of a command-and-control mechanism.
It is worth noting that the watering-hole attack exploits a known and patched vulnerability (patch often!), and the malware immediately exits if it detects almost any antivirus program running on the machine.
This update includes 1 security fix. Please see the Chrome Security Page for more information.

[$N/A] High CVE-2019-5786: Use-after-free in FileReader. Reported by Clement Lecigne of Google's Threat Analysis Group on 2019-02-27

Google is aware of reports that an exploit for CVE-2019-5786 exists in the wild. We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.
A bug in Chrome that affects the stable build but has also been found in the wild?
Let's hope that your desktop estate allows you to roll patches out quickly and easily, because this needs to be patched ASAP.
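If memory serves, the fix shipped in the stable channel as Chrome 72.0.3626.121 (worth double-checking against the release notes for your platform), so a quick estate check reduces to a dotted-version comparison against whatever version string your inventory tooling reports:

```python
import re

# Assumed first stable build carrying the CVE-2019-5786 fix --
# verify against the Chrome release notes for your platform.
FIXED = "72.0.3626.121"

def version_tuple(v: str):
    # "72.0.3626.119" -> (72, 0, 3626, 119), so tuples compare numerically
    return tuple(int(part) for part in v.split("."))

def parse_chrome_version(output: str) -> str:
    # e.g. "Google Chrome 72.0.3626.119" from `google-chrome --version`
    match = re.search(r"\d+(?:\.\d+)+", output)
    if not match:
        raise ValueError("no version string found in: " + output)
    return match.group(0)

def needs_patch(installed: str) -> bool:
    return version_tuple(installed) < version_tuple(FIXED)
```

Feed it the output of `google-chrome --version` (or your asset inventory's version field) and flag anything where `needs_patch` returns true.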
Based on this investigation, the Subcommittee concludes that Equifax’s response to the March 2017 cybersecurity vulnerability that facilitated the breach was inadequate and hampered by Equifax’s neglect of cybersecurity. Equifax’s shortcomings are long-standing and reflect a broader culture of complacency toward cybersecurity preparedness. The Subcommittee also lacks a full understanding of the breach, as the company failed to preserve relevant messages sent over an internal messaging platform.
A damning report into Equifax, with a few key phrases in the executive summary alone.
" The usernames and passwords the hackers found were saved on a file share by Equifax employees. Equifax told the Subcommittee that it decided to structure its networks this way due to its effort to support efficient business operations rather than security protocols. "
"Equifax conducted an audit of its patch management efforts, which identified a backlog of over 8,500 known vulnerabilities that had not been patched. This included more than 1,000 vulnerabilities the auditors deemed critical, high, or medium risks that were found on systems that could be accessed by individuals from outside of Equifax’s information technology (“IT”) networks"
So that leaves 7,500 vulnerabilities for insiders to use, and I'm absolutely certain that anyone able to exploit any of the thousand externally accessible vulnerabilities would then be on the network and able to take advantage of the remaining ones.
But critically is this little nugget: "In addition, the Chief Information Officer (“CIO”), who oversaw the IT department during 2017, referred to patching as a “lower level responsibility that was six levels down” from him [...] The Subcommittee interviewed current and former Equifax employees from the information security and IT departments. Their responses varied, but most said they believe that the security team’s actions were an appropriate response to the Apache Struts vulnerability. [...] The CIO at Equifax from 2010 to 2017 [... He] does not think Equifax could have done anything differently."
I think the problem here is really clear, and, as I keep saying, if you have a cybersecurity team, a CIO and a technology strategy, and patching is not item number one on the agenda, then everything else is entirely pointless.
In a blog post written by Monika Bickert, Facebook’s vice president of global policy management, Facebook said it will begin rejecting ads that include false information about vaccinations. The company also removed targeting categories such as “vaccine controversies” from its advertising tools. Last month, the Daily Beast reported that more than 150 anti-vaccine ads had been bought on Facebook, which often targeted women over 25. Some of the ads were shown to users “interested in pregnancy.” In total, they were viewed at least 1.6 million times. YouTube similarly announced last month that it would begin preventing ads from running on videos featuring anti-vaccine content.
This is a good start on trying to reduce harmful misinformation on these social networks. I'm not sure how much people care about the advertising revenue, but while content can earn some, there's an incentive for people to produce more like it in search of any revenue at all. By reducing the financial incentive, we restrict content production to only the people who genuinely believe it and care.
Facebook said about 175,000 people followed at least one of the fake pages, which included 35 profiles on Instagram.
The company said the pages "engaged in hate speech and spread divisive comments on both sides of the political debate in the UK".
"They frequently posted about local and political news including topics like immigration, free speech, racism, LGBT issues, far-right politics, issues between India and Pakistan, and religious beliefs including Islam and Christianity.
"We're taking down these pages and accounts based on their behaviour, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action."
Again, clear evidence that dangerous groups intent on using disinformation tactics are not allied with any of our familiar political parties or issues of the day, but are instead only interested in increasing social division and creating a culture of fear, anger and hate.
This destruction of social cohesion will naturally lead to internal political strife and reduce the energy and effort available to operate on the world stage, which gives an increased political and economic benefit to whichever country is behind this campaign.
“We will always seek to discover which state or other actor was behind any malign cyber activity, overcoming any efforts to conceal their tracks,” Hunt will say, according to pre-released extracts of his speech.
Western countries issued coordinated denunciations of Russia in October for running what they described as a global hacking campaign. Russia has denied the allegations.
In the United States, a federal special counsel is investigating Russian interference in the 2016 presidential election and possible collusion with Donald Trump’s campaign. Moscow has denied any meddling and the U.S. president has said there was no collusion.
Hunt will say there has been no evidence that foreign states have interfered with British votes but that unnamed hostile states are intent on using cyberspace to undermine Western democracies.
I'm dubious about the claim that there is no evidence that foreign states have interfered with British votes; I suspect it says more about the UK's ineptitude at discovering such activity than about any lack of activity.
I'm also not sure about this strategy. I've seen no evidence so far that the denunciations have actually reduced the activity or changed the strategy of hostile states.
At this point, it is valuable to revisit why influence operations and propaganda can be so persuasive, and to use this research to counter those arguments. Again, according to research from RAND, propaganda (and resulting influence campaigns) are effective for the following five reasons:
- People are poor judges of true versus false information, and they do not necessarily remember that particular information was false.
- Information overload leads people to take shortcuts in determining the trustworthiness of messages.
- Familiar themes or messages can be appealing, even if they are false.
- Statements are more likely to be accepted if backed by evidence, even if that evidence is false.
- Peripheral cues, such as an appearance of objectivity, can increase the credibility of propaganda.

For those who use social media, knowledge is the greatest tool in combating influence operations. Social media users bear a greater responsibility to themselves and the American public to develop better means of detecting and dismissing influence attempts.
The difference between the Russian Internet Research Agency and the efforts here by China could not be more evident. China has a far more traditional campaign of promoting the best of China, one that bears a lot of similarity to the "Britain is GREAT" campaign from the UK Government.
But this list is interesting, and the research from RAND behind it is a fascinating read. I'm not sure that I agree with the finding, though. To claim that social media users have a responsibility to educate themselves feels like a losing strategy, especially in the face of these reasons for the effectiveness of propaganda. Instead, we need a citizenry able to trust some forms of media to provide real facts, and to assess information against that trustworthy standard.