I hope you are all starting to relax and get into the holiday spirit. This letter is a little late today because I was at a Christmas Party last night and it's taken a bit of time to be able to compose my thoughts!
It's been quite a year in 2018, and I'm looking forward to next year and seeing what happens. We've seen such a variety of things in 2018, from massive botnets, malware campaigns and phishing campaigns, to companies selling us security on the blockchain and AI-driven firewalls.
This week, the thing that struck me most was a conversation over drinks about people's predictions for 2019. One of the things that might totally change the security landscape next year is the increase in diplomatic pressure on country-level threat actors at the highest level. It sends a serious message to smaller countries that they will be detected and called out on the world stage, and signals a new seriousness about telling the world what we know about the big countries as well. I'm interested to see what the response is in the new year, and whether it results in a quieter, safer year for us all. At this time of year, one can only hope.
Anyway, more about that next week as I prepare a backward look at some of the biggest stories, including some of my favourites from the year, and a forward look at people's predictions for 2019.
we found that 34 users accessed the REDACTED using single-factor authentication well past 14 business days, with some users not using CACs to access the REDACTED for up to 7 years.
At the REDACTED, a March 2018 scan revealed that REDACTED of the REDACTED vulnerabilities identified on a January 2018 network scan remained unmitigated. The REDACTED vulnerabilities consisted of REDACTED critical and REDACTED high vulnerabilities.
Although the vulnerability was initially identified in 2013, the REDACTED still had not mitigated the vulnerability by our review in April 2018.
The REDACTED data center manager stated that he was not aware of the requirement to secure the server racks and keys, but considered the existing security protocols to be sufficient because the REDACTED limited who had access to the data center.
However, the REDACTED administrators stated that the REDACTED (network security device) used to protect the classified network lacked sufficient capacity (the amount of data that is able to be processed through a system) to support required intrusion detection and prevention configuration settings. Although REDACTED officials submitted a request in December 2017 to purchase technology that would support intrusion detection and prevention capabilities, the funding request had not been approved as of September 2018.
However, officials at the REDACTED, did not repair a known security issue with one of the facility’s doors. REDACTED security officials stated that the door’s sensor erroneously showed that the door was closed and the security sensor engaged when it was not. The security site lead at REDACTED stated that the door sensors have been a problem for about 4 years.
This is an amazing report into a system that is entirely classified, mostly hosted on military bases and obviously "did not follow the process". The report is worth reading for the redactions alone; there's a great table about known vulnerabilities in which every field is redacted!
The procedures and processes all state that certain things must be true of the systems, but there isn't the budget, the people or the capacity to actually implement those processes. I wonder how many of our own processes are, in effect, entirely unrealistic in their application on the ground, and whether we ever know.
This report doesn't call out the processes or procedures as broken, just that the military bases weren't meeting those processes. It's also worth being aware that they only audited 5 of the 104 DoD locations, and all 5 failed.
The NCSC assesses that it is highly likely that APT 10 has an enduring relationship with the Chinese Ministry of State Security, and operates to meet Chinese State requirements. Given the high confidence assessment and the broader context, the UK government has made the judgement that the Chinese Ministry of State Security was responsible.
This is the first time that the UK government has publicly named elements of the Chinese government as being responsible for a cyber campaign. It has previously attributed:
the WannaCry ransomware incident to North Korean actors;
a multi-year computer network exploitation campaign targeting universities around the world, including the UK, to the Mabna Institute based in Iran; and
a series of attacks including NotPetya, the WADA hack and leak and BadRabbit to the GRU (Russian Military Intelligence).
This is a clear change of tactic on the global diplomatic front. For years, the Five Eyes governments have watched the behaviours of advanced state actors such as the MSS, the GRU and North Korea, and have kept quiet about what they see. We can only assume that there was some great information warfare plan going on that included statements like "They know that we know that they know...".
This has clearly not been working. Some of the smart people working in the security agencies protecting these systems have constantly talked about the scale of the challenge of defending such a wide base of systems, including suppliers, critical national infrastructure and so forth. Ciaran Martin, the CEO of the NCSC, has been quoted as saying "It's not if, it's when the next big hack happens", and I agree.
So far, countries have been able to continue these attacks because there were high levels of deniability, and a determination that the UK/USA intelligence teams wouldn't say anything due to their secrecy.
I'd be really curious to know whether the decision to change tactic, to start calling out countries for their attacks, and to be public about it, was driven not just by the scale of the attacks, but by the recognition that post-Snowden, the general populace is aware that spies... well, spy on stuff. I suspect that a lot of the hesitance prior to Snowden was about not wanting to reveal the magnitude of the capability they do have, because nobody knew how it would be taken. I think it's fair to say that post-Snowden, not a lot changed, and the public gave a collective large shrug about it.
Anyway, this public attribution of nation state level actors, and the commitment to say that they will "expose your actions" is quite a change in diplomatic attitude, and I'm curious to see what 2019 brings as a result.
If you ask young people about how they use their social ties online, two-thirds of them say that when stressful or horrible things happen to them, they reach out for social support. We are really at risk when we have hard and fast rules – and when these things are wrapped up in what it means to be a ‘good’ parent – you can really have pretty negative downstream consequences for children’s social lives.”
This is interesting because it matches other digital cultural issues. People who believe that screentime is bad or that “online friends aren’t real friends” tend to dismiss the positive power of learning via screens or wide social networks that provide support.
Equally in digital, we see unbelievers who tell us that digital services aren't real government services, or that this agile stuff is fine for simple problems but doesn't solve the real thing. This is the same cultural resistance to change and the same generational gap in understanding, just happening in a different field.
The transport secretary, Chris Grayling, who on Thursday said “substantial drones” had caused the chaos, admitted on Friday that it was uncertain whether there was more than one. He denied he had ignored warnings, and said he was planning to hold talks with airports soon to discuss the lessons of Gatwick and try to prevent similar disruption.
This attack was absolutely fascinating from an attack-modelling perspective. It was low cost and easy to carry out, and it imposed a massively larger cost on the target than on the attacker. The fact that nobody has claimed responsibility makes me think it was either a lone individual with a personal grudge, or an accident or joke that got out of hand.
The media and various others are calling for tougher measures such as a "wider exclusion zone", but this wasn't a drone flying at the edge of, or just outside, the exclusion zone that nearly caused an accident; it was somebody deliberately breaking the existing law. Extending the exclusion zone therefore makes no difference to the attacker in terms of the cost of the attack.
It will be interesting to see how this develops and what they decide to prosecute for, based on the penalty they wish to apply. The biggest control here is a strong punitive sentence, designed to deter others from trying this in future.
To launch an effective threat hunting program, you also need access to the right data. In terms of efficiency and accuracy, this should consist of internal data from the company mixed with external deep web, dark web, open source and third-party threat intelligence that provides context about threats manifesting through global cybercrime networks.
Threat hunting is here to stay, apparently. But if you read between the lines here, it's pretty clear that threat hunting is only really useful in your organisation if you've already got the basics done. You need the capability and capacity to actually handle the results of threat hunting before you can get any value from it.
I'm seeing a lot of government departments who are happily investing in new shiny technologies and processes, like threat hunting, malware reversal, or AI/Blockchain technology, but lack the capacity to monitor their network, gather threat intelligence feeds, or patch or fix the things they find.
FireEye augments our expertise with an internally-developed similarity engine to evaluate potential associations and relationships between groups and activity. Using concepts from document clustering and topic modeling literature, this engine provides a framework to calculate and discover similarities between groups of activities, and then develop investigative leads for follow-on analysis. Our engine identified similarities between a series of intrusions within the engineering industry.
As well as being an interesting write-up of a new threat actor and set of practices on the scene, I was taken by this snippet. As an analytical capability, a system that can look at threat actors' behaviours and known reports on them, and perform similarity analysis, should find cases where different tracked actors may in fact be the same actor.
Of course, we are assuming that there aren't actors out there who have been trained by the same trainers, that there aren't skilled adversaries who are capable of pretending to be another actor, and that there aren't actors who change their MO on a regular basis to avoid detection.
But this capability lifts the ability of defenders to track attackers and understand their motives and processes a little better.
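FireEye don't publish the internals of their engine, but the core idea borrowed from document clustering, representing each tracked group as a bag of observed behaviours and scoring pairwise similarity, can be sketched. Everything in this example (the actor names, the TTP labels, the 0.6 merge-candidate threshold) is illustrative and assumed, not taken from the report:

```python
from collections import Counter
from math import sqrt

# Hypothetical behavioural "documents": each tracked actor is described by
# the techniques observed in reporting. Names and labels are made up.
actors = {
    "TEMP.Alpha": ["spearphish", "powershell", "credential-dump", "webshell"],
    "TEMP.Beta":  ["spearphish", "powershell", "credential-dump", "rdp"],
    "TEMP.Gamma": ["usb-implant", "firmware-tamper", "satellite-c2"],
}

def cosine_similarity(a, b):
    """Cosine similarity between two bags of observed behaviours."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = sqrt(sum(v * v for v in va.values())) * \
           sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

# Compare every pair of tracked actors and flag pairs above an
# (arbitrary, illustrative) threshold as candidates for the same actor.
names = list(actors)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        score = cosine_similarity(actors[names[i]], actors[names[j]])
        if score > 0.6:
            print(f"possible overlap: {names[i]} / {names[j]} ({score:.2f})")
```

A real system would weight rarer techniques more heavily (TF-IDF style) so that two actors sharing an unusual implant score higher than two actors who both merely send spearphish, but the pairwise-scoring shape is the same.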
Understanding that capacity planning is an enabling constraint, and cost of delay is a governing constraint, quickly helped me see other examples of enabling constraints.
This is recommended reading, as is the entire series here. Understanding that constraints can enable teams, systems and projects seems kind of obvious, but a lot of the time we think of putting constraints into systems to control bad behaviour, rather than to enable good behaviour. The confusion in security controls, and why many of them don't work, is that some are governing constraints (firewall rules) and some are enabling constraints (AWS credential management).
Where a security control allows users to do something that they would otherwise do badly or in shadow IT (spin up new servers, create infrastructure), it is an enabling constraint, and we need to understand which of these domains our security processes and tools operate in.
Composer, and its database, originally started their lives in Guardian Cloud – a data centre in the basement of our office near Kings Cross, with a failover elsewhere in London. Our failover procedures were put to the test rather harshly one hot day in July 2015
This is a good read about a long-running technical migration project: the planning, the mistakes and the final result.
But it's also worth noting that the Guardian originally decided to build their own internal cloud in their data centres, discovered that running your own cloud takes a lot of operational experience, and moved instead to AWS. It still astonishes me how many times I have to make the same argument with people: running your own cloud is a terrible, terrible idea.