Cyberweekly #213 - When is a vulnerability not a risk?
Published on Sunday, October 16, 2022
We’re moving the dial on the visibility of vulnerabilities.
More than ever, we can get constant feeds of the vulnerabilities in software, of the CVEs that affect us, and of the acres of supply chain exposure that most organisations have.
But making sense of those feeds is harder than ever. In many cases, what we’ve done is make our haystacks bigger in the hope of finding more needles.
My team uses a model around risk that requires us to articulate risks as something that has an Actor, a Vector and an Asset. An actor might have intent to compromise your asset, and they might have the capabilities to use a vector to do so. An asset might have a vulnerability to a given vector. But it’s only with the confluence of all three that it becomes an active risk. A vulnerability with no actors who possess the capability to exploit it isn’t a risk, it’s just a vulnerability. An asset with a vulnerability, but no actors with the capabilities to exploit it, isn’t a risk either.
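To make that confluence concrete, here’s a minimal sketch in Python. The class and field names are my own illustration, not part of any formal framework:

```python
# Toy model: a risk is only "active" when an actor with intent has a
# capability matching a vector that the asset is vulnerable to.
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    intent: set[str]        # names of assets this actor wants to compromise
    capabilities: set[str]  # vectors this actor is able to exploit

@dataclass
class Asset:
    name: str
    vulnerable_to: set[str]  # vectors this asset is exposed to

def active_risks(actors: list[Actor], asset: Asset) -> list[tuple[str, str]]:
    """Return (actor, vector) pairs where intent, capability and vulnerability all meet."""
    return [
        (actor.name, vector)
        for actor in actors
        if asset.name in actor.intent
        for vector in actor.capabilities & asset.vulnerable_to
    ]

crew = Actor("ransomware crew", intent={"payroll"}, capabilities={"phishing"})
payroll = Asset("payroll", vulnerable_to={"phishing", "sql-injection"})

# Only the phishing vector lines up; the SQL injection vulnerability has
# no capable actor here, so on this model it is not an active risk.
print(active_risks([crew], payroll))  # [('ransomware crew', 'phishing')]
```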
Of course, in the real world, we’re not playing chess. We don’t have complete visibility of every actor, what their intents are and what their capabilities are. Instead the world is more like a hidden-information game, where there might be actors that we don’t even know exist, and some of them might possess capabilities we didn’t know existed. [For fun, while checking I had these terms right, I discovered that there is a hidden-information variant of chess called Dark Chess.]
We can theorise or predict which of our assets are most valuable, and we can rely on threat intelligence to tell us what capabilities various actors possess, but in order to reduce risk to zero, we would have to assume that there is always an actor out there with intent and capability for each vulnerability. That’s potentially a fine risk position for a mythological world-class intelligence agency to take, but in the real world we have a limited budget and limited staff, and therefore we need to ruthlessly prioritise our vulnerabilities.
We therefore need to determine when a vulnerability isn’t a real vulnerability, or rather when it doesn’t pose a significant risk to us. Simply counting the number of vulnerabilities patched or managed is a bit of a false metric. We need to count those, but that’s just counting the work that we are doing. We should also aim to reduce the number of vulnerabilities that require action. We can do that with regular maintenance programmes: if we know that most systems are patched on a rolling 30-day cycle, then low and medium vulnerabilities (those with low actor intent, no workable exploits, or ones that are hard to exploit on our systems for some reason) can be left to be mopped up by the patching cycle. That reduces the vulnerabilities we actively work on to the high-criticality ones, and the ones that affect systems that are harder to patch.
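Sketched as code (illustrative, not prescriptive; the fields and the 30-day cycle are assumptions taken from above), the triage rule might look like this:

```python
# Hedged sketch of the triage rule: only high/critical issues with a
# workable exploit, or issues on systems the rolling patch cycle can't
# reach, need immediate action; everything else waits for routine patching.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    id: str
    severity: str            # "low", "medium", "high" or "critical"
    exploit_available: bool  # is there a workable public exploit?
    routinely_patched: bool  # will the rolling 30-day cycle cover this system?

def needs_immediate_action(v: Vulnerability) -> bool:
    if v.severity in ("high", "critical") and v.exploit_available:
        return True
    # Everything else is mopped up by the rolling patch cycle, unless it
    # sits on a system that the cycle can't reach.
    return not v.routinely_patched

backlog = [
    Vulnerability("CVE-A", "critical", exploit_available=True, routinely_patched=True),
    Vulnerability("CVE-B", "medium", exploit_available=False, routinely_patched=True),
    Vulnerability("CVE-C", "low", exploit_available=False, routinely_patched=False),
]

print([v.id for v in backlog if needs_immediate_action(v)])
# ['CVE-A', 'CVE-C'] — the rest wait for the 30-day cycle
```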
This focus on making sure we measure the right thing can help us move from a simple strategy of “patch all the things”, which has a lot of fire and fury but little impact, to a place where we can show the impact of our patching systems, and hopefully sleep better at night.
r2c blog — Software supply chain security is hard
https://r2c.dev/blog/2022/software-supply-chain-security-is-hard/
While there is some variance in how SCA tools approach scanning and alerting, fundamentally they all work in the same way: they look at your manifest files, lockfiles, and more, and compare them to a database to figure out how safe your open source dependencies are. This type of check is simple and noisy; it flags the packages you use that are vulnerable but doesn’t account for how you actually use those packages in practice.
I’ve spoken to security and developer teams big and small, and they’ve echoed the same sentiment: “SCA tools are false positive factories!” Take npm’s SCA tool, npm-audit: when it was released, it sparked widespread confusion across the community:
- “I just ran ‘create-react-app’, why do I have vulnerabilities already?”
- “Why are these vulnerabilities when these are just dev dependencies?”
- “Almost all of these npm-audit detected vulnerabilities are false positives”
And this is just one case where the tool didn't live up to expectation; there is a whole slew of SCA tools out there that evoke a similar reaction. Now, if you’re an AppSec engineer at a company tasked with securing hundreds of repositories, have limited political capital with developers, and every day sees something like this — that’s just so frustrating!
The SemGrep team are pretty smart and it’s a great tool. This highlights a real problem with the SBOM and supply chain movements: it’s easy to vaguely handwave and say “log4j is bad”, but in reality, vulnerabilities are more complex than that.
SemGrep’s supply chain tool tries to do exactly what they articulate here: it uses reachability analysis to determine whether or not the vulnerable functions are actually called, not just whether the library is linked. Of course, this is going to depend on higher-calibre vulnerability reports; we’ll need to be clearer about what is actually wrong in a given vulnerability, and for some vulnerabilities we might need to trace what data is passed through the system to determine whether an attacker can actually achieve an impact.
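As a toy illustration of the idea (this is not SemGrep’s actual engine, and badlib/parse_config are made-up names), a reachability check only raises an alert when the vulnerable function is actually called, not merely when the library is imported:

```python
# Naive reachability sketch: flag a known-vulnerable function only if the
# scanned source actually calls it. Real tools handle far more cases
# (from-imports, aliasing through variables, transitive calls, etc.).
import ast

VULNERABLE = {("badlib", "parse_config")}  # hypothetical advisory data

def reachable_vulns(source: str) -> set[tuple[str, str]]:
    tree = ast.parse(source)
    imported = {}  # local alias -> module name
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported[alias.asname or alias.name] = alias.name
    hits = set()
    for node in ast.walk(tree):
        # Match calls of the form alias.function(...)
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)):
            module = imported.get(node.func.value.id)
            if module and (module, node.func.attr) in VULNERABLE:
                hits.add((module, node.func.attr))
    return hits

print(reachable_vulns("import badlib\nbadlib.parse_config()"))  # vulnerable call found
print(reachable_vulns("import badlib\nbadlib.safe_call()"))     # empty: linked, not reached
```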
We’re also going to need to work out how we talk about this, because this form of analysis is, in my mind, perfect for triaging and prioritising remediation work; but if there is a known vulnerability in a library and a fix is available, you should still be looking to patch it eventually. Knowing that the vulnerability doesn’t apply in this case can mean that you don’t need to patch urgently or wake people up, but you’ll still want someone to pick it up and update the library as part of your regular hygiene maintenance.
Getting better by doing less – Public Strategist
https://www.publicstrategist.com/2009/10/getting-better-by-doing-less/
The starting point for a lot of this is using customer contact as a means of understanding what is working – or not working – in the wider organisation and then doing something about it. As Bill put it in an interview when he was still at Amazon,
You can have a great overall culture, with real empathy for the customer and passion for fixing the problems. You can have individual reps who say, ‘this customer is really upset, and I have to deal with it.’ I think we do that. What’s missing almost everywhere else is, even if you have the empathy and the passion and you address the customer’s problem, you haven’t really given good customer service in total. You haven’t done that until you have eliminated the problem that caused her to call in the first place.
Typically, contact centres need to respond to demand for contact from customers, but have no influence on the causes of the demand or any ability to communicate problems, still less to get them fixed. Those parts of the organisation which are generating the demand, meanwhile, may have no inkling of the impact that their activity or inactivity is having on customer contact. More importantly, they will have no means of using the pattern of contact as a tool for diagnosing the causes of contact. Amazon went from 360 reason codes which contact centre operators were supposed to record against incoming customer contact to a much shorter list which used customers’ own language and which was simple enough for operators to memorise. The central question turned out to be, ‘where’s my stuff?’. That might look obvious for an organisation which is essentially a web site sitting on top of a logistics operation, but it has much wider application – ‘what’s going on, and why hasn’t it happened yet?’ is behind an awful lot of contact.
This piece from 2009 by PubStrat came up via Twitter this week and reminded me of the way that we measure effectiveness and efficiency. It is relevant to IT service desks (closing tickets fast isn’t as good as reducing the number of tickets) as well as to the security analyst space (detecting more incidents isn’t necessarily a good thing; you want the number of things that become an incident to go down, after all).
Sometimes this creates perverse incentives for the people measuring things: after all, if you want “detected cyber incidents” to go down, you can just turn off your detection capability. Somehow you need people to be incentivised so that “ability to detect” goes up while “actual detections” goes down.
“Charbonneau Loops” and government IT contracting | Sean Boots
https://sboots.ca/2022/10/12/charbonneau-loops-and-government-it-contracting/
How do Charbonneau Loops happen?
In most cases, I should be really clear, they don’t involve actual corruption. Charbonneau Loops happen whenever you have:
1. Insufficient internal capacity to provide oversight directly
2. A small pool of established vendors
And, it almost goes without saying, you’re doing work with a sufficiently high level of complexity that it requires active oversight. This includes infrastructure construction projects (like highways), and it also includes IT implementation, client service delivery, and a whole range of other fields.
It doesn’t include everything; landscaping and gardening, for example (with apologies to gardeners!) doesn’t necessarily need the same level of oversight. If done poorly, it likely won’t affect people’s life and livelihood the way a collapsed bridge or failed benefit program IT system would.
In some areas – public sector IT, for example – “oversight” can refer to a range of things. In this context, it could include planning procurement activities, and developing documentation for these; it could include project management and coordinating the work of other vendors; it could include reviewing and approving other vendors’ work; and it could include more traditional oversight roles like security compliance or privacy assessments.
You can see #1 above show up when you see government organizations issue RFPs for “procurement support” (like Dan’s New York City example, above), for different kinds of “design authority” work, to write requirements, or to support project management offices.
I really like this definition, and I see it happening in a lot of other places as well. The hollowing out of expertise in the contracting organisation means that the only people capable of writing the procurement notices, defining the success criteria, and running the procurement end up being the same people who can actually deliver the outsourced contract.
I think there’s a danger in the solution that we just need to insource more expertise, though. Firstly, sometimes it’s right that big organisations (like governments) don’t have full-time experts in some of these areas. Running a permanent in-house team costs money as well, and with limited headcount, that has to be balanced against the other things the organisation could focus on.
Secondly, once you have an in-house team, “build it yourself” can start to become the default even when it would be more efficient and better to outsource. Things you build and run should be related to your core competence as an organisation, but once you have in-house developers or technologists, you’ll end up with them building their own solutions to all kinds of problems. And once you build something, you need to run it and maintain it.
Now, a good use of that capability can be to build prototypes to learn about a problem, then scrap them and use the learning to write a good outsourcing strategy and procurement notice: you know enough to do it well now. But how often have we heard that a prototype is “80% of the way there, so it’s going into production”?
If this stuff were easy, we’d all know and agree on the solutions, but it’s nice to see people like Sean articulating the problems for us.
Microsoft Office 365 Message Encryption Insecure Mode of Operation
Microsoft Office 365 Message Encryption (OME) utilises the Electronic Codebook (ECB) mode of operation. This mode is generally insecure and can leak information about the structure of the messages sent, which can lead to partial or full message disclosure. As stated in NIST’s “Announcement of Proposal to Revise Special Publication 800-38A”: “In the NIST National Vulnerability Database (NVD), the use of ECB to encrypt confidential information constitutes a severe security vulnerability; for example, see CVE-2020-11500.”
Microsoft Office 365 offers a method of sending encrypted messages. This feature is advertised to allow organizations to send and receive encrypted email messages between people inside and outside your organization in a secure manner. Unfortunately, OME messages are encrypted in the insecure Electronic Codebook (ECB) mode of operation.
This is a bit of a weird and difficult one. We know that ECB is generally considered insecure and shouldn’t be used. But there are legacy systems out there that can only read encrypted emails in the legacy ECB mode, and of course you can’t tell from an email address whether the person you are writing to is using a modern email client or a legacy one.
Secondly, the impact of ECB mode is really easy to demonstrate, as the researchers do in the article, by encrypting a picture with words on it and showing that the outlines still show through: patterns in the original data can be picked out of the encrypted data. But in reality, nobody emails pictures with just writing on them, and very few data files let an attacker reliably learn enough from patterns in the data. So although we know it’s bad, and it leaks information, the actual impact on encrypted emails is likely to be quite low.
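The underlying leak is easy to reproduce yourself. Here’s a minimal demonstration using the pyca/cryptography package: under ECB, identical plaintext blocks encrypt to identical ciphertext blocks, which is exactly the pattern leakage the image demonstration exploits:

```python
# ECB encrypts each 16-byte block independently with the same key, so
# repeated plaintext blocks produce repeated ciphertext blocks.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

# Two identical blocks followed by a different one.
plaintext = b"A" * 16 + b"A" * 16 + b"B" * 16
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
print(blocks[0] == blocks[1])  # True — the repetition survives encryption
print(blocks[0] == blocks[2])  # False
```

In an image, those repeated blocks are what let the outlines of the original picture show through the “encrypted” data.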
Should this be fixed? Probably. Will it cause issues with people using legacy versions of Office? Definitely. How bad is it? Well, it’s not great, but it’s also hard to see if it’s actually going to have the impact that the researchers claim.
The Dutch Tax Authority Was Felled by AI—What Comes Next?
https://spectrum.ieee.org/artificial-intelligence-in-government
Until recently, it wasn’t possible to say that AI had a hand in forcing a government to resign. But that’s precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.
When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority’s workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk.
In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.
“When there is disparate impact, there needs to be societal discussion around this, whether this is fair. We need to define what ‘fair’ is,” says Yong Suk Lee, a professor of technology, economy, and global affairs at the University of Notre Dame, in the United States. “But that process did not exist.”
Postmortems of the affair showed evidence of bias. Many of the victims had lower incomes, and a disproportionate number had ethnic minority or immigrant backgrounds. The model saw not being a Dutch citizen as a risk factor.
“The performance of the model, of the algorithm, needs to be transparent or published by different groups,” says Lee. That includes things like what the model’s accuracy rate is like, he adds.
I had missed this story, and was chatting about AI and bias with a Dutch gentleman at a conference last week when he told me about the Dutch scandal with their benefits system.
In this case, the AI was trained on existing data, which means that the bias almost certainly already existed in the humans doing the job. But what the AI did was systematise and scale that bias, ensuring it was applied across a much broader set of cases and creating human misery as a result.
As is covered in the article, the EU and governments around the world are worried about this. Technology, and AI in particular, is often cited as a huge efficiency gain that can reduce public spending at a time when money is tight. But rushing to deploy these systems without working out how to think about, talk about and manage things like bias will create unfair systems that ruin lives.
We probably can’t eliminate bias in these systems; the training data will come from humans, some of whom will be biased. But if we pretend it’s not a problem, and don’t train our staff to look for issues, feed them back and correct the biases, then there will be no hope for those negatively affected.
Frustrations with hiring into staff level | Julie Pagano
https://juliepagano.com/blog/2022/10/02/staff-eng-hiring
This reminds me of the bad old days when engineers had to become managers to progress in their career. It was a bad idea that led to some pretty negative outcomes. Thankfully, the industry learned from this, and most companies now treat engineering management as a separate role with a separate ladder and progression. I don't think architect needs to be spun off into a separate role, but I do think it needs to be considered one of multiple variations for staff level engineers. This enables individuals to find career progression where they can deliver a lot of value to the company in areas where they excel. Not providing this option leads to results like:
- Losing experienced engineers to companies that handle this better.
- Engineers becoming architects to get promoted and then contributing less value to the company because it's not the best use of their skills.
- Engineers staying where they are to avoid becoming architects, becoming disgruntled about stagnation in their career (and their pay), and no longer delivering at the level they used to.
Even architect-focused staff engineers could really benefit from more hands-on time. Technology changes all the time, and people will struggle to keep up with it and provide solid technical leadership if they don't have time to touch code any more. I've seen myself and others burn out in this role because our day-to-day responsibilities did not leave space for tinkering, so we were doing it on nights and weekends to keep up with our areas of specialization. On the flip side, I've seen architects not do this and become increasingly out-of-touch, and thus less valuable to the company.
This is a good reminder that we still don’t have an industry-wide recognised career description. In some companies the most senior engineers are staff engineers; in others they are seniors or leads. In some places architects are a distinct career path, and in others they’re just the natural progression of software engineers.
But what’s important is that seniority shouldn’t necessarily be connected to managing more people. Forcing engineers or specialists to develop skills they have no interest in does not end well.
The Fresh Phish Market: Behind the Scenes of the Caffeine Phishing-as-a-Service Platform | Mandiant
https://www.mandiant.com/resources/blog/caffeine-phishing-service-platform
Once an attacker has configured the necessary components of their main campaign tooling (as shown in Figure 10), they must then deploy their tooling (conventionally referred to as “phishing kits”) to their hosted campaign infrastructure. After that step is complete, all that is left to do is connect their deployed kits to their main Caffeine account via a special license token. At that point, an attacker is ready to go phishing!
Deployment of Caffeine Phishing Kits: Preparing the Bait
For most traditional phishing campaigns, phishermen generally employ two main mechanisms to host their malicious content. They will typically leverage purpose-built web infrastructure set up for the sole purpose of facilitating their phishing voyages, use legitimate third-party sites and infrastructure compromised by attackers to host their content, or some combination of both.
This is a nice breakdown of a phishing-as-a-service platform. It also shows what attackers need to manage the later phases of a phishing operation: some web hosting capability to actually host the phishing kit.
That’s a reminder that if someone is trying to compromise your hosting services, you might not really be the victim. Instead, the attackers might just want to use your infrastructure to run further campaigns in a deniable way.
UK spy chief: Britain must invest more to counter China’s tech dominance
https://www.politico.eu/article/uk-must-invest-more-to-see-off-chinas-tech-dominance-spy-chief-says/
In a speech to be delivered Tuesday, GCHQ’s Director Jeremy Fleming will warn the Chinese Communist Party is seeking to use technologies such as digital currencies and satellite systems to tighten its domestic grip and spread influence abroad.
“Technology has become not just an area for opportunity, for competition and collaboration — it’s a battleground for control, values and influence,” Fleming will say, according to an advanced copy of his speech at the RUSI think tank which has been shared with journalists. “Without the collective action of like-minded allies, the divergent values of the Chinese state will be exported through technology.”
This was an interesting speech, and I’ve heard this rhetoric before. We tend to think of technology as politically neutral, something that can be used for good and for ill by politically motivated users.
But in reality, our worldview shapes the way we build technology, the feature sets it offers and the interaction mechanisms that go with it. It also influences the way we govern those technologies.
Companies filled with western ideals build technologies and processes that are open, that provide strong audit logs, and that actively resist censorship or central interference. We run committees and leadership according to democratic ideals (if not always in practice), and enable voting on features, customer engagement and, of course, extraction of profit through sale of data, possession of IP and market forces.
Technologies from non-western companies have different ideals, whether enabling stronger authoritarian controls, assuming state intervention in governance, or extracting profit through low margins and high unit sales.
Even if we idealistically think that states are not competing on the global stage, the reality is that the way that the developers and owners of technology systems think will always influence the sort of product that gets built and how it works.
University of Glasgow - University news - AI-driven ‘thermal attack’ system reveals passwords in seconds
https://www.gla.ac.uk/news/headline_885914_en.html
Computer security experts have developed a system capable of guessing computer and smartphone users’ passwords in seconds by analysing the traces of heat their fingertips leave on keyboards and screens.
Researchers from the University of Glasgow developed the system, called ThermoSecure, to demonstrate how falling prices of thermal imaging cameras and rising access to machine learning are creating new risks for ‘thermal attacks.’
Thermal attacks can occur after users type their passcode on a computer keyboard, smartphone screen or ATM keypad before leaving the device unguarded. A passerby equipped with a thermal camera can take a picture that reveals the heat signature of where their fingers have touched the device.
This is just a lovely and cute attack. There is very little you can do about it, other than hope that the complexity and cost of carrying it out simply isn’t worth the effort for most attackers.