Cybersecurity has a strongly militaristic tone to it. We talk about attacks, weapons and actors, all with the cyber prefix of course.
But at its heart, the vast majority of cybersecurity activity isn't warlike or militaristic at all. The early hacking scene grew out of inquisitive youths exploring phone networks, systems and the emerging network of interconnected devices for their own stimulation, curiosity and personal gain.
Of course, in the 30-odd years since the mid 80s, everything has been commercialised and hyped into a billion-dollar business that is critical to everything we do.
But our language remains uniquely and specifically militaristic, which leads us to some very odd discussions at times. At the news that Google had spotted a chain of 11 0-days being used in an operation, one Twitter commentator said it "was as if an aircraft carrier appeared in the middle of the Pacific and nobody knows which country it belongs to".
Our mental models of attackers, and the capabilities they use, are derived from this language, so we talk about cyber-weapons and their use as if there are good parallels with military assets and deployments. But a piece of malware is a software program: it exists entirely digitally, and it can be reused and copied as many times as needed. This allows the defensive side to potentially see its use, and to either identify the vulnerabilities used and patch them, as Google did, or to duplicate those vulnerabilities and use them for their own operations, as APT31 is reported to have done.
The question then arises of whether we need legal or ethical controls around the appropriate use of these weapons, including whether Google should have kept quiet about what the US would probably argue was legitimate, authorised and legal use of 0-day exploits. But if we think about this as software, it rapidly becomes clear that we are discussing something that can be altered, copied and distributed digitally far more easily and quickly than any physical weapon. Furthermore, a simple change in defensive software can render an attacking piece of code completely inert and unable to affect a suitably patched infrastructure.
In order to have sensible conversations about the use and abuse of such software, we need to lean away from parallels with military language and idioms and be far clearer that we are talking about ethics in software engineering and society. "Is there any situation in which a global software company should knowingly ship vulnerable code to its customers?" is a far better and more interesting question, allowing for far more nuanced conversation than whether Google should disarm a Western cyber weapon with a simple patch.
But Western operations are recognizable, according to one former senior US intelligence official.
“There are certain hallmarks in Western operations that are not present in other entities … you can see it translate down into the code,” said the former official, who is not authorized to comment on operations and spoke on condition of anonymity. “And this is where I think one of the key ethical dimensions comes in. How one treats intelligence activity or law enforcement activity driven under democratic oversight within a lawfully elected representative government is very different from that of an authoritarian regime.”
Instead of focusing on who was behind and targeted by a specific operation, Google decided to take broader action for everyone. The justification was that even if a Western government was the one exploiting those vulnerabilities today, they will eventually be used by others, so the right choice is always to fix the flaw now.
In some cases, security companies will clean up so-called “friendly” malware but avoid going public with it.
“They typically don’t attribute US-based operations,” says Sasha Romanosky, a former Pentagon official who published recent research into private-sector cybersecurity investigations. “They told us they specifically step away. It’s not their job to figure out; they politely move aside. That’s not unexpected.”
If your offensive strategy is dependent on this behaviour, then you need a better strategy, because tech giants are increasingly global actors in their own right, and they will act in their own best interests (and by proxy the interests of their users), not according to geopolitical interests.
In this case, Google was able to detect the capabilities coming out of the C2 infrastructure. It's unclear exactly how, but Google's access at multiple points on the network probably gives it visibility that very few other actors have.
The most interesting point is that Google's Project Zero decided to identify and pass on the zero-days that affect the underlying platforms, both Apple's and Microsoft's, as well as the vulnerabilities in Google Chrome itself. Project Zero views itself as acting on behalf of all internet denizens rather than simply for Google; its attitude is that "what's good for the internet is good for Google".
Google’s security experts have not yet formally attributed this campaign to any specific group, and all attribution options are still on the table — such as the attacks being the work of a state-sponsored group or a hacker-for-hire private company.
What is, however, undisputed is the fact that the threat actor has shown very advanced capabilities, allowing it to discover and deploy zero-days across a wide variety of platforms and software.
This is a good writeup of a few announcements from Google about a highly advanced campaign and a set of critical bugs that form an impressive chain to exploit a device remotely.
There is a theory which states that if anyone will ever manage to steal and use nation-grade cyber tools, any network would become untrusted, and the world would become a very dangerous place to live in.
There is another theory which states that this has already happened.
What would you say if we told you that a foreign group managed to steal an American nuclear submarine? That would definitely be a bad thing, and would quickly reach every headline.
However, for cyber weapons – although their impact could be just as devastating – it's usually a different story.
We began with analyzing “Jian”, the Chinese (APT31 / Zirconium) exploit for CVE-2017-0005, which was reported by Lockheed Martin’s Computer Incident Response Team. To our surprise, we found out that this APT31 exploit is in fact a reconstructed version of an Equation Group exploit called “EpMe”. This means that an Equation Group exploit was eventually used by a Chinese-affiliated group, probably against American targets.
This isn’t the first documented case of a Chinese APT using an Equation Group 0-Day. The first was when APT3 used their own version of EternalSynergy (called UPSynergy), after acquiring the Equation Group EternalRomance exploit. However, in the UPSynergy case, the consensus among our group of security researchers as well as in Symantec was that the Chinese exploit was reconstructed from captured network traffic.
The case of EpMe / Jian is different, as we clearly showed that Jian was constructed from the actual 32-bit and 64-bit versions of the Equation Group exploit. This means that in this scenario, the Chinese APT acquired the exploit samples themselves, in all of their supported versions. Having dated APT31’s samples to 3 years prior to the Shadow Brokers’ “Lost in Translation” leak, our estimate is that these Equation Group exploit samples could have been acquired by the Chinese APT in one of these ways:
- Captured during an Equation Group network operation on a Chinese target.
- Captured during an Equation Group operation on a 3rd-party network which was also monitored by the Chinese APT.
- Captured by the Chinese APT during an attack on Equation Group infrastructure.
Absolutely fascinating read showing that advanced nation-state-backed actors are likely able to capture adversaries' exploits in use, decompile them and then reuse the code in their own way.
However, the overly dramatic "weapons" analogy in use here irritates me. There's a huge difference between a weaponised exploit and almost all military capabilities. "Cyber weapons", if you must call them that, are regularly caught and exposed, and the vulnerabilities they exploit are patched. There is no analogy here that we should be using.
Few outside law enforcement knew of Clearview’s existence back then. That was by design: The government often avoids tipping off would-be criminals to cutting-edge investigative techniques, and Clearview’s founders worried about the reaction to their product. Helping to catch sex abusers was clearly a worthy cause, but the company’s method of doing so — hoovering up the personal photos of millions of Americans — was unprecedented and shocking. Indeed, when the public found out about Clearview last year, in a New York Times article I wrote, an immense backlash ensued.
Facebook, LinkedIn, Venmo and Google sent cease-and-desist letters to the company, accusing it of violating their terms of service and demanding, to no avail, that it stop using their photos. BuzzFeed published a leaked list of Clearview users, which included not just law enforcement but major private organizations including Bank of America and the N.B.A. (Each says it only tested the technology and was never a client.) I discovered that the company had made the app available to investors, potential investors and business partners, including a billionaire who used it to identify his daughter’s date when the couple unexpectedly walked into a restaurant where he was dining.
Computers once performed facial recognition rather imprecisely, by identifying people’s facial features and measuring the distances among them — a crude method that did not reliably result in matches. But recently, the technology has improved significantly, because of advances in artificial intelligence. A.I. software can analyze countless photos of people’s faces and learn to make impressive predictions about which images are of the same person; the more faces it inspects, the better it gets. Clearview is deploying this approach using billions of photos from the public internet. By testing legal and ethical limits around the collection and use of those images, it has become the front-runner in the field.
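The matching approach described above boils down to comparing learned embedding vectors: a model maps each face photo to a vector, and two photos of the same person land close together. A minimal sketch of that comparison step (the function names and the 0.8 threshold are illustrative, not from any specific system):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    # Two photos are declared a match when their embeddings are
    # sufficiently close; real systems tune this threshold carefully.
    return cosine_similarity(emb_a, emb_b) >= threshold
```

The hard part, of course, is producing good embeddings in the first place, which is exactly where the billions of scraped training photos come in.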
This raises a troubling ethical question: where can this data come from, and what right did Clearview have to collect it? But equally, the intent here was good; the company is seeking to track down and help catch perpetrators of child sexual exploitation, and to prevent it.
However, the bit that caught my eye is that no matter how positive the intent, the billionaire who took this tool designed for law enforcement and used it on his daughter's date in a restaurant shows that we always have to assume that people will abuse our technologies, work out what that means, and decide how we can limit or prevent such abuse.
Fashion retailer FatFace has paid a $2m ransom to the Conti ransomware gang following a successful cyber attack on its systems that took place in January 2021, Computer Weekly has learned.
The ransomware operators had initially demanded a ransom of $8m, approximately 213 bitcoin at the prevailing rate, but were successfully talked down during a protracted negotiation process, details of which were shared with Computer Weekly after being uncovered by our sister title LeMagIT.
During discussions, the negotiator for the Covid-hit retailer told Conti’s representative that since shutting its bricks-and-mortar stores, it was making only 25% of its usual revenues from its e-commerce operation, so to pay an $8m ransom would mean the end of the business. This was rejected by Conti on the basis that FatFace’s cyber insurance policy, held with specialist Beazley Furlonge, covers extortion to the tune of £7.5m – substantially more than $8m.
As it closed out the negotiation, the Conti gang advised FatFace’s IT teams to implement email filtering, conduct employee phishing tests and penetration testing, review their Active Directory password policy, invest in better endpoint detection and response (EDR) technology – the gang apparently recommends Cylance or VMware Carbon Black for this – better protect the internal network and isolate critical systems, and implement offline storage and tape-based backup.
There's a lot to dig into in this story, from the fact that FatFace sent out a notification to customers with the typical "we take security seriously" lines, but atypically told customers that they were being informed of the breach confidentially and couldn't tell anyone (which went down about as well as you'd expect).
But Conti's tactics here model those of an advanced combination of security consultancy and ransomware operator. They refer to their teams as "red teams", and they gave FatFace a set of recommendations to implement to improve their systems and prevent another ransomware incident.
Furthermore, they exfiltrated 200GB of data, searched through it, and knew exactly what the cyber insurance coverage was, so that they could set the ransom accordingly.
In conversations between the victim and REvil, which started on March 14th, the Acer representative showed shock at the massive $50 million demand.
Later in the chat, the REvil representative shared a link to the Acer data leak page, which was secret at the time.
The attackers also offered a 20% discount if payment was made by this past Wednesday. In return the ransomware gang would provide a decryptor, a vulnerability report, and the deletion of stolen files.
At one point, the REvil operation offered a cryptic warning to Acer "to not repeat the fate of the SolarWind."
REvil's $50 million demand is the largest known ransom to date; the previous record was the $30 million ransom from the Dairy Farm cyberattack, also by REvil.
That's a big ransom demand. I wonder whether they'll pay, or what the impact will be; I've not seen any follow-up on this so far.
Today we welcome the announcement of sigstore, a new project in the Linux Foundation that aims to solve this issue by improving software supply chain integrity and verification. Installing most open source software today is equivalent to picking up a random thumb-drive off the sidewalk and plugging it into your machine. To address this we need to make it possible to verify the provenance of all software - including open source packages. We talked about the importance of this in our recent Know, Prevent, Fix post. The mission of sigstore is to make it easy for developers to sign releases and for users to verify them. You can think of it like Let’s Encrypt for Code Signing. Just like how Let’s Encrypt provides free certificates and automation tooling for HTTPS, sigstore provides free certificates and tooling to automate and verify signatures of source code. Sigstore also has the added benefit of being backed by transparency logs, which means that all the certificates and attestations are globally visible, discoverable and auditable.
Nice project. As has been demonstrated so many times recently, dependency management in most languages does not do enough signature verification.
Of course, the hard part of this is not knowing that a package was signed, but knowing that the signature came from the author you expected, which is probably the hardest thing to address for open source. In essence, just because a given dependency is signed doesn't mean it wasn't signed by an attacker.
This starts to give us tools we can use, such as cosign, the first prototype from the project, if you want to try it.
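To see the gap sigstore is filling, it helps to look at what most package ecosystems do today: at best, verify a pinned digest. A minimal sketch (function names are mine, not sigstore's):

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Hex digest of an artifact's contents."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    # compare_digest avoids leaking match information through timing.
    return hmac.compare_digest(sha256_digest(data), pinned_digest)
```

A digest check like this only proves the artifact wasn't modified in transit; it says nothing about who published it. Binding a signature to a verifiable identity, recorded in a public transparency log, is exactly the extra step sigstore aims to automate.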
These are the main topics of this Awesome Kubernetes (K8s) Security List. Everything related to the Security of Kubernetes (and its components such as CoreDNS, etcd) either for learning, breaking or defending it, will be added down below. If you have any other good links or recommendations, feel free to submit a PR!
This is a great curated list of resources for security people, engineers or managers who need to understand more about Kubernetes.
NSA’s Best Scientific Cybersecurity Research Paper Competition was initiated in 2013 with the intent to encourage the development of scientific foundations in cybersecurity and support enhancement of cybersecurity within devices, computers, and systems through rigorous research, solid scientific methodology, documentation, and publishing. Papers published in peer-reviewed journals, magazines, or technical conferences are eligible for nomination.
The National Security Agency’s Research Directorate selected “Spectre Attacks: Exploiting Speculative Execution” as the winner of its 8th Annual Best Cybersecurity Research Paper competition.
This looked at the best papers from 2019.
Spectre was the start of speculative execution attacks hitting the mainstream, and the paper is worth reading if you haven't already.
If you'd like to see the competition it was chosen from, the nomination list is a good source of further reading.
The thesis is that we can deploy a number of canary Windows services and keep track of how many are running. If these services are stopped (via net stop, sc or similar) outside of a host shutdown, we are then able to respond automatically. This automated response involves first triggering a canary token and then hibernating the host.
By doing so we:
- Alert the defensive function with a high-signal alert.
- Minimize the impact by reducing the likelihood of successful encryption.
- Give the best chance of recovery.
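The decision logic described above is simple enough to sketch; the function and parameter names here are illustrative, not from the original project:

```python
def should_respond(expected: int, running: int, host_shutting_down: bool) -> bool:
    """Trigger only when canary services have stopped outside a normal shutdown."""
    return running < expected and not host_shutting_down

def respond(fire_canary_token, hibernate_host):
    # Alert first (the high-signal canary token), then hibernate the host
    # to halt any encryption in progress.
    fire_canary_token()
    hibernate_host()
```

The ordering matters: the token fires before hibernation so the defensive team gets the alert even if the host never comes back cleanly.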
Absolutely lovely use of canaries to detect some of the first actions that malware will take, and act to prevent the malware continuing (by hibernating the machine).
The question for me is what a typical user would do if their machine simply hibernated or shut itself down. Even an IT professional's first instinct would almost certainly be to turn the machine straight back on.