Cyberweekly #232 - Are you radiating your intent?
Published on Sunday, November 19, 2023
The story of the Mirai creators is one of the most interesting I've read all year, and while it would be a stretch to say that they had good intentions with every step, I think it is clear that they didn't really intend to end up where they did, but every individual step felt like a logical step forward.
Our intention can really matter in digital and security, and one of the things that I commonly tell my teams is to be explicit about what they are trying to achieve. I use the term "Radiate intent", and I've found in numerous meetings over the years that telling someone not what you want them to do, but what you are trying to achieve first, can often massively unlock better challenge and support from others.
When you ask a question about "What's the best way to inspect the IP of an authenticating account", then people might explain the answer to that question. But if you are clearer about saying "I want to confirm that logins as administrators are coming from a trusted source", then the answer might be "Well, you don't want to check IP, have you considered PassKeys" or something like that.
Radiating intent is important but difficult because many of the people we work with are short on time, patience and attention, so what can feel like extra time spent explaining why you want something can feel like you are wasting peoples time. But too often our actual intent is hidden to others, who have to infer it from our actions.
Being clear and honest with others about your intent also helps you to decide whether your actions match your intent, and they help you to be honest and clear with yourself about what your intent actually is. That level of self knowledge can help you to set a good constructive intent and vision for the work you are setting out to do.
It's far harder to just take the next step in what is ultimately a bad plan if you have a clear vision of where you want to get to at the end.
- Richer functionality than email,
- Lacking centralized security gateways and other security controls common to email and
- Unfamiliar as a threat vector to your average user compared with email.
-
The Wikipedia link must contain a reference at the end of the first paragraph.
-
The first word of the second paragraph in the Wikipedia article must be a top-level domain (TLD) such as in, at, com, net, us, etc.
-
The above two conditions must appear in the first 100 words of the Wikipedia article.
The Mirai Confessions: Three Young Hackers Who Built a Web-Killing Monster Finally Tell Their Story | WIRED
https://www.wired.com/story/mirai-untold-story-three-young-hackers-web-killing-monster/
One asked how this group of young adults with no criminal records had justified to themselves carrying out such epic acts of digital disruption. Paras answered for all of them, explaining how incremental it had all felt, how easy it had been to graduate from commandeering hundreds of hacked computers to thousands to hundreds of thousands, with no one to tell them where to draw the line. “There was never a leap,” he says. “Just one step after another.”
Another student asked how they had kept going for so long—how they believed they could evade the FBI even after they had been raided. This time it was Dalton who answered, overcoming his anxiety at speaking in front of crowds, in part thanks to better treatments that have helped to alleviate his stutter. He explained to the class that they had simply never faced an obstacle to their hacking careers that they hadn’t been able to surmount—that, like teenagers who have no experience of aging or death and therefore believe they’ll live forever, they had come to feel almost invincible.
Throughout the presentation, Shapiro says, he was struck by the youthful nervousness of the three Mirai creators and the fact that, even as they spoke, they never turned on their webcams. The hacker threat that he’d once been sure must be the Russians, that had felt so large and powerful, was just these “young boys,” he realized. “Young boys who don’t want to show their faces.”
This is an absolutely fabulous read, really detailing the experiences of the people behind the Mirai botnet. This little quote is a perfect example of how each step just felt obvious to them, and how you can see them digging further and further down.
NCSC Annual Review 2023 - NCSC.GOV.UK
https://www.ncsc.gov.uk/collection/annual-review-2023
This effort has been an iterative one, as initial prompts used included ‘cyber security’, ‘future’ and ‘technology’. These prompts alone generated the stylised green coding, dark quasi-dystopian images and men in hoodies hunched over laptops which we have become accustomed to, reinforcing a stereotypical representation of cyber security.
When asked to show people within these images, biases were common, too.
When we amended the prompts to incorporate ‘inclusion’, ‘open and resilient society’ and ‘diversity’, the images began to change – and with our design agency, we created a front cover which better aligns to the kind of future the NCSC aims to shape for the whole of society.
These tools will have their uses but what this exploration reinforces is that an inclusive, diverse, and open future of cyber security requires our collective intent – it will not happen organically, without effort.
Lots of interesting tidbits in here, as always, from the numbers on how NCSC’s takedown or vulnerability reporting services are doing, to the focus on scaling the effort. It’s clear that NCSC is pushing that cybersecurity is something that the entire community needs to do, and it’s job is to catalyze, inspire, and direct, but that it cannot be all things to all people, which is a natural maturation of an organisation this size.
But I thought the comments around the encoded biases in AI were really interesting here. Of course we know that it’s a cliche to depict hackers in hoodies with green screens. But AI is really good at spotting and regurgitating cliches, and that can bake them into our writing, our images and soon our videos and audio in a way that we may not question.
As they say here, this reinforces that intent really matters in the use of AI, and we have to be explicit about the kinds of things we want to see come out of it.
AlphV files an SEC complaint against MeridianLink for not disclosing a breach to the SEC (2)
Earlier today, AlphV added MeridianLink to their leak site. MeridianLink ( MLNK ) is the provider of a loan origination system and digital lending platform for financial institutions. AlphV’s listing has been temporarily removed to be updated, but DataBreaches has learned some additional details from someone involved in the attack.
The attack was last Tuesday, November 7. According to AlphV, they did not encrypt any files, but did exfiltrate files. MeridianLink was aware of it the day it happened. According to AlphV, no security upgrades were made following the discovery, but “once we added them to the blog, they have patched the way used to get in,” DataBreaches was told.
[…]
AlphV reported MeridianLink to the SEC for alleged failure to timely file.
AlphV wrote:
“We want to bring to your attention a concerning issue regarding MeridianLink’s compliance with the recently adopted cybersecurity incident disclosure rules. It has come to our attention that MeridianLink, in light of a significant breach compromising customer data and operational information, has failed to file the requisite disclosure under Item 1.05 of Form 8-K within the stipulated four business days, as mandated by the new SEC rules.
This is an interesting turn of events. Ransomware providers are seeking ways to continue to monetise their breaches. SEC rules don’t affect all organisations, but after the SEC investigation into Solarwinds, and the fact that MeridianLink is likely a financially regulated company, this sort of extortion tactic feels like something that ransomware operators can do.
We’ve already seen examples in the EU of ransomware operators saying that if the company pays the ransom to return the data, the company will avoid GDPR fines (which is false, once breached, the company is legally required to report itself to a data protection registrar, regardless of whether it pays the ransom or not).
But you can imagine that ransomware operators self-reporting the breach to the data protection registrar, or at least threatening to in order to increase the number of companies just paying the ransom to make the whole thing go away. This would be especially true of the large number of SME companies that get attacked, as they may well fear the potential fines more than the bigger enterprises.
NSA and ESF Partners Release Recommended Practices for Software Bill of Materials Consumption > National Security Agency/Central Security Service > Press Release View
The National Security Agency (NSA), Office of the Director of National Intelligence (ODNI), the Cybersecurity and Infrastructure Security Agency (CISA), and industry partners have released a cybersecurity technical report (CTR), “Securing the Software Supply Chain: Recommended Practices for Software Bill of Materials Consumption. ” The guidance in this release aids software developers, suppliers, and customer stakeholders in ensuring the integrity and security of software via contractual agreements, software releases and updates, notifications, and mitigations of vulnerabilities.
[…]
The co-authors of the ESF report observe an increase in cyberattacks that highlight weaknesses within software supply chains. This in turn increases the potential for supply chains to be weaponized by national state adversaries who can access software via several means including, but not limited to, the following: exploitation of design flaws, incorporation of vulnerable third-party components into a software product, infiltration of the supplier's network with malicious code prior to the final delivery of the product, and injection of malware within the software deployed in the customer environment.
Following these observations, the report provides guidance in line with industry best practices and principles, including managing open source software and software bills of materials (SBOM) to maintain and provide awareness about the security of software. Specifically, the report details SBOM consumption, lifecycle, risk scoring, and operational implementation with the goal of increasing transparency in the software management cycle and giving organizations access to risk information.
One of my team members recently remarked that SBOM’s are a bit like ingredients lists on mass market food. Almost nobody looks at it unless you have a need to, at which point you tend to really need to.
Getting SBOM’s off of the ground requires signficant government intervention because a lot of the time, they’re effort to create, and nobody is going to read them. That creates an inhibitor for organisations to spend time and energy actively creating and maintaining them for their software.
This guide goes about things the other way, instead of focusing on why and how you might create and maintain an SBOM, this report sets out how you might go about making use of one. It goes into detail about how you can embed that into your risk system for determining which products you use have the highest risks.
It’s a nice idea in theory, but I doubt that most companies actually have a mature risk function for their products that this can embed into. However, when there’s news of a new vulnerability that you should do something about, having done the things in this report will make it significantly easier to assess and respond effectively.
Announcing Microsoft Secure Future Initiative to advance security engineering | Microsoft Security Blog
We must continue to enable customers with more secure defaults to ensure they have the best available protections that are active out-of-the-box. We all realize no enterprise has the luxury of jettisoning legacy infrastructure. At the same time, the security controls we embed in our products, such as multifactor authentication, must scale where our customers need them most to provide protection. We will implement our Azure tenant baseline controls (99 controls across nine security domains) by default across our internal tenants automatically. This will reduce engineering time spent on configuration management, ensure the highest security bar, and provide an adaptive model where we add capability based on new operational learning and emerging adversary threats. In addition to these defaults, we will ensure adherence and auto-remediation of settings in deployment. Our goal is to move to 100 percent auto-remediation without impacting service availability.
[…] Second, we will extend what we have already created in identity to provide a unified and consistent way of managing and verifying the identities and access rights of our users, devices, and services, across all our products and platforms. Our goal is to make it even harder for identity-focused espionage and criminal operators to impersonate users. Microsoft has been a leader in developing cutting-edge standards and protocol work to defend against rising cyberattacks like token theft, adversary-in-the-middle attacks, and on-premises infrastructure compromise. We will enforce the use of standard identity libraries (such as Microsoft Authentication Library) across all of Microsoft, which implement advanced identity defenses like token binding, continuous access evaluation, advanced application attack detections, and additional identity logging support. Because these capabilities are critical for all applications our customers use, we are also making these advanced capabilities freely available to non-Microsoft application developers through these same libraries.
To stay ahead of bad actors, we are moving identity signing keys to an integrated, hardened Azure HSM and confidential computing infrastructure. In this architecture, signing keys are not only encrypted at rest and in transit, but also during computational processes as well. Key rotation will also be automated allowing high-frequency key replacement with no potential for human access, whatsoever.
This is a bold letter, that covers Microsoft’s plans to ensure that it is producing quality and secure software that runs so much of the world.
Couple of things that really stood out, firstly that they are starting to recognise how powerful secure defaults are. The number of times that we see examples of compromises where it could have been prevented simply had the users turned on the security features that they were entitled to, but didn’t know how to. Enabling security features by default will massively improve that.
Secondly, the recognition of how important user and administrator identity is is quite nice. Of course subject to the attack a few months ago has also made Microsoft realise how critical their own signing keys are to their own security. Here’s hoping they’ll also invest in providing patterns and capabilities to enable their customers to do the same
AI Chatbots Just Showed Scientists How to Make Social Media Less Toxic
https://www.businessinsider.com/ai-chatbots-less-toxic-social-networks-twitter-simulation-2023-11
On a simulated day in July of a 2020 that didn't happen, 500 chatbots read the news — real news, our news, from the real July 1, 2020. ABC News reported that Alabama students were throwing "COVID parties." On CNN, President Donald Trump called Black Lives Matter a "symbol of hate." The New York Times had a story about the baseball season being canceled because of the pandemic.
Then the 500 robots logged into something very much (but not totally) like Twitter, and discussed what they had read. Meanwhile, in our world, the not-simulated world, a bunch of scientists were watching.
The scientists had used ChatGPT 3.5 to build the bots for a very specific purpose: to study how to create a better social network — a less polarized, less caustic bath of assholery than our current platforms. They had created a model of a social network in a lab — a Twitter in a bottle, as it were — in the hopes of learning how to create a better Twitter in the real world. "Is there a way to promote interaction across the partisan divide without driving toxicity and incivility?" wondered Petter Törnberg, the computer scientist who led the experiment.
[…]
But Törnberg's work could accelerate all that. His team created hundreds of personas for its Twitter bots — telling each one things like "you are a male, middle-income, evangelical Protestant who loves Republicans, Donald Trump, the NRA, and Christian fundamentalists." The bots even got assigned favorite football teams. Repeat those backstory assignments 499 times, varying the personas based on the vast American National Election Studies survey of political attitudes, demographics, and social-media behavior, and presto: You have an instant user base.
Then the team came up with three variations of how a Twitter-like platform decides which posts to feature. The first model was essentially an echo chamber: The bots were inserted into networks populated primarily by bots that shared their assigned beliefs. The second model was a classic "discover" feed: It was designed to show the bots posts liked by the greatest number of other bots, regardless of their political beliefs. The third model was the focus of the experiment: Using a "bridging algorithm," it would show the bots posts that got the most "likes" from bots of the opposite political party. So a Democratic bot would see what the Republican bots liked, and vice versa. Likes across the aisle, as it were.
All the bots were fed headlines and summaries from the news of July 1, 2020. Then they were turned loose to experience the three Twitter-esque models, while the researchers stood by with their clipboards and took notes on how they behaved.
Absolutely fascinating use of a large-language-model. It will of course have a number of issues since telling a large language model “You are a X person” doesn’t actually do that, it just starts a document that the large language model thinks is statistically likely to continue.
But in terms of generating and testing hundreds or thousands of independent agents, using an LLM in this manner is really interesting, and it’s nice to see someone doing something really positive and different with LLMs.
Hacking Google Bard - From Prompt Injection to Data Exfiltration · Embrace The Red
https://embracethered.com/blog/posts/2023/google-bard-data-exfiltration/
Recently Google Bard got some powerful updates , including Extensions. Extensions allow Bard to access YouTube, search for flights and hotels, and also to access a user’s personal documents and emails.
So, Bard can now access and analyze your Drive, Docs and Gmail! This means that it analyzes untrusted data and will be susceptible to Indirect Prompt Injection.
I was able to quickly validate that Prompt Injection works by pointing Bard to some older YouTube videos I had put up and ask it to summarize, and I also tested with
Google Docs
.Turns out that it followed the instructions
This was disclosed to Google and promptly fixed (pun intended), but it’s worrying that basic indirect prompt injection like this can be left in such a system.
Anything that can scan your internal company data can almost certainly also be fooled into believing that the document is also part of the prompt.
Applying AI to any data that isn’t structured or is provided by end users will be a risk you’ll need to carefully consider.
Phishing Slack for persistence and lateral movement
https://pushsecurity.com/blog/phishing-slack-persistence/
While IM platforms were initially used solely for internal communications, organizations quickly realized that IM platforms could be used to communicate with external groups, individuals, freelancers, and contractors, with the hope of fewer emails and more instant communications.
We now have Slack Connect and Microsoft Teams external access to support this, with Slack Connect introduced in June 2020 and Teams introducing it in January 2022. This external access has increased the attack surface of these platforms considerably.
Despite decades of security research, email security appliances and user security training, email-based phishing and social engineering is still commonly successful. Now we have instant messenger platforms with:
There’s also a sense of urgency associated with IM messages due to the conversational nature compared with emails. Combined with a history of increased trust, we have the ingredients for increased social engineering success.
Some really powerful examples here. I hadn’t quite realised that an application with the right permissions could spoof the look of posting as a user in quite this way.
The controls to apply here are counterintuitive to the very notion of tools like Slack, it’s to lockdown the applications that your staff can install and validate and verify them so that you don’t enable every SaaS app in the world to just post messages into your core team communication channels.
Slack, and may other core comms systems, need to ensure that their permissions are granular enough that it’s easier to tell what a given app is doing and can do, and make it easy for administrators to only grant the permissions that are necessary
eSentire | The Wiki-Slack Cyberattack Analysis by eSentire’s Threat…
https://www.esentire.com/blog/the-wiki-slack-attack
The Wiki-Slack Attack is an authoritative user redirection attack to drive victims to an attacker-controlled website. From there, the attacker must provide their own malware for users to download and execute. Browser-based attacks, of course, shouldn’t be underestimated, given they are one of the primary types of initial access malware observed leading to Ransomware today, including BatLoader, Nitrogen, Gootloader, SolarMarker, and SocGholish.
The attack starts when a Wikipedia link is shared in Slack under three conditions:
This will cause Slack to mishandle the whitespace between the first and second paragraph, spontaneously generating a new link in Slack (Figure 1). If we look directly at the Wikipedia page, we can see this generated link doesn’t exist there (Figure 2).
Attacking users by causing or taking over the generated link, meaning that slack (or I’m sure Teams and other apps that provide previews) will generate a preview that looks enticing but is actually malicious is incredibly devious.
As an attacker, if you control the destination webserver, you can likely detect the slack instance that is generating a preview as well, meaning you could return the malicious content only to specific users
Zimbra 0-day used to target international government organizations
In June 2023, Google’s Threat Analysis Group (TAG) discovered an in-the-wild 0-day exploit targeting Zimbra Collaboration, an email server many organizations use to host their email. Since discovering the 0-day, now patched as CVE-2023-37580 , TAG has observed four different groups exploiting the same bug to steal email data, user credentials, and authentication tokens. Most of this activity occurred after the initial fix became public on Github. To ensure protection against these types of exploits, TAG urges users and organizations to keep software fully up-to-date and apply security updates as soon as they become available. 0-day discovery, hotfix and patch TAG first discovered the 0-day, a reflected cross-site scripting (XSS) vulnerability, in June when it was actively exploited in targeted attacks against Zimbra’s email server. Zimbra pushed a hotfix to their public Github on July 5, 2023 and published an initial advisory with remediation guidance on July 13, 2023. They patched the vulnerability as CVE-2023-37580 on July 25, 2023.
TAG observed three threat groups exploiting the vulnerability prior to the release of the official patch, including groups that may have learned about the bug after the fix was initially made public on Github. TAG discovered a fourth campaign using the XSS vulnerability after the official patch was released. Three of these campaigns began after the hotfix was initially made public highlighting the importance of organizations applying fixes as quickly as possible.
With targets in Greece, Moldova, Tunisia and Vietnam, it’s a good reminder that although much of our cybersecurity news is UK/US focused, serious attacks are global and affect governments all around the world.
All of these attacks used the same exploit, making it so that clicking a link in an email in the Zimbra webmail system could inject a script into the webmail client, enabling the attackers to do anything with the webmail system that the user could, such as read emails, forward emails or exfiltrate data.