Cyberweekly #233 - AI both is and isn't an existential threat

Published on Sunday, December 03, 2023

I don't think that AI is coming for your job, and I doubt that AI is going to rise up and pose a fundamental existential threat to humanity.

But I do think that the rapid rise of AI (by which we almost always mean large language models and ML these days) is creating some fundamental new risks to the way we live our lives and do our business.

It's not that I anthropomorphise AI, thinking that it's becoming self-aware, that it might have desires or feelings, or that it could rip free of its controls and dominate the world. Rather, I see it as a form of incredibly dangerous shadow IT for organisations, one that might actually be impossible to control and that creates information and security risks that are too complex or difficult to mitigate.

In some cases, that might not be a bad thing, as sometimes security is the enemy: the bit of the organisation that is the most hidebound and resistant to change and new things. Some security teams see a threat in everything that we do, and are often not set up to consider the business opportunities and capabilities that could be made available.

But almost all of our information security concepts from the last 50 years of computerisation are predicated on the assumption that we can manage information held within the organisation and be careful about how we let information out of it.

Almost all AI tools give you significant amounts of power in exchange for access to your data and information, and many of them can reach your corporate data through the actions of a single individual in the organisation. The person who installs a new browser extension that can read every page they look at, even if it's your corporate intranet. The calendar startup that can read every meeting you run, the attendees and timings, and work out which ones are important or unusual. The mail-replying bot that can read every email sent to the user, even the boring-seeming ones sent to large internal mailing lists.

These uses of AI are trivially easy for users to enable, and they exfiltrate internal company data to the company involved. In most cases, we know almost nothing about those companies and how they think about security. Most AI companies are building at such breakneck speed that they might, if we are lucky, be thinking about the security of their own platforms and ensuring that their product isn't being misused. But far less attention is paid to the information flowing into those organisations: potential mergers and acquisitions, internal reports on operations that would merit an SEC investigation, communications of intent to move into and launch in new markets.

To the individual who just wants a more organised inbox, or wants to write better code, the value of the information is almost nil. It's almost impossible for someone to realise that these signals might exist in the calendars they can see, the tone of the corporate emails sent out, or the kind of code they are writing today. But to the aggregators of this information, not just the big AI companies, but the startups who are building on top of this ecosystem, that information might be valuable and interesting.

The horse has bolted in this case, and developers, case officers and office workers are already starting to expect that they can use these tools in their everyday lives. Procedures within your organisation that prohibit these tools will be ignored, or worked around, by people who feel overworked and undervalued.

An interesting job was flagged to me this weekend: the AI Safety Institute in the UK is looking for a CISO, someone who can help protect the AI Safety Institute itself as it grapples with the technologies it needs to use, but also help ensure that security is part of everything that is considered when talking about AI.

AI poses, in my mind, a significant threat to the security of our organisations, structures and information. Its ease of use and its ability to absorb and analyse large amounts of information make it one of the most capable actors out there. Luck may determine whether the Eye of Sauron that is the attention of the analysts in those organisations actually cares about our information, but regardless of what we think about it all, it's fast becoming the reality for most organisations operating today, and it's only going to become more present. If you are only now thinking about the right governance and processes in your organisation, then you are probably too late.

But as always, the technology isn't the issue at its heart here. The issue is that this is trivially easy-to-use shadow IT that can bypass all of your controls. If your staff care about the value of the information they hold, and if they have tools that already feel as productive as they can be, then maybe they'll think twice about each new system they enable. Here, as in many other places, your staff are your first line of defence, and your best impact comes from trusting them and enabling them to do the right thing.

    Snyk's 2023 AI-Generated Code Security Report | Snyk

    https://snyk.io/reports/ai-code-security/

    80% bypass security policies to use AI, but only 10% scan most code

    While most organizations of respondents had policies allowing at least some usage of AI tools, the overwhelming majority reported that developers bypass those policies. In other words, the trust in AI to deliver code and suggestions is greater than the trust placed in company policy over AI.

    This creates tremendous risk because, even as companies are quickly adopting AI, they are not automating security processes to protect their code. Only 9.7% of respondents said their team was automating 75% or more of security scans, even though developers using AI tooling are likely producing code more quickly. This lack of policy compliance plus increased code velocity makes automated security scanning even more important than ever before.

    There’s huge amounts of risk with AI, but this is, I think, the riskiest part of it all. The vast majority of companies that have in-house development teams are probably using AI-generated code whether they want to or not, and development teams don’t have the right policies, or even the understanding needed to set the right policies, to make the best of this situation.

    Should development teams be using GitHub Copilot on the code they write? I don’t think we know yet as an industry, and yet it’s almost impossible to prevent, and at the individual developer level it’s difficult to understand the reasons why you might not. Policies that forbid the use of AI tools are likely to make sense at the organisational level and yet rankle at the individual level, and that’s what makes these tools so dangerous.

    Effective obfuscation - by Molly White - Citation Needed

    https://newsletter.mollywhite.net/p/effective-obfuscation

    Some have fallen into the trap, particularly in the wake of the OpenAI saga, of framing the so-called “AI debate” as a face-off between the effective altruists and the effective accelerationists. Despite the incredibly powerful and wealthy people who are either self-professed members of either camp, or whose ideologies align quite closely, it’s important to remember that there are far more than two sides to this story.

    Rather than embrace either of these regressive philosophies — both of which are better suited to indulging the wealthy in retroactively justifying their choices than to influencing any important decisionmaking — it would be better to look to the present and the realistic future, and the expertise of those who have been working to improve technology for the better of all rather than just for themselves and the few just like them.

    Great analysis from Molly, who has earned the ire of many crypto-bros by being a Web3 and crypto doubter, and who is now moving on to being an AI doubter.

    Don’t be fooled by all the talk of AI doom or AI essentialism. Sure, some of the questions raised might be interesting, but by and large it’s a bit of a sideshow compared to the impact of improved AI tools on everyday working life.

    Guidelines for secure AI system development - NCSC.GOV.UK

    https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development

    AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.

    For this reason, the guidelines are broken down into four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance. For each section, we suggest considerations and mitigations that will help reduce the overall risk to an organisational AI system development process.

    There’s some great stuff in this guidance, although it does suffer from the fact that AI is being adopted repeatedly throughout the supply chain. The guidance acknowledges this further down, but it models the world into “AI Provider” and “AI User”, and assumes that providers are managing the data, the prompts, and the responses.

    But of course, with ChatGPT’s new marketplace and the continual development of new startups built on top of existing technologies, much of the time your provider will actually just be a user of another AI provider further down the line.

    The guidance here works well if you are a provider of direct AI capabilities, or if you are a provider who is one degree removed, building on top of a first-tier provider like ChatGPT, Anthropic, etc. But if you are further up the stack, you’ll need to focus on determining whether your providers meet these requirements as well as meeting them yourself.

    Will ChatGPT write ransomware? Yes.

    https://www.malwarebytes.com/blog/news/2023/11/will-chatgpt-write-ransomware-yes

    Eight months ago I concluded that “I don’t think we’re going to see ChatGPT-written ransomware any time soon.” I said that for two reasons: Because there are easier ways to get ransomware than by asking ChatGPT to write it, and because its code had so many holes and problems that only a skilled programmer would be able to deal with it.

    ChatGPT has improved so much in eight months that only one of those things is still true. ChatGPT 4.0 is so good at writing and troubleshooting code it could reasonably be used by a non-programmer. And because it didn’t raise a single objection to any of the things I asked it to do, even when I asked it to write code to drop ransom notes, it’s as useful to an evil non-programmer as it is to a benign one.

    And that means that it can lower the bar for entry into cybercrime.

    That said, we need to get things in perspective. For the time being, ransomware written by humans remains the preeminent cybersecurity threat faced by businesses. It is proven and mature, and there is much more to the ransomware threat than just the malware. Attacks rely on infrastructure, tools, techniques and procedures, and an entire ecosystem of criminal organisations and relationships.

    For now, ChatGPT is probably less useful to that group than it is to an absolute beginner. To my mind, the immediate danger of ChatGPT is not so much that it will create better malware (although it may in time) but that it will lower the bar to entry in cybercrime, allowing more people with fewer skills to create original malware, or skilled people to do it more quickly.

    This is both worrying and, as Marcus says, probably not the biggest worry. For capable operators, this isn’t going to change the price of fish; it’s not as if the barrier to entry is an inability to write the ransomware code itself, especially with the rise of Ransomware as a Service operators.

    But lowering the barriers to entry makes it easier for that market to broaden, and provides a far easier first step for budding criminals to get embedded in a world that they may regret being part of.

    “Hilarious” Conference Scandal

    https://buttondown.email/grugq/archive/hilarious-conference-scandal/

    A scandal and dumpster fire happening on Twitter with some guy who is running a conference. I’ll leave the threads here because it is just a wild ride.

    Basically, the organizer of the DevTernity conference is accused of adding fake female speakers to the website to make it appear more diverse.

    […]

    It gets even crazier, of the three fake women speakers at least one of them is an Instagram account run by @eduardsi himself

    And the proof it is definitely him? Bad OPSEC gives it away

    This looks like awful behaviour, and seems to have been going on for years. This went from “Hilarious” in air-quotes to “Awful” when it was discovered that the event organiser had also been running a female-presenting sock puppet account in tech for a number of years. Of course, the ease of generating images with AI makes it ever easier to set up sock puppets that look realistic.

    OMGCICD - Attacking GitLab CI/CD via Shared Runners

    https://pulsesecurity.co.nz/articles/OMGCICD-gitlab

    Let’s start with a high level diagram explaining a basic shared-runner attack. Here’s what the general infrastructure might look like: Two different staff members sharing a CI/CD system. An intern that only has access to a specific project, and a staff engineer that has access to the CoreProductionAPI which is our pretend organisation’s main product:

    Let’s imagine we have an attacker who has compromised the intern (or we just hired a particularly malicious intern…). Using a malicious pipeline, this attacker can now compromise the shared runner which will continue to also be used to deploy the production system. Inevitably, this exposes production credentials to the attacker.

    Since one shared runner is used to execute both pipelines, the aim of the game for the attacker becomes determining how the pipelines are isolated from each other on that runner. Often, they aren’t.

    This sort of attack on shared core infrastructure is an interesting pattern that isn’t yet well understood by a lot of infrastructure engineers. The systems you use to run the build and testing phases of your pipeline need to be able to execute code that is given to them, but also need to not run malicious code.

    Sandboxing these environments between projects is hard, and true separation through physically distinct build boxes per project is expensive and wasteful. But not separating them really can put you at risk from a number of fairly sneaky, advanced, but incredibly damaging attacks.
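
    To make the pattern concrete, here is a minimal, purely illustrative sketch (not taken from the article) of the kind of .gitlab-ci.yml a low-privileged user could commit to poke at a shared runner; the job name and paths are hypothetical, and a real attack would be tailored to the specific executor in use.

        # Hypothetical malicious pipeline committed by a low-privilege user.
        # If the shared runner does not isolate jobs, this gives an attacker a
        # foothold on the same machine that later runs production deployments.
        stages:
          - build

        poke-the-runner:
          stage: build
          script:
            # Dump whatever the runner exposes to this job: CI variables, tokens, etc.
            - env | sort
            # Look for caches, Docker sockets or credentials left behind by
            # other projects' jobs that ran on the same host.
            - ls -la /var/run/docker.sock /cache 2>/dev/null || true

    The details vary by executor (shell, Docker, Kubernetes), but the underlying question is the one the article asks: what, exactly, keeps this job away from the credentials used by the next pipeline on the same runner?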

    Getting Started | How to Rotate Leaked API Keys

    https://howtorotate.com/docs/introduction/getting-started/

    What are the security implications of a leaked secret? Depending on the permissions and third-party services involved, a leaked key could provide attackers with the means to orchestrate sophisticated social engineering campaigns or gain control over your entire online infrastructure.

    Remediating leaked API keys and tokens requires a systematic and efficient approach. The most secure way to remediate a leaked secret is key rotation. What is Key Rotation? Key rotation refers to the process of (1) generating a new API key, (2) rendering the compromised key obsolete, and (3) updating the associated systems with the new key (like your CI/CD pipeline). This practice helps minimize the impact of an exposed API key or password. Important: Before you rotate an API key, it’s important to ensure you don’t disrupt any applications/services using that API key. Review which applications/services use the affected API, and make sure you have the appropriate permissions to change the API key once rotated.

    This is fabulous, and it is slightly depressing that it needs to exist. It is a series of tutorials for teams to walk through if they have managed to leak a key somehow, covering most of the major secrets that will be used in a modern application.

    Note that the approach is always the same: generate a new key, rotate the key in use in production, and then invalidate the existing key. That playbook is worth testing with your systems and staff to ensure you can execute it if you ever detect a leaked key.
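
    As a sketch of what that playbook can look like in practice, here is a hedged example for an AWS IAM access key using boto3; the user name and key ID are hypothetical placeholders, and other providers follow the same create, deploy, invalidate pattern.

        # Minimal sketch of the rotate-then-revoke playbook for an AWS IAM access
        # key. The user name and old key ID below are hypothetical placeholders.
        import boto3

        iam = boto3.client("iam")
        USER = "ci-deploy-user"            # hypothetical IAM user whose key leaked
        OLD_KEY_ID = "AKIAOLDLEAKEDKEY00"  # the compromised access key ID

        # 1. Generate a replacement key.
        new_key = iam.create_access_key(UserName=USER)["AccessKey"]
        print("New key created:", new_key["AccessKeyId"])

        # 2. Update the systems that use the key (CI/CD secrets, config stores,
        #    etc.) with the new AccessKeyId/SecretAccessKey and confirm they
        #    work before touching the old key.

        # 3. Invalidate the compromised key: deactivate first so you can roll
        #    back, then delete once nothing still depends on it.
        iam.update_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID, Status="Inactive")
        iam.delete_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID)

    AWS allows an IAM user two access keys at a time, which is what makes this overlap-then-retire sequence possible without downtime.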

    How to Build Trust - Jacob Kaplan-Moss

    https://jacobian.org/2023/nov/16/how-to-build-trust/

    If you, like me, are a regular reader of Alison Green’s Ask a Manager, you’ll notice a running theme: much of the time, the problem isn’t the problem the writer is asking about, the problem is that their boss sucks.

    In many of these cases, what “your boss sucks” really means is something like “they are failing at the job of management”. We see managers refusing to deal with improper workplace behavior, avoiding performance conversations with people whose poor work is impacting the rest of the team, enforcing policies without nuance, etc. Needless to say, these managers don’t win their team’s trust!

    So, building trust has to start with being good (or at least competent) at the basics of management. This means living some basic values – honesty, integrity, kindness, etc. – and it means doing the work: one-on-ones, feedback, coaching, performance management, project/process management, and so forth.

    Importantly: you need to be able to do these basic activities even in a low-trust environment. For example, you can’t wait to give feedback until you’ve won the trust of your team; you have to learn to give good feedback (i.e. fair, actionable, positive as well as negative, etc.) even when you don’t have a ton of trust yet. And, you’ll find that when you do, your team seeing you do your job will help build that trust.

    The key observation here is that the techniques that help build trust are also good management. They aren’t things you do just so you can build trust and then relax; they’re activities and behaviors that you do regardless of the level of trust on your team.

    […] Give credit; take blame

    As a manager, you are responsible for the combined output of your team. This means that when your team scores a win, you do deserve some credit for it. It can be tempting to take accolades without acknowledging the folks on your team who did the work. Don’t! Nobody likes a credit-stealing manager!

    Instead, make sure to always credit the people on your team who contributed to a success. When the team wins, make sure the narrative is that it’s all because of the work of the individual(s) on the team. Try to make your role in the success invisible. On the other hand, when your team stumbles, make it your fault. The narrative should be: the team did their best, but the surrounding structure was wrong. It was a management failure, not on any individual.

    “Give away your toys”

    Trust is reciprocal: when you demonstrate that you trust the folks on your team, they’re likely to return that trust. One way to extend trust is delegation: giving people on your team the opportunity to take on some of the leadership aspects of your role. When you trust them to do work that’s more important or visible, you’ll help them trust you.

    But don’t delegate the boring, tedious, or uncomfortable parts of your job; instead, “give away your toys”. The best work to delegate – both in this trust-building context and more generally – is the work that you yourself love.

    There’s lots of good advice in this article, but of all of the things called out that are both good management and help build trust in your team, the two that really stand out to me are giving credit while taking blame, and giving away your toys.

    Giving credit but taking blame has a powerful psychological impact on individuals in your team, and also hints at your role as a manager. Your role is to create an enabling context for your staff to succeed, and if they’ve failed, then that means that you failed.

    But this second one, about giving away your toys, is critical and somewhat new to me. It’s natural to delegate away the things you don’t want to do. But of course, your staff want to work on the most exciting and interesting projects, which are the ones you are least likely to naturally delegate. I’d add to this advice that you need to remember that delegation is about giving away power and responsibility to your staff member. That means not being a “helicopter manager”, hovering and checking on them constantly, but instead investing trust in your staff that they can do the job without constant supervision.

    State of Cloud Security | Datadog

    https://www.datadoghq.com/state-of-cloud-security/

    Managing IAM permissions of cloud workloads is not an easy task. Administrator access is not the only risk to monitor—it's critical to also be wary of sensitive permissions that allow a user to access sensitive data or escalate privileges. Because cloud workloads are a common entry point for attackers, it’s critical to ensure permissions on these resources are as limited as possible.

    “Insecure default configurations from cloud providers optimize for the ease of initial adoption, but they are often at odds with the level of hardening required for most production deployments. Initial configurations such as over-privileged IAM roles and permissive firewall rules tend to have inertia—once deployed and working, they are more challenging to lock down after the fact. Therefore, the discipline of hardening the security of initial configurations early in the deployment lifecycle is a key virtue of a mature cloud-native organization's culture.” – Brad Geesaman, Staff Security Engineering, Ghost Security

    Interesting read, covering a number of factors we’ve talked about before, from long-lived access tokens to lack of MFA and overly broad permissions.

    But as pointed out here, the hard problem arises because the defaults in most out-of-the-box cloud environments are designed for speed and ease of development rather than for security. Using these features securely, from enforcing MFA to requesting and using short-lived tokens, makes it more complex to get started with the cloud.
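
    As a rough illustration of that short-lived token pattern, here is a hedged Python sketch using AWS STS to swap long-lived credentials for temporary ones with MFA in the mix; the role ARN and MFA device ARN are hypothetical placeholders, not anything from the report.

        # Minimal sketch: exchanging long-lived credentials for short-lived ones
        # via AWS STS. The role ARN and MFA device ARN are hypothetical.
        import boto3

        sts = boto3.client("sts")

        # Assume a role to obtain temporary credentials that expire automatically.
        resp = sts.assume_role(
            RoleArn="arn:aws:iam::123456789012:role/deploy-role",
            RoleSessionName="short-lived-deploy",
            DurationSeconds=3600,                                # one-hour lifetime
            SerialNumber="arn:aws:iam::123456789012:mfa/alice",  # MFA device
            TokenCode="123456",                                  # current MFA code
        )
        creds = resp["Credentials"]

        # Use the temporary credentials for subsequent calls instead of a
        # long-lived access key pair baked into config or code.
        s3 = boto3.client(
            "s3",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        print([b["Name"] for b in s3.list_buckets()["Buckets"]])

    It is more ceremony than pasting a long-lived key into a config file, which is exactly the friction the report describes, but the credentials expire on their own if they ever leak.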

    AI-dvent of Code 2023: Day 1 | Tales about Software Engineering

    https://beny23.github.io/posts/advent_of_code_2023_day_1_ai_experiments/

    So it is that time of the year again. Advent of Code is back. Yey!

    This means I get to try to look at a new language again. This time, why not Kotlin? But as an extra challenge, I thought why not see how the vaunted LLMs would help. Is AI really the accelerator that would elevate a mere developer to a rockstar ninja (whatever that is)?

    I have to add that I am a bit of an AI sceptic and keep saying that

    I welcome LLMs! Fixing the stupid mistakes that will be made will keep me in gainful employment well past my retirement age

    But it is easy to shoot from the hip, so I figured I’ll see whether it can help me to develop some code.

    I love Advent of Code; it’s a great opportunity to stretch your programming skills and try something new. Gerald never disappoints, writing an interesting blog post each year on how he is solving it, and his decision to tackle it with the support of AI means that this entire blog series is going to be fun to watch.

    In this edition, Gerald discovers that prompting the AI properly is an art in and of itself, and that you need to understand what it’s trying to tell you so you can separate the wheat from the chaff.