Cyberweekly #199 - Learning by doing

Published on Sunday, June 19, 2022

How do we learn stuff?

There's a lot of different theories about how people learn.

One of my teacher friends always gets quite frustrated at the common trope of "learning styles", because there's quite a lot of evidence debunking the learning-styles myth.

Instead, what's clear is that we all learn through a number of different means, and that simply having information quoted at us, to memorise and regurgitate, isn't good for anybody.

Humans need a variety of learning mechanisms: spaced repetition, memorisation and cramming, but also learning by doing and, critically, learning by failing.

In work and academia, we often prioritise a kind of "best practice" model of knowledge, as if there were just one good way of doing something, and that way were the only way to do it.

Instead, there's a lot of good practice, and some bad practice, that we can learn from the people around us. But most of the time, the thing we have to learn as we move from junior or novice to a more journeyman level is not just how things should be done, but what they feel like as they are going wrong.

When I learned to sail, I fell in the water a lot. Learning to trust the feeling of the boat, the way that the weight and wind would shift as you started to turn, and the tell-tale flapping noises when you didn't have the sail set quite right is something that can only be learnt through experience. Someone can tell you what to listen for, and they can explain what it will feel like, but your own experience is what centres those descriptions in your lived life and lets you work out for yourself how to do it.

When we talk about learning resources for our juniors, for people who are up and coming in the industry, we should be trying to ensure that they aren't just finding theoretical lists of knowledge. While that stuff might be important for knowledge of the facts, what is just as important is learning from how others fail: following along with someone as they do stuff right, and watching for the missteps they don't take. The SANS "New to Cyber Field Manual" doesn't just list some content you should learn, but finds and highlights communities to join, experts, mentors and conferences. Alongside that, I would add the sort of hack-me projects and sandboxes that are listed in a great recent Reddit thread about learning. The ability to sit and try stuff out, and see what works and what doesn't, is a really useful learning tool for juniors, seniors and masters alike.

Secondly, we never really stop learning. As Simon Wardley graphs beautifully in one of his threads of graphs, the more expert we become, the more we acknowledge how little we actually know. Learning has to be a continual experience, because the moment we think we understand everything about a subject is the most dangerous point. If we stop there, we mistake our limited view of the extent of a domain for knowledge of everything within it. This causes our ego to write cheques that our understanding simply cannot cash.

One of my favourite ways of improving my knowledge is by teaching others. Only when you start to try to explain something complex do you realise the edges of your understanding, and in attempting to answer deceptively simple questions, you find yourself having to think much more deeply about the subject. This introspection can really help us to understand fallacies or conceptual shortcuts that we took when we were junior, and to reevaluate them in light of our growing understanding. It's one of the reasons that I love a conference talk that is a retrospective on "how we did X". The speaker in those talks is rarely presenting a view of "this is the best way" or "follow me, for I know everything". Instead they are simply telling a story about how they faced a complex problem, and how they solved it. The level of introspection required to give that talk probably gives them significantly more insight into the problem than someone who simply did it, and listening to those stories always inspires me to find out more about this thing or that thing, rabbit holes that always fascinate, illuminate and educate me.

Why this self-reflective missive on education this week? Well, partly because it's a good theme to riff on. But also because the looming milestone of 200 CyberWeekly newsletters next week is causing me to reflect on my journey over the last 4 years, and why I write this newsletter. I'll save the real introspection for next week, but suffice it to say, I mostly write this newsletter for myself, because the act of reading, filtering, selecting, commenting and writing all helps me to think through a wide variety of topics and areas. Doing so out loud to hundreds of people each week can be intimidating, but ultimately, I think it drives me to be more thoughtful, to question more, and to consider stuff more carefully. In short, it keeps me learning, every week, week on week. And that can only be a good thing.

    7 Absolute Truths I Unlearned as a Junior Developer

    Not all experience is created equal.

    My experience coding in my bedroom, working as a student, working in CS research, and working at a growing startup are all valuable kinds of experience. But they aren’t all the same. Early in your career, you can learn 10x more in a supportive team in 1 year, than coding on your own (or with minimal feedback) for 5 years. If your code is never reviewed by other developers, you will not learn as fast as you can – by an enormous factor.

    I also learned that job titles don’t “make” you anything.

    It’s kind of like, being a CTO with a 5-person team is different than with a 50-person team or a 500-person team. The job and skills required are totally different, even if the title is identical. So just because I had a “senior” job title did not make me a senior engineer at all. Furthermore, hierarchical titles are inherently flawed, and difficult to compare cross-company. I learned it’s important not to fixate on titles, or use them as a form of external validation.

    Many conference talks cover proof of concepts rather than real-world scenarios.

    Just because you see a conference talk about a specific technology, doesn’t mean that company is using that tech in their day to day work, or that all of their code is in perfect shape. Often people who give conference talks are presenting toy apps rather than real-world case studies; it’s important to distinguish the two.

    Focus on automation over documentation where appropriate.

    Tests or other forms of automation are less likely to go out of sync. So instead I try to focus on writing good tests with clear language, so developers working on code I wrote are able to see how the project functions with working code. Another example is automating the installation of an application with a few comments, rather than a long and detailed installation guide.

    There’s a lot of good points in this article, so I’ve only pulled out a couple that really resonated with me. There’s also a really good meta point about how much you have to unlearn as you progress in your career. The point where you think you know everything is probably the most dangerous, because in some areas you are acting like a senior developer without the experience to back it up. With maturity and experience comes the ability to work out which lessons can be mapped to new organisations, and which were context specific. The real skill of a senior or principal developer is knowing just how much there is that you don’t know.

    Are TryHackMe paths "Complete Beginner" and "Cyber Defense" good for getting some basic knowledge about cybersecurity?

    I think you may need this btw. Here are some resources that I’ve come across which were very useful to me when I started learning to hack. Hopefully this can help you.

    This is a nice list of self-taught security resources that can help a budding security researcher in their goal to learn all there is to know.

    Scaling Appsec at Netflix (Part 2) | by Netflix Technology Blog | Jun, 2022 | Netflix TechBlog

    A few years ago, we published this blog post about how we had organized our team to focus our bandwidth on scalable investments as opposed to just traditional Appsec functions, which were not scaling well in our rapidly growing environment. We leaned into the idea of strategic security partnerships and automation investments to create more leverage for application security. This became the foundation for our current org structure with teams focused on Appsec Partnerships and Appsec Engineering. In this operating model, we provided critical Appsec operational services to Netflix — including bug bounty, pentesting, PSIRT (product security incident response), security reviews, and developer security education — via a shared on-call rotation.

    Over the past few years, this model has allowed us to focus on investments like Secure by Default for baseline security controls, Security Self-Service for clear actionable guidance and Vulnerability Scanning at scale for software supply chain security. We wanted to share an update on learnings from this model, how our needs have evolved, and where we expect to go from here.

    Say it with me: “You are not Netflix”. Just because Facebook, Apple, Amazon, Netflix or Google do something doesn’t mean that you should. But if you’ve got a reasonably sized budget for an AppSec team, then looking at how someone like Netflix scales its security programme is worth doing. The original post was good, but this follow-up, a few years on, gives a good look back on what worked and what needed improving.


    One of the most asked questions we get is

    “How Do I Get Started in Cybersecurity?”

    Unfortunately, there isn’t a simple answer that works for everyone. This guide was created to help YOU figure out the best path to get into cybersecurity. Use it to help develop your skills and find a network of people to support you getting into the industry.

    This is a nice summary of some of the better news sources, community and training that you can pick up. Of course it emphasises the SANS community of mentors and leaders, but if you are getting into cybersecurity, it's most certainly not a bad start, and the advice is all stuff I agree with as well.

    A Cyber Threat Intelligence Self-Study Plan: Part 1 | by Katie Nickels | Katie’s Five Cents | Medium

    There are many ways to learn. While some people prefer to have a live instructor in a course, others are great at doing self-study. I teach SANS FOR578: Cyber Threat Intelligence, which is a great course if you want to learn about cyber threat intelligence (CTI), but I realize not everyone can afford it. Here’s the good news: if you are committed, you can learn a lot of the same concepts that paid courses teach, but on your own. It won’t be the same, but you can still learn a ton if this learning style works for you. I wanted to share a self-study plan to help out anyone who wants to take the initiative to learn about CTI. There are lots of great resources out there, but I realize as you’re starting out that someone saying “go look at all the things!” isn’t that helpful because you’re not sure where to look. My goal is to bring together free resources I’d recommend studying and provide a minimal framework and question to help tackle them.

    This is an absolutely fabulous reading list, and well worth going down even if you are somewhat experienced at threat intelligence. Starting with Sherman Kent and analytical doctrine is so important, but easy to miss if you are self-taught. Kent brings it home to the fact that threat intelligence must have a purpose (policy, in Kent’s day), and it must have rigour. Many professional cyber threat intelligence practitioners could do with a refresher on this.

    I note that the link to the summary of the Kent doctrine has moved on the CIA website (skip to page 9 for the doctrine itself)

    April 2022 Incident Review | Heroku

    According to GitHub, the threat actor began enumerating metadata about customer repositories with the downloaded OAuth tokens on April 8, 2022. On April 9, 2022, the threat actor downloaded a subset of the Heroku private GitHub repositories from GitHub, containing some Heroku source code. Additionally, according to GitHub, the threat actor accessed and cloned private repositories stored in GitHub owned by a small number of our customers. When this was detected, we notified customers on April 15, 2022, revoked all existing tokens from the Heroku Dashboard GitHub integration, and prevented new OAuth tokens from being created. We began investigating how the threat actor gained initial access to the environment and determined it was obtained by leveraging a compromised token for a Heroku machine account. We determined that the unidentified threat actor gained access to the machine account from an archived private GitHub repository containing Heroku source code. We assessed that the threat actor accessed the repository via a third-party integration with that repository. We continue to work closely with our partners, but have been unable to definitively confirm the third-party integration that was the source of the attack. Further investigation determined that the actor accessed and exfiltrated data from the database storing usernames and uniquely hashed and salted passwords for customer accounts. While the passwords were hashed and salted, we made the decision to rotate customer accounts on May 5, 2022, out of an abundance of caution due to not all of the customers having multi-factor authentication (MFA) enabled at the time and potential for password reuse.

    The actor managed to get into Heroku’s system via some unknown method, presumably through a token in something.
    That token took them to the Heroku build tokens, which got them into the build system, which got them into source code, which contained more tokens, which got them into databases and on and on it goes

    This kind of slow unravelling of your system is a nightmare, because no individual decision can fix it. Secondly, in order for systems to run, those tokens and access credentials must be available to running systems at some point, and so this kind of attack is almost impossible to practically defend against. Sure, one can theoretically replace almost all of these tokens with public/private key cryptography and use HSMs, but that’s expensive, slow, prone to breakage, and when you are talking about a wider ecosystem like this, at least one of your suppliers or partners isn’t going to support that mechanism.

    Detection here seems to be key. Enumerating metadata about customer repositories using those tokens should be an anomalous action that could have been detected. Even better, if some of those tokens had had limited scopes and grants, it would have been even more obvious that they were being used from a location or in a way that wasn’t intended. But actually detecting that in the noise of all these tokens being used correctly is stupidly hard.
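The scoped tokens mentioned above are worth a sketch. Enforcing an explicit scope list at the API boundary both limits what a stolen token can do and turns out-of-scope requests into a clean detection signal. A minimal illustration in Python, with entirely hypothetical token and scope names (nothing here is Heroku's or GitHub's actual API):

```python
# Minimal sketch of scope-limited API tokens (all names hypothetical,
# not Heroku's or GitHub's real API). Each token carries an explicit
# set of granted scopes; any request outside that set is refused,
# which is exactly the kind of anomaly a detection pipeline can alert on.

class ScopeError(Exception):
    """Raised when a token is used outside its granted scopes."""

# token -> granted scopes; in a real system this would be a signed claim,
# not an in-memory dict
TOKENS = {
    "build-bot-token": {"repo:read"},
    "admin-token": {"repo:read", "repo:metadata"},
}

def require_scope(token: str, scope: str) -> None:
    granted = TOKENS.get(token, set())
    if scope not in granted:
        # An out-of-scope request is a strong detection signal
        raise ScopeError(f"token lacks scope {scope!r}")

def enumerate_repo_metadata(token: str) -> str:
    # The kind of action the attacker performed with stolen OAuth tokens;
    # a build-only token fails here rather than silently succeeding
    require_scope(token, "repo:metadata")
    return "metadata listing"
```

With this shape, a build bot's token simply cannot enumerate repository metadata, and the refused attempt is loggable rather than lost in the noise of legitimate use.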

    We don’t often get to see the details of a true APT attacking an infrastructure provider, but this shows that they can be patient, careful and methodical about their attacks.

    Symbiote: A New, Nearly-Impossible-to-Detect Linux Threat

    What makes Symbiote different from other Linux malware that we usually come across, is that it needs to infect other running processes to inflict damage on infected machines. Instead of being a standalone executable file that is run to infect a machine, it is a shared object (SO) library that is loaded into all running processes using LD_PRELOAD (T1574.006), and parasitically infects the machine. Once it has infected all the running processes, it provides the threat actor with rootkit functionality, the ability to harvest credentials, and remote access capability.

    The Birth of a Symbiote

    Our earliest detection of Symbiote is from November 2021, and it appears to have been written to target the financial sector in Latin America. Once the malware has infected a machine, it hides itself and any other malware used by the threat actor, making infections very hard to detect. Performing live forensics on an infected machine may not turn anything up since all the files, processes, and network artifacts are hidden by the malware. In addition to the rootkit capability, the malware provides a backdoor for the threat actor to log in as any user on the machine with a hardcoded password and to execute commands with the highest privileges.

    Since it is extremely evasive, a Symbiote infection is likely to “fly under the radar.” In our research, we haven’t found enough evidence to determine whether Symbiote is being used in highly targeted or broad attacks.

    Symbiote is very stealthy. The malware is designed to be loaded by the linker via the LD_PRELOAD directive. This allows it to be loaded before any other shared objects. Since it is loaded first, it can “hijack the imports” from the other library files loaded for the application. Symbiote uses this to hide its presence on the machine by hooking libc and libpcap functions.

    This is a very smart set of evasive techniques. Once the malware is on the machine, discovering it with any security software is going to be very hard. Your best bet then is to try to catch this before it lands.
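LD_PRELOAD interposition itself is a linker mechanism for C shared objects, but the core idea — wrap a trusted function so that every caller transparently gets the attacker's version — can be illustrated in Python by monkey-patching `os.listdir` to hide files, much as a rootkit does. This is an analogy only, not Symbiote's actual code:

```python
import os

# Illustrative analogy of LD_PRELOAD-style interposition (not Symbiote's
# actual code): wrap a trusted function so every caller transparently
# gets the hooked version. Here we hide any directory entry with a
# hypothetical malware prefix, the way a rootkit hides its files.

HIDDEN_PREFIX = "symbiote_"

_real_listdir = os.listdir  # keep a handle on the real implementation,
                            # like dlsym(RTLD_NEXT, ...) in a C hook

def hooked_listdir(path="."):
    # Forward to the real function, then filter out "hidden" entries
    return [e for e in _real_listdir(path) if not e.startswith(HIDDEN_PREFIX)]

os.listdir = hooked_listdir  # every caller now sees the filtered view
```

In the real malware, the shared object exports its own versions of libc and libpcap functions; because LD_PRELOAD loads it before everything else, the dynamic linker binds other code's calls to the malicious copies, which is why live forensics on the box can come up empty.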

    Private Access Tokens: eliminating CAPTCHAs on iPhones and Macs with open standards

    Today we’re announcing Private Access Tokens, a completely invisible, private way to validate that real users are visiting your site. Visitors using operating systems that support these tokens, including the upcoming versions of macOS or iOS, can now prove they’re human without completing a CAPTCHA or giving up personal data. This will eliminate nearly 100% of CAPTCHAs served to these users.

    What does this mean for you?

    If you’re an Internet user:

    • We’re making your mobile web experience more pleasant and more private than other networks at the same time.
    • You won’t see a CAPTCHA on a supported iOS or Mac device (other devices coming soon!) accessing the Cloudflare network.

    This is an interesting technology. It uses a form of public/private key cryptography, backed by the hardware in your phone, to create an unforgeable token that can be sent to Cloudflare to validate that you have a real phone.

    Interestingly, that suggests that Cloudflare, at least, think CAPTCHAs are primarily there to solve the problem of scalable fake devices being used as bot farms. But pictures from the last few years of review farms show thousands of low-end devices in hardware rigs creating reviews or clicking adverts. In those cases, the device is real, but the user is not.

    Hertzbleed Attack

    Hertzbleed is a new family of side-channel attacks: frequency side channels. In the worst case, these attacks can allow an attacker to extract cryptographic keys from remote servers that were previously believed to be secure. Hertzbleed takes advantage of our experiments showing that, under certain circumstances, the dynamic frequency scaling of modern x86 processors depends on the data being processed. This means that, on modern processors, the same program can run at a different CPU frequency (and therefore take a different wall time) when computing, for example, 2022 + 23823 compared to 2022 + 24436. Hertzbleed is a real, and practical, threat to the security of cryptographic software. We have demonstrated how a clever attacker can use a novel chosen-ciphertext attack against SIKE to perform full key extraction via remote timing, despite SIKE being implemented as “constant time”.

    I don’t really agree that this is a real and practical threat to the security of cryptographic software. It’s a really nice advancement of theoretical attacks that can prompt much interesting further research, and it enables researchers to turn power-differential attacks into remote timing-based attacks, which is a massive expansion of the vulnerability class. But this particular case relies on some errors in the way that SIKE is implemented, which can be fixed in software. Without the odd behaviour of SIKE implementations when facing a 0-block, detecting the additional timing is hard.

    Secondly, all timing-based attacks tend to be tested on target vulnerable systems that are not otherwise busy with variable workloads. The attacks are incredibly sensitive to noise, which makes them slow and painful to carry out.

    Finally, for some cryptographic operations, certainly the most common ones such as those in TLS, the use of perfect forward secrecy, meaning the creation of fresh ephemeral keys for each session, is designed to resist the exploitation of this kind of attack.

    In other words, this is a big step forward in vulnerability research for cryptographic attacks. If you make cryptographic hardware, rely on SIKE in a specific mode, or are worried about attackers who will spend millions of dollars and years of researcher time weaponising this, then you should worry. For the rest of us, the most common cryptographic primitives still feel about as secure today as they did a week ago.
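Hertzbleed is novel because it leaks through CPU frequency scaling even when the instruction sequence is data-independent, but the general idea of a timing side channel is easiest to see in the classic case: code whose running time depends on a secret. A minimal Python sketch of that general concept (not of Hertzbleed's frequency channel):

```python
import hmac

# Classic timing side channel, for illustration of the general concept.
# Hertzbleed's contribution is showing that even "constant time" code can
# leak, via data-dependent frequency scaling rather than early exit.

def naive_equal(secret: bytes, guess: bytes) -> bool:
    # Returns at the FIRST mismatching byte, so the running time leaks
    # how long a correct prefix the attacker has found
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, removing the length-of-match timing signal
    return hmac.compare_digest(secret, guess)
```

The early-exit version is the sort of bug timing attacks have exploited for decades; the constant-time version closes that channel at the instruction level, which is precisely the defence Hertzbleed shows can still be undermined by the hardware underneath.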

    What’s wrong with delivery management? | by Jonny Williams | Dec, 2021 | Medium

    “Delivery” as a concept is problematic. So much so that I’ve started to resent its use in many conversations. Delivery is regularly used to infer that a “thing” will be delivered. In a lot of organisations, this means the delivery of a project with tangible outputs. In the context of an agile environment, we can rationalise the “thing” to be the delivery of steps towards a goal. However, in my experience, the delivery of a “thing” can regularly be the wrong course of action. Delivery should change direction, or even stop, as soon as we recognise a bad idea (or a bad goal). In a context with fixed scope, this might mean nothing has been delivered within scope. Or to phrase it in a way that I have heard from senior colleagues in the past “nothing has been delivered”. How do you square that as a DM? Especially if you are measured based upon output. In order to avoid delivering bad ideas, teams often segregate the thinking from the doing or the design from the delivery. Having a discovery phase before a delivery phase. Defining delivery as its own entity leads us back down the rabbit hole of a design, implement, chuck it over the fence lifecycle. John Cutler recently said, “I’m seeing ads for something called a delivery manager. Who takes care of the software after it is born?”. Delivery is often used to mean “the implementation of pre-defined ideas”, but if we see delivery as the goal (or the strategy) we can easily end up in feature factory mode, delivering many “things” that add zero value to anyone, which end up abandoned because nobody thinks about life after delivery.


    I had to work hard within teams to qualify the management aspect of my DM job title to be about managing the environment we all worked in or the blockers that got in our way. However, being expected by senior colleagues to represent the team in townhalls or performance manage engineers actively worked against the idea that the team didn’t need a “manager” from a waterfall world. We have an appetite to retain old ways of working while attempting to apply new ways of thinking.


    Fundamentally delivery manager is a problematic job title because one person alone cannot be responsible for delivery. As Allen Holub says, “I’ve lately seen the title “Agile Delivery Lead.” There is no such [thing]. It’s the entire team’s job to deliver.” Delivery is a team sport. I regularly hear “Well, you are responsible for delivery” from senior colleagues, and I always feel like saying “…no I’m not, we all are”. If anyone ever doubts that, I would love to see what type of product or service a room full of DMs would deliver alone. It probably wouldn’t be anywhere near as good as one delivered by a multi-disciplinary team.

    This is a cracking review of the problem of "Delivery Management" as a profession and job title.

    There's a lot of history here, with GDS's decision over a decade ago to pick a job title that didn't align to any existing role, and also didn't align directly with any specific delivery methodology.

    But in many organisations, the temptation to simply rename existing people and processes, without actually changing what they do, pollutes the space and the concept, and means that many people who are great at their job end up calling what they do "delivery management".

    One of the hardest things in big distributed organisations is ensuring that language is being used consistently and appropriately by everyone. Otherwise you end up with people talking past one another without even realising it.

    This kind of self-aware critique of our own conceptions is vitally important to improving the entire community's self-actualisation and coming to a shared language. A great read, thoroughly recommended.