We're bad as an industry at passing on knowledge. Much of the digital transformation of the last 10 years was a repeat, or reapplication, of the agile evolution in software development, which mostly came from the Agile Manifesto of 2001. But much of that, in turn, was taken from people who learned the lessons that manufacturing had to learn from Toyota and the Toyota Production System, dating back to 1975. We can trace OKRs back through Google to Intel, and to Andy Grove in the 1970s.
We find that many people learn by rote; a good example is someone copying code from Stack Overflow without understanding it. I've noticed people starting to talk about moving away from building "single page applications" and considering the idea of server-side rendering. Most of these people were not around for the origins of the internet, for building websites in PHP and Java Server Pages and all that "fun". But somehow, as an industry, we've managed to fail to archive, summarise and pass on that knowledge.
And it's not even just as an industry; we're bad at this within teams. How do you explore new ideas and concepts as a team? How easy is it to look back at the exploratory projects that teams have done over the last 5, 10 or 20 years, read the outputs, or build on what they learned? I can assure you that many of the problems you face now are very similar to those faced by staff that far back, and if you are lucky, you'll work with clever people from then who are still around. But with staff turnover the way it is, there's a good possibility that none of the people who made a decision even 2 years ago are still around and available to answer your questions.
One thing we can do is actively prioritise taking time to research an area before we start: looking at other industries, other teams, and our own history, to see if there's anything we can learn. Secondly, we can make an effort to teach people. When we explain things to others, we have to arrange our thoughts and talk them through logically. If we do that, and document it, we make it far more likely that the people who ask in 5 years' time why something is the way it is can find the answer.
Know the policy history. Most things have been tried before. People move about and lessons are quickly forgotten or, on occasion, buried. Repeating a prior error is wasteful but equally those errors may have arisen for reasons no longer current or relevant.
Don’t assume there is a reason for everything. Don't assume that nothing has a reason. Remain intolerant of blocks and delays but be strategic about how you display that intolerance.
Strongly hierarchical, vertically integrated organisations can turn policy into action quickly but perform very poorly when required to address 'system' challenges. Frame your work accordingly.
Government is not really a 'thing'. It is a patchwork composed of diverse organisations with differing cultures, priorities and processes. There are more differences than commonalities: an adaptive and flexible approach is required.
There are a few of these that are relevant and interesting, but these last four are applicable to many large organisations. Once an organisation passes a certain size, it's hard to keep up organisational memory, share knowledge, and get horizontal, silo-crossing work agreed and done.
An inability to ask for help is not the domain of introverts. We certainly make it a more laborious mental process, but in any group of humans where there are those who know and those who are learning, the latter population is hesitant to ask for help because they don’t want to appear dumb to those who have clearly figured it out.
This is ludicrous. Those who know would greatly benefit from teaching those who don’t know, and those who don’t know would equally benefit from learning.
Yeah. We’re in a hurry. We have a deadline. Everyone is scurrying around so competently. In this hurry, we create the erroneous perception that stopping to teach is somehow slowing the team down when the reality is that we are not just investing in future speed, but in team health: the selfless act of teaching is one of the greatest accelerants to building trust in a team.
Taking the time to pass on your knowledge is not only valuable for reducing the team's bus factor; in many cases, articulating the explanation helps the existing team members sort, coordinate and arrange their own thoughts, and can often produce the kind of mental leap that solves otherwise stuck problems. This is very similar to Rubber Duck Debugging, but the output is not just better-organised thoughts: it's clear documentation for everyone who comes after you.
All complex systems are faced with the same problem: although humans are fallible and make mistakes, they cannot be designed out. This is not just because humans design, maintain, operate and promulgate the technology, tools and tasks that allow regular system function, but also because they keep all these disparate components together. Complex systems themselves are naturally unsafe. It is the people and teams within them that allow them to achieve high standards (Dekker 2002).
It is with this in mind that there has been much speculation on how to learn from other industries to address safety issues. We at Great Ormond Street Hospital were able to learn from the Ferrari F1 team, which comprises a complex system, and apply this knowledge to improving a critical handover process, thus developing new ways to think about safety in high risk surgical care.
Though it is possible to argue that these small problems were not affecting patient care, our research (Catchpole et al. 2006), and the work of others in the growing field of patient safety, was starting to suggest that the small things really do matter. The high risk of handovers was identified in the Bristol Royal Infirmary Inquiry, while previous research at Great Ormond Street Hospital found problems in handovers, with several recent events and near misses identified as partly attributable to this poor performance. We felt that these risks could relatively easily be reduced by small process changes, but we needed to understand how.
I, Dr. Ken Catchpole, author of this article, joined the project team, and the two ICU doctors and I were invited to the Ferrari headquarters in Maranello, Italy to discuss pit stops with the race technical director. We showed him a video of our process and discussed at great length how Ferrari achieved the performance levels in pit stops that we sought. Upon return to the UK, we were also able to obtain the views of two British Airways pilots on approaches to structuring teamwork and communications.
Earlier, a Failure Modes and Effects Analysis had been conducted to understand where the biggest risks in the process might lie. After deliberating at some length over the lessons learned and how we might translate them into the highly technical tasks of ICU handovers, we eventually derived a process that included the entire range of elements that we had learned.
I was talking to someone the other day and casually said something like "Like that time the ICU learned from the F1 team", to be met with a blank stare. I realised that not everybody has heard this story, so I dug out this old but great account of a team learning heavily from a totally unrelated industry and then applying their findings, including how they evaluated those changes, determined whether they were helping, and convinced people to adopt the new process.
We think that the skills inherent in dance and parkour, like agility, balance, and perception, are fundamental to a wide variety of robot applications. Maybe more importantly, finding that intersection between building a new robot capability and having fun has been Boston Dynamics’ recipe for robotics—it’s a great way to advance.
One good example: when you push limits by asking your robots to do these dynamic motions over a period of several days, you learn a lot about the robustness of your hardware. Spot, through its productization, has become incredibly robust, and required almost no maintenance—it could just dance all day long once you taught it to. And the reason it's so robust today is because of all those lessons we learned from previous things that may have just seemed weird and fun. You've got to go into uncharted territory to even know what you don't know.
https://www.youtube.com/watch?v=fn3KWM1kuAw is a lovely video showing the Boston Dynamics robots dancing in time to music. Obviously, it's all choreographed, and carefully synchronised, and a blatant marketing move by a robotics company wanting to sell its robots (and potentially make them "fluffier" and more palatable to normal people).
But this insight in the interview was interesting to me. You can't always just have a vision for where you want to go. We talk a lot about self-organising teams, and pushing decision making down to the team, but to do that they all need a vision, and you need to get that vision from somewhere. There's also a mode of exploratory thinking that requires you to try things you haven't tried before, because that exploration helps you test your vision, and sense the environment and context around you.
As the use of HTTPS continues to increase across the Web, we need more support from the Certificate Authorities that issue the certificates to make it all work. I'm a huge fan of Let's Encrypt and what they're doing, but if we want to encrypt the entire Web, we can't depend on a single organisation to help us do that. That's why I'm happy to announce another free CA to help us get there!
Existing Options

Of course, Let's Encrypt is my primary recommendation when anyone asks me about a CA. They're free to use, simple and reliable. Something else I always tell everyone, though, especially in our TLS/PKI Training, is that you should have a backup CA. Your certificate makes your website work, and if your certificate stops working, your website stops working! There are many reasons a certificate can stop working, the usual one being expiration, but the fact remains you need a new one. Now, if Let's Encrypt is having a bad day and you can't get a certificate from them for whatever reason, you have a problem. This is why a backup CA is so important: we must have other options.
I've previously spoken about two other CAs that offer free certificates via an ACME API, Buypass and ZeroSSL. You can see the blog posts about each of those two CAs linked there, but today I'm focusing on another option we now have.
SSL.com

We can now bring the total number of CAs that you can use quickly, easily and for free up to four!
This is important because, like Scott, while I love Let's Encrypt, having more CAs provide certificates through the ACME protocol not only gives us redundancy, but also removes the critical security reliance on a single provider being competent, capable and unhackable.
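To make the backup-CA idea concrete, here's a minimal Python sketch of a failover strategy across the four free ACME CAs mentioned. The `issue_from` callable is a hypothetical stand-in for whatever your real ACME client library does; the directory URLs are each CA's published ACME endpoint, but verify them against the CAs' own documentation before relying on them.

```python
# Hypothetical sketch: try each free ACME CA in order until one issues a cert.
# The directory URLs are the CAs' published ACME v2 endpoints (verify before use).
ACME_DIRECTORIES = [
    "https://acme-v02.api.letsencrypt.org/directory",  # Let's Encrypt
    "https://api.buypass.com/acme/directory",          # Buypass
    "https://acme.zerossl.com/v2/DV90",                # ZeroSSL
    "https://acme.ssl.com/sslcom-dv-ecc",              # SSL.com
]

def issue_with_fallback(issue_from, directories=ACME_DIRECTORIES):
    """Return the first certificate a CA will issue; fall through on failure.

    issue_from(directory_url) is an assumed hook into your ACME client;
    it should return the issued certificate or raise on failure.
    """
    errors = []
    for url in directories:
        try:
            return issue_from(url)
        except Exception as exc:
            errors.append((url, exc))  # remember why this CA failed, keep going
    raise RuntimeError(f"All CAs failed: {errors}")
```

In practice most ACME clients let you achieve the same thing with a configuration flag pointing at an alternative directory URL, so the real value of the sketch is the list of endpoints and the habit of having a second one ready.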
I have been watching multiple threat actors, including groups operating from US internet service providers, again deploying methods similar to Hafnium back in January-March.
The Exchange patches from April and May 2021 cover the ProxyShell vulnerabilities, however Microsoft’s messaging of this has been knowingly awful.
Microsoft decided to downplay the importance of the patches and treat them as a standard monthly Exchange patch, which have been going on for — obviously — decades. You may remember how much negative publicity March’s Exchange patches caused Microsoft, with headlines such as “Microsoft emails hacked”.
Why these vulnerabilities matter
However, the vulnerabilities in question are extremely serious, and were reported by the same researcher who reported ProxyLogon — aka March's Exchange vulnerabilities.
They are pre-authentication (no password required) remote code execution vulnerabilities, which is as serious as it gets. Additionally, during the ProxyLogon attacks in January-March, attackers needed to know an Exchange administrator's mailbox, which was hardcoded to administrator@ in proof-of-concept code. This mailbox only existed if you installed Exchange as that account and accessed email with it, which is a minority situation — therefore most orgs got away with it.
However, with ProxyShell this does not apply — you do not need to know the identity of an Exchange administrator in advance.
Kevin takes exception to the way Microsoft has managed this, with some good evidence I'll say, and in particular, it's clear that Microsoft's security strategy is to continually strengthen and secure Microsoft 365, and that they care far less about whether or not you patch your on-premises instances.
Regardless of how you feel about this strategy, it just adds to my constant refrain: stop running your own mail servers and pay someone competent to do it for you.
We break a ransomware incident into three phases:
- Initial access
- Consolidation and preparation
- Impact on target
In each phase different attackers use different tools and techniques, but the goals of each attacker remain the same. By understanding the goal of the attacker, we can refine our defences to make it harder for them to achieve it, regardless of which tools or ransomware variant you're dealing with. It's important not to rely on a single security control; take a defence-in-depth approach instead. An incident response plan should be prepared ahead of time, taking into consideration how you would respond to and recover from a ransomware incident.
To help prepare your defences we've mapped our critical controls onto the various stages of the lifecycle of a ransomware incident. This will help you identify potential gaps in your defences and make sure the implementation of the critical controls gives you the best chance of stopping a ransomware attack before it's too late.
We have created a visual representation of the lifecycle of a ransomware attack focusing on the most common pathways we see in reported ransomware incidents. It's not an exhaustive description of every possible pathway, but it serves as a start for you to plan your defences.
This is a lovely resource. Note that even my all-time favourite control, MFA, only protects against the path of initial access that comes via Internet-exposed services. If you are really serious, you are also going to have to look at end user device controls such as disabling macros, application whitelisting, and one that's not on the map: application sandboxing.
The Domain Name System or DNS is a never-ending source of amusement and amazement. If you have been dealing with just about anything related to operations on the internet, you know that it's always the DNS in the end, what with its almost 100 different resource records and, uhm, shall we say, "interesting" security threat model.
But today, let's talk about Top-Level Domains, or TLDs. You know, .com, .org, .net, .gov, .vermögensberatung and .香港 - those guys. As you know, the entire domain name space consists of a tree of domain names; the (common) root of the DNS tree is . (dot), and the tree sub-divides into zones consisting of domains and sub-domains:
A fascinating read into how DNS is organised and coordinating around the world.
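The tree structure the excerpt describes can be illustrated with a few lines of Python. This toy `zone_chain` helper (purely illustrative, not part of any DNS library) walks a fully-qualified name from the root down, yielding every enclosing name along the path; whether a given node actually starts a separate zone depends on delegation, which the sketch doesn't model.

```python
def zone_chain(fqdn: str) -> list[str]:
    """Return every enclosing name from the root down to the full name.

    e.g. "www.example.org" -> [".", "org.", "example.org.", "www.example.org."]
    Each entry is a node in the DNS namespace tree; whether a node begins a
    separate zone depends on where the parent delegates via NS records,
    which this toy function knows nothing about.
    """
    labels = fqdn.rstrip(".").split(".")
    chain = ["."]  # the (common) root of the tree
    name = ""
    for label in reversed(labels):  # build right-to-left: TLD first
        name = f"{label}.{name}" if name else f"{label}."
        chain.append(name)
    return chain
```

Running it on `www.example.org` yields the root, the `.org` TLD, the `example.org` domain, and the full name, which is exactly the root-to-leaf path a resolver follows when it walks the tree.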
On July 19, 2021 I discovered a terrorist watchlist containing 1.9 million records online without a password or any other authentication required to access it. [...] Each record in the watchlist contained some or all of the following info:

- Full name
- TSC watchlist ID
- Citizenship
- Gender
- Date of birth
- Passport number
- Country of issuance
- No-fly indicator
This is very much "if you don't laugh, you will cry" territory.