A lot of cybersecurity and digital maturity models tend to assume that high-performing teams are what economists call "rational actors": that people will follow procedures, that they welcome rules, and that they won't make decisions that would cause them personal harm, or, secondarily, long-term harm.
But in reality, we are all just humans trying to manage the increasingly complex worlds around us. We make mistakes, and we often make suboptimal decisions for a variety of reasons. When we assume that organisations that reach a higher level of maturity will move away from procedures and policies towards enlightened self-determinism (a philosophy I subscribe to), we need to be careful to understand that these systems are much more vulnerable to bad actors, to poor decisions and to selfish, short-termist actions. Some people would argue that procedures and policies protect us from those people, but such claims tend to be hyperbolic (just watch a red team in action to see how effective those policies actually are), and they restrict the best and brightest from achieving results with the latest tools.
From the managers who mentally keep track of procedures and policies but never communicate them, to people making suboptimal decisions because they are drunk or on Reddit, we see countless pieces of evidence that these systems don't really achieve what they set out to. Yet our idealised view of a "Secure Development Lifecycle" or of change management persists in believing that we can bring order to the chaos that is humanity.
Speaking of which, apologies for the delay last week. After much to-ing and fro-ing with TinyLetter, it turns out that linking to a domain related to gambling triggered all of the anti-spam measures at their end, and it took a long time for someone to work out what had happened or why. On the plus side, this was the first newsletter in six months to be more than a day delayed, so I'm pretty happy with that innings. Things should be back to normal from now on.
That small, almost imperceptible bump on my left hand was a constant reminder that even the most sophisticated and fool-proof technologies are no match for human incompetence.
If you invent a fool-proof technology, the universe will simply create a better fool.
"Rogue Raspberry Pi found in network closet. Need your help to find out what it does," geek_at claimed along with a few pictures of the device – a Raspberry Pi with an unidentified USB dongle stuck into it.
The post included some intriguing details: the network closet it was found in is always locked, requires a key, and very few people have the keys. The Linux-powered Pi was trying to connect to a nearby wireless network. It included Docker containers that were updated every 10 hours. And it connected via a VPN to the Balena platform – which is typically used for large internet-of-things systems.
Love a good story, and this is a fun one.
One of the things to take from this is the reminder that there are a lot of enthusiastic amateur cybersecurity experts out there, and that sometimes diagnosing and dealing with an incident just needs someone to follow the plan and execute it.
And don’t call the FBI if you aren’t in the US!
This 24 bit key changed daily. If the president ordered a nuclear attack, “the launch control officer would open his safe and pull out a binder with information about what his key should be for each day,” Zimmermann said. “He would compare it and, if it matches, he would proceed.”
The Pentagon didn’t exactly live up to the spirit of the anti-tamper measures. Until 1977, military leaders interceded and set the complicated PAL codes to eight zeroes across all systems. Nuclear launches have gotten much more complicated since the early days of the Cold War and permissive action link.
This little nugget in the middle of a story about how people are reverse-engineering the “cryptography” in Fallout 76 is quite interesting.
The cryptographic routine that Fallout 76 uses, however, sounds like the first substitution cypher I broke as a kid in the “Usborne Official Spy’s Handbook”, my favourite book when I was 7!
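For readers who never had the spy handbook: a monoalphabetic substitution cypher just maps each letter of the alphabet to another letter, the same way every time. Here’s a minimal, illustrative sketch in Python – the reversed-alphabet key (the classic “Atbash” scheme) is my own choice of example, not anything from Fallout 76 or the book:

```python
import string

# A toy monoalphabetic substitution cypher. The reversed alphabet
# ("Atbash") is just one possible key; any shuffled alphabet works.
PLAIN = string.ascii_lowercase
KEY = PLAIN[::-1]  # 'zyxwvutsrqponmlkjihgfedcba'

ENCODE = str.maketrans(PLAIN, KEY)
DECODE = str.maketrans(KEY, PLAIN)

def encrypt(text: str) -> str:
    """Substitute each lowercase letter via the key; leave everything else alone."""
    return text.lower().translate(ENCODE)

def decrypt(text: str) -> str:
    """Apply the inverse mapping to recover the plaintext."""
    return text.lower().translate(DECODE)

secret = encrypt("meet at noon")
print(secret)           # nvvg zg mllm
print(decrypt(secret))  # meet at noon
```

Because each plaintext letter always becomes the same ciphertext letter, letter-frequency analysis breaks schemes like this in minutes – which is exactly why a seven-year-old with a pencil can do it, and why it’s an odd thing to find doing duty in a modern game.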
This is a great resource for teams and I’m surprised it’s taken this long. I’d love to see this in every organisation not just for user research but for documenting the findings of discoveries.
We still tend to run a discovery with an intent to progress into a full service. I wonder how differently we’d run them if the point was purely to fill a library of “discovered problems or user needs” instead?
It is easy to work out how to deceive foreign publics, but far, far harder to know how to protect our own. Whether it is Russia’s involvement in the US elections, over Brexit, during the novichok poisoning or the dozens of other instances that we already know about, the cases are piling up. In information warfare, offence beats defence almost by design. It’s far easier to put out lies than convince everyone that they’re lies. Disinformation is cheap; debunking it is expensive and difficult.
Even worse, this kind of warfare benefits authoritarian states more than liberal democratic ones. For states and militaries, manipulating the internet is trivially cheap and easy to do. The limiting factor isn’t technical, it’s legal. And whatever the overreaches of Western intelligence, they still do operate in legal environments that tend to more greatly constrain where, and how widely, information warfare can be deployed. China and Russia have no such legal hindrances.
This was an interesting if slightly overhyped read. If you took all the claims in this article to be factually accurate, you’d believe that information itself had been effectively weaponised for the last decade or more.
But success in individual cases does not mean general success, and I don’t think any disinformation campaign has managed to sway an entire population towards a desired opinion.
Even the Russian influence campaign on the US elections looks to me more like it emphasised natural divisions and pushed people towards the edges of the Overton window than like it influenced people on a specific topic in a specific direction.
But the stage is set, and the natural asymmetry of information warfare is going to cause us global problems in the next few decades, for sure.
The bill, produced by Russia's communications ministry, bars unauthorized people from creating and publishing databases of personal data drawn from official sources, and fines anyone violating that rule. It also requires that state agencies setting up systems for handling personal data consult with the Federal Security Service, Russia's main domestic intelligence agency.
I’m somewhat surprised that this wasn’t already the law!
His management style was to use his own extensive knowledge to decide what needed to be done, and to present to others how it should be done. As a consequence of this, adequate procedures to implement the high level requirements of the SMS had not been prepared. In particular, there were no procedures for the use, inspection and maintenance of much of the Permanent Way department equipment. Additionally, there was no comprehensive training programme for volunteer staff, and no structured approach to risk assessment or formal work planning.
Although this individual had retired from both posts 9½ months before the incident, the department continued to operate in the same manner with none of the persons fulfilling the roles raising any shortcomings with the ELR management; this is possibly due to their expectations and the lack of auditing and compliance (paragraph 61).
The fact that the previous Permanent Way department supervisor had been reporting to himself in his role as Civil Engineering director limited the opportunities for the shortcomings of his approach and methods to be identified and corrected.
I love reading incident reports of various forms, because you can see things that happen consistently across many accidents. As John Allspaw would say, "we should more often ask ourselves why failures didn't happen, than why they did", so don't take this as "if you don't have processes, accidents are guaranteed". But without good procedures, accidents are more possible.
This one was fascinating to me because the management of the safety procedures here is very reminiscent of security management in many organisations that I've worked in, and that I've done myself. A senior manager with dated experience, perfectly capable of maintaining everything while they are present, but with everything falling apart as soon as they leave? That sounds familiar!
Smart people sometimes devalue other skills, like relationship building, and over-concentrate on intellect. Very smart people sometimes see their success as inevitable because of their intellect, and don’t see other skills as important.
If you replace the pejorative "smart" with "technically capable", I think a lot of this still holds true. It doesn't reflect on our intelligence; it just reflects that any industry that defines success within a speciality tends to ignore everything outside that speciality. This creates an unbalanced and undiverse workforce, but it also creates a career-limiting ceiling: people reach a certain level by being wildly successful and are told repeatedly how good they are, then get stuck trying to reach the next level because they don't have the skills to operate at it.
We’ve attempted to codify a lot of what we’ve been hearing from various units into a framework around five[sic] high-level themes:
- Political environment
- Institutional capacity
- Delivery capability
- Skills and hiring
- User-centered design
- Cross-government platforms
This is a great report and digital teams should read it all.
It’s interesting to me that security doesn’t come up at any point in this entire report. The digitisation of government goes on regardless of the opinions of security people and I think that’s probably right. It shows how far most security teams are from engaging with digital work and how digital people still view security as primarily a blocker to what they want to achieve.
If I were to define security in that maturity matrix, it would be interesting to think about what high capability security teams actually look like, because I think they wouldn’t look like most of our security teams today.