I believe that we fail to learn well enough from the past.
I was in a conversation last week where I was explaining how I wanted to build community and think about how information is kept and managed, and one of my very smart coworkers responded with links to 15-year-old academic papers on building communities and collaboration, and a frustrated retort of "How can you go ahead with this if you haven't read any of this stuff?" [Once I've read and digested them properly, that'll be the focus of an upcoming Cyber Weekly.]
We, in technology, have a tendency to think that we are always the first people doing the things we are doing. We might be, in our own area, but in many cases we are treading paths already worn by others in other industries, and we're not very good at learning from them.
On the other hand, we also have a tendency to focus on fighting the battles in front of us, battles that form part of a war we may well have already lost. I gave a talk this week about security and technology and used the Maginot Line as an example of this. France built the Maginot Line to prevent a repeat of World War I, where infantry advanced slowly and trench warfare was endemic. The fortresses that formed the line were amazing, and would have been indestructible to an oncoming force in 1939, had the Germans not invested in mechanised transport, light tanks, and fast-moving units, which were able to simply go around the defensive line. The French had focused on winning the last war, whereas the Germans had focused on fighting the next one.
Those who don't know history are doomed to repeat the mistakes of the past, but those who don't lift their eyes from the ground will never see what's coming over the horizon.
Last month Nova Scotians got official-looking letters notifying them of wolves being released. Soon loudspeaker truck(s) began driving around playing howls
This is a thread and a half.
A training program that got out of hand and had an impact on the wider community. That suggests a program that was not ethically managed or run. If you are running training simulations, whether phishing exercises, red team engagements, or game days, your staff should be able to tell that they are in a simulation, and you should be able to intervene to stop it spilling over into the wider community.
In the mid-1990s, Bristol, England, saw very high mortality for surgery in congenital heart disease, followed by a contentious public inquiry. One of the important findings of a subsequent study was that the journey from the operating room to the intensive care unit (ICU) was high risk.
In Formula One motor racing, the pit stop team completes the complex task of changing tires and fueling the car in about seven seconds. The doctors saw this as analogous to the team effort of surgeons, anesthetist, and ICU staff to transfer the patient, equipment, and information safely and quickly from operating room to ICU.
GOSH doctors visited and observed the pit crew handoff in Italy, noting the value of process mapping, process description, and trying to work out what people’s tasks should be. Following their trip, the GOSH team videotaped the handover in the surgery unit and sent it to be reviewed by the Formula One team. From the analysis came a new handover protocol with more sophisticated procedures and better choreographed teamwork.
The real gain for patients was safety. Results showed that the new handover procedure had broken a link between technical and informational errors. Before the new handover protocol, approximately 30 percent of patient errors occurred in both equipment and information. Afterward, only 10 percent occurred in both areas.
There is real value in looking at processes with a fresh pair of eyes. In this wonderful case study, the teams recognised that their existing process had dangers, so they were receptive to change. They looked outside their technical domain to find analogous processes and applied the same process improvements. This resulted not just in a marked increase in efficiency, but also a reduction in errors.
To reduce friction and build alignment I’ve come up with 4 metrics that Enabling Teams can adopt and track to measure the quality of service they are providing to their users (the delivery teams):
- Cycle time — e.g. the time between a request for a new procurement through to the people (or thing) arriving in the delivery team.
- User satisfaction — How satisfied the delivery team is with the service offered by the Enabling Team.
- Effort — the amount of effort incurred by the user of the Enabling Team service.
- Quality — the quality of the people (or thing) that the process has produced e.g. the quality of a supplier that has been procured or the quality of the space provided by an Estates team as measured by the delivery team (the user).
I'm a big fan of Mark's work, and these metrics for enabling teams are well laid out. The first two are simple and measurable, making them ideal OKRs for enabling teams.
Effort is a tricky one to measure. You could just measure it in time taken, but if your users have got used to a bad process, they can sometimes complete it faster than they would an easier one. What you want to capture is not just the time taken to complete the user's end of the process, but some measure of the thought or attention needed as well. Proxy metrics such as mistakes, clarifications, and similarity of requests might give you a sense of this.
Quality is of course the hardest to measure for an enabling team, but it is so critical. I have no good way to measure it independently for the team; it may be something that has to be assessed qualitatively, through the feeling for quality from the users and sponsors.
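To make the first of those metrics concrete, here's a minimal sketch of how an enabling team might compute cycle time from a request log. The log, the field names, and the dates are all hypothetical, just to illustrate the calculation:

```python
from datetime import datetime
from statistics import median

# Hypothetical request log for an enabling team's procurement service:
# when the delivery team raised the request, and when it was fulfilled.
requests = [
    {"opened": "2020-09-01", "fulfilled": "2020-09-15"},
    {"opened": "2020-09-03", "fulfilled": "2020-10-01"},
    {"opened": "2020-09-10", "fulfilled": "2020-09-20"},
]

def cycle_time_days(entry):
    """Days from request to fulfilment -- the cycle time metric."""
    opened = datetime.fromisoformat(entry["opened"])
    fulfilled = datetime.fromisoformat(entry["fulfilled"])
    return (fulfilled - opened).days

times = [cycle_time_days(r) for r in requests]
print(f"median cycle time: {median(times)} days")  # 14 days on this data
print(f"worst cycle time:  {max(times)} days")     # 28 days on this data
```

Tracking the median alongside the worst case stops a few fast requests from hiding a long tail of slow ones.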
For now, we simply want to frame the reboot with some issues and paradoxes. They’re similar to those that inspired us to launch the site way back in 1997, which says perhaps that we made no difference (likely) or that we're tracking some pretty big problems (more likely).
Issues: People like to talk about “systems” but they rarely stop to analyze their risks from a systemic perspective.
Decision makers at all levels still struggle to contextualize complex problems and consider possibilities that challenge their prevailing mindsets.
As important as the technical aspects of our challenges are, we tend to overlook the operational and strategic issues that inform our day-to-day security decisions.
We still build and link systems without much thought to their ultimate vulnerability.
Adversaries still find ways to beat us at a fraction of the cost of our defenses.
(And yes, red teaming still looks a lot like pentesting.)
The Red Team Journal is worth adding to your regular reading for a fairly thoughtful look at red teaming and how it fits into the wider security and organisational picture.
The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect.
"This 'keyword warrant' evades the Fourth Amendment checks on police surveillance," said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. "When a court authorizes a data dump of every person who searched for a specific term or address, it's likely unconstitutional."
The keyword warrants are similar to geofence warrants, in which police make requests to Google for data on all devices logged in at a specific area and time. Google received 15 times more geofence warrant requests in 2018 compared with 2017, and five times more in 2019 than 2018. The rise in reverse requests from police has troubled Google staffers, according to internal emails.
I'm not sure how troubled I am by this. I can see the US arguments against it: a warrant for unnecessarily broad search terms would create a dragnet across the entire population. But my gut feel is that in this case, where the search term is more or less unique, there's much less of an overly broad request to object to.
This is about investigators who have reasonable suspicion that a crime has been committed, seeking evidence both to cast suspicion on people and to eliminate people from the investigation. In the case of a geofence warrant, proving that your phone was not at a given location when a crime was committed can work in your favour, just as much as it can help build a case that the police still have to argue beyond reasonable doubt.
I’m going to geek out on this thread more than normal b/c new Science paper is WTF bananas awesome, and I don’t want it to get lost in the news cycle. In criminal justice policy, there’s a bad habit of solving problems by increasing punishment.
The underlying piece of behavioural science proves what many in the digital transformation space have been saying on gut instinct for some time: good, well-designed systems can massively reduce systemic failures, and generate both social and economic benefits by reducing waste.
Hacking into Android in 32 seconds
Samsung S7 is connected to Pixel as HID device (keyboard) that tries to brute force lock screen PIN and then download, install and launch Metasploit payload
While it's a cool demo, and it raises an interesting point about whether a locked computer should accept a newly attached USB input device at all, it has one big flaw that the demonstrator did not admit to.
They disabled the lockout delay system. On a normal Android phone, after 5 failed attempts there's a 5-second pause per attempt, which ratchets up to 30 seconds per attempt fairly quickly. This demo would be far less cool if the brute force took days, or years for a longer PIN, which it will under almost all real conditions.
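As a back-of-the-envelope check on why those delays matter, assume a flat 30 seconds per attempt once the ratchet has kicked in (an approximation; real Android delay schedules vary by version):

```python
# Worst-case time to brute force a numeric PIN, assuming the lockout
# delay has ratcheted up to a flat 30 seconds per attempt.
# Approximation only: real Android delay schedules vary by version.

SECONDS_PER_ATTEMPT = 30
SECONDS_PER_DAY = 86_400

def worst_case_days(pin_digits):
    attempts = 10 ** pin_digits  # every possible PIN of that length
    return attempts * SECONDS_PER_ATTEMPT / SECONDS_PER_DAY

print(f"4-digit PIN: ~{worst_case_days(4):.1f} days")  # ~3.5 days
print(f"6-digit PIN: ~{worst_case_days(6):.0f} days")  # ~347 days
```

Even the weakest case goes from the demo's 32 seconds to days, and a 6-digit PIN pushes the worst case towards a year.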
That's the choice successful startup founders are faced with. Build something good, and the buyout offers start rolling in. But while selling out in most other fields of creative endeavor is frowned upon, it's a given on the Web.
Maybe it shouldn't be. For every YouTube, there are horror stories of great people with great products, squandered in the yawning maws of uncaring corporate integration. Dodgeball gets lost in Mountain View. Beloved bookmarking services like Delicious become fields of information left fallow.
Some upstarts take an independent path. Consider Foursquare. Or Twitter. Or Facebook. Each spurned buyout offers, and none has ever been stronger. All managed to find a business model over time. Or even StumbleUpon, which only found its feet after its founder re-purchased his company from eBay and spun it off again as an indie.
It's no secret that for many entrepreneurs, the exit is always the goal. It's about the sellout before the first line of code is written. But for a select group, products are meant to be art. They are meant to literally change the world. And for those, selling out can be especially problematic.
This story, of a big corporation still trying to fight the last war rather than the next one, is characteristic of technology companies and many other companies besides.
Yahoo bought Flickr and didn't realise that some of its nascent features, around social networking, would go on to become the core of Web 2.0; or if it did realise, it couldn't turn to focus on them. It was focused instead on beating Google at the search game, and on the content it had already discovered.
How sure are you that you are fighting the next war, looking at the new and nascent things you are doing, rather than simply competing for second place in the old one?
The tech industry and its press have treated the rise of billion-scale social networks and ubiquitous smartphone apps as an unadulterated win for regular people, a triumph of usability and empowerment. They seldom talk about what we’ve lost along the way in this transition, and I find that younger folks may not even know how the web used to be.
So here’s a few glimpses of a web that’s mostly faded away
This is a slightly maudlin and occasionally rose-tinted view of the web that we lost (I say this as someone who spent months trying to wrestle the oEmbed, OAuth, and OpenSocial APIs into making sense).
The downside to much of the open web was the rise of the standard, and in particular the standard of standards: the XML Schema language, and the endless ways you had to describe the thing you were going to describe later.
But I do miss the great interconnected nature of it all, which I guess makes me old these days.