A long one this week, primarily because the US released the Worldwide Threat Assessment and the National Intelligence Strategy. This resulted in a lot of reading about various military systems and networks, which always fascinates me. I’ve tried to pick out some of the best and most relevant analysis, but I do recommend that interested people read the strategy and threat assessment themselves.
There is a worry for me that the US dominates “cyberspace”, and that jingoistic US thinking tends to result in an us-vs-them attitude in offensive teams. The amazing report from Reuters about the UAE offensive program is insightful not just for the program itself, but also for trying to reconcile its sources’ quotes with their mental model of the world. Keep that concept in mind as you read the security strategy, and think about what it might mean for the US’s allies and partners across the world.
Thank you to those who wrote to me after last week, telling me that you valued the wide selection of articles and the comment and analysis. It sounds like I’m hitting the right spots, and I won’t change anything in the near future (other than improving my automation for producing this newsletter). Do remember to forward it on to people who you think may enjoy it, and encourage them to sign up as well.
The strategic environment is changing rapidly, and the United States faces an increasingly complex and uncertain world in which threats are becoming ever more diverse and interconnected. While the IC remains focused on confronting a number of conventional challenges to U.S. national security posed by our adversaries, advances in technology are driving evolutionary and revolutionary change across multiple fronts. The IC will have to become more agile, innovative, and resilient to deal effectively with these threats and the ever more volatile world that shapes them. The increasingly complex, interconnected, and transnational nature of these threats also underscores the importance of continuing and advancing IC outreach and cooperation with international partners and allies.
There’s a lot in here, and I recommend a read if you have the time. This is a well-written strategy document that sets out a vision of where the US wants to go next. I feel like it is offensively weighted rather than defensively weighted, and outlines the US’s desire to maintain a monopoly on the global stage, but I think that’s somewhat to be expected.
Our adversaries and strategic competitors will increasingly use cyber capabilities—including cyber espionage, attack, and influence—to seek political, economic, and military advantage over the United States and its allies and partners. China, Russia, Iran, and North Korea increasingly use cyber operations to threaten both minds and machines in an expanding number of ways—to steal information, to influence our citizens, or to disrupt critical infrastructure.
At present, China and Russia pose the greatest espionage and cyber attack threats, but we anticipate that all our adversaries and strategic competitors will increasingly build and integrate cyber espionage, attack, and influence capabilities into their efforts to influence US policies and advance their own national security interests.
Want to know what is going on in the world? Want to know what capabilities we think nation states have and what they’ll do to the US? This is the document to read.
I note with interest that the only thing worrying the US from the United Kingdom is the impact of leaving the European Union.
“Nearly all information, communication networks, and systems will be at risk for years to come,” the 2019 National Intelligence Strategy reads. The strategy, which was released Jan. 22 by the Office of the Director of National Intelligence, is a four-year road map for the American intelligence community.
While the strategy touches on other topics, such as counter-terrorism and counter-proliferation, cybersecurity is listed as a top priority.
“As the cyber capabilities of our adversaries grow, they will pose increasing threats to U.S. security, including critical infrastructure, public health and safety, economic prosperity, and stability,” the document reads.
Well, that’s not terrifying at all, but it matches what a lot of us in cyber security see on a daily basis. The growing race between offensive and defensive cyber security is still in a place where offensive capabilities far outstrip our ability to defend, and will continue to do so for some time.
But the Space Drone tugs, built by London-based Effective Space, are capable of so much more. Under the right circumstances, they—and spacecraft like them—could become weapons.
The 800-pound, cube-shape Space Drone is essentially an orbital tugboat. The spacecraft features a docking system and a small motor. A Space Drone can maneuver close to an aging satellite that’s slowly falling back to Earth, attach to it, fire up its own motor, and shove the comms sat back into its proper place.
Instead of boosting old but still active satellites, the Space Drones could attach themselves to orbital debris—which poses a collision risk to active satellites—and direct the junk into Earth’s atmosphere to burn up. There’s no reason a Space Drone couldn’t do the same thing to an active satellite, in essence hijacking it and sending it falling to its destruction.
"There is of course a potential that certain regimes will find such technology abusively interesting," said Daniel Campbell, the managing director of Effective Space. He compared the Space Drone to autonomous cars, lasers, drugs and software 'bots, all technologies with the "potential to be abused."
And to think, all this time I’ve been worried about cyber security when it sounds far cooler to be worried about security in space.
There are some lovely attacks that are possible with the increased democratisation of space, and the commercial exploitation of it. But what worries me more is that these satellites are controlled by computers, and how likely is it that those computers are well protected?
I’m pretty sure that attribution in space is far easier than in cyberspace: if a Chinese satellite destroys a US satellite, everyone will know what happened. But if a hacked US satellite takes out a German satellite, that’s rather more worrying.
That was the case with the Mobility Air Force Planning System, which is mission planning software. The program, which started in 2012 using a traditional multiyear development approach, encountered the usual delivery risks and delays before the Air Force used the more streamlined DevOps approach. “We worked with the Air Mobility Command and our contractor to propose fielding version one into production despite some deficiencies in order to work directly with the end users and rapidly field fixes and improvements,” Wert reports. “In the last 10 months, we fielded four additional major versions of that software.”
Those 10 months included more than 500 fixes or improvements. During that same period, the software successfully processed over 39,000 flight plans and 3,800 sorties for Air Mobility Command while providing fuel savings through more efficient routing.
This is a good read, and I’ve heard about the Mobility Air Force Planning System before at the Code for America summit last year.
What’s interesting to me is that they’ve used the language of DevOps here to talk about Agile as I would describe it. What they are describing is, to me, a true implementation of agile, whereas whenever I’ve engaged with big programs (and the military has the biggest), agile has tended to mean “working the same way, but now we have coloured post-its, comfy discussion areas and a daily standup”.
“We don’t have an environment where cyber warriors can train alongside non-cyber fighters. This must be done in a synthetic environment, but that is not available today,” McArdle told Fifth Domain.
One reason for the training gap is because live training of a cyberattack can have “cascading effects” and cause lasting damage, McArdle said. It is considered a non-starter. Another is that different services have different training philosophies. McArdle gave the example of how the Air Force trains its members with a “platforms” based approach, like an F-35 simulator, while the Army has a “mission” focused training ideology.
But perhaps the most significant reason is that knowledge of what may happen to soldiers' equipment during a cyberattack is a closely held secret. Because of classification issues, those who understand how electronic warfare might interrupt a soldiers' gear may not be able to explain it in an unclassified setting to those creating a simulation.
The increasing gap between the military domains (Land, Air, Sea, Space and Cyber formally; I’d argue Information is a sixth domain) in terms of military doctrine and strategy is a fascinating one. The US military has divided the domains, and has military forces for each one, and naturally does blended operations, with land forces supported by air forces and so forth. But the physical domains all share a set of common principles and capabilities. It’s still impossible to physically move faster than the speed of light, and arguably difficult to move much faster than the speed of sound in a military context.
But cyber warfare can move at the speed of electrons, and the OODA loop for those operations is so much tighter that it feels like a completely different beast to the other domains.
Additionally, this little quip here, that those who understand how electronic warfare might interact with the physical domain may be unable to explain in an unclassified way what might happen, is a real problem across many cyber security domains. The complexity of cyber security technologies means that even the capabilities of attackers are considered classified, which makes it really hard to give remediation advice that defenders and builders can act on.
“We’ve about exhausted our ability to achieve some kind of deterrent model that works,” said Robert Johnston, the security expert who investigated the 2016 DNC breach, and now heads the financial cybersecurity firm Adlumin. “You have indictments. You have Cyber Command releasing Russian malware. We ran psyops inside of Russia saying, ‘We know what you’re up to, stop it.’ Sanctions and diplomatic measures. The combination of all those isn’t enough to make it come to a complete halt.”
This is interesting. I thought the indictments might act as a pretty serious deterrent, and that we might see a massive scaling back of activity from the main four cyber actors on the nation-state stage on this basis. But if the activity is continuing, as brazenly as reported, then it’s clear that public naming and shaming is not having an impact.
I expect to see this become a major talking point at the UN over the next year, with Governments in the west trying to work out how to apply some form of retributive punishment, without it coming back to bite them if their operations are discovered.
One to watch
While we continue to assess the impact on Federal infrastructure, we know enough to be concerned.
- We know an active attacker is targeting government organizations.
- Using techniques that aren’t especially innovative, we know they can intercept and manipulate legitimate traffic, make services unavailable or cause delay, harvest information like credentials or emails, or cause a range of other malicious activities.
- We know that this type of attack isn’t something many organizations monitor for or have tight controls around.
Because it’s our responsibility to take actions to protect Federal systems, we felt an urgent response was required to address the risk.
Informed by security researchers and in consultation with IT security teams across Federal civilian agencies, the Office of Management and Budget, and the National Institute of Standards and Technology, we’ve crafted a set of near-term mitigations that protect systems in a risk-informed, straightforward, and high impact manner. We’ve directed agencies to:
- Verify their DNS records to ensure they’re resolving as intended and not redirected elsewhere. This will help spot any active DNS hijacks.
- Update DNS account passwords. This will disrupt access to accounts an unauthorized actor might currently have.
- Add multi-factor authentication to the accounts that manage DNS records. This will also disrupt access, and harden accounts to prevent future attacks.
- Monitor Certificate Transparency logs for certificates issued that the agency did not request. This will help defenders notice if someone is attempting to impersonate them or spy on their users.
In several cases, the actions we’ve crafted are basic good practices anyway, and many agencies may have already taken the necessary mitigation steps.
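The first of those directed actions is easy to automate. Here’s a minimal sketch (the function names, hostname and addresses are my own illustrative placeholders, not anything from the directive) that resolves a domain and flags any answers outside a known-good set:

```python
import socket

def resolved_ipv4(hostname):
    """Return the set of IPv4 addresses a hostname currently resolves to."""
    return {info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}

def unexpected_addresses(actual, expected):
    """Addresses present in the live answer that aren't in our known-good set."""
    return set(actual) - set(expected)

def check_domain(hostname, expected):
    """True if every resolved address is expected; False suggests a possible hijack."""
    return not unexpected_addresses(resolved_ipv4(hostname), expected)

# Example (placeholder data): an answer containing an address we don't
# recognise is exactly the signal that warrants investigation.
# check_domain("www.example.gov", {"192.0.2.1", "192.0.2.2"})
```

In practice you would run something like this from several vantage points, and query your authoritative servers directly as well as public resolvers, since a hijack may only be visible through some resolution paths.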
These are good actions to take, and I think this was a good general summary. As I reported last week, the FireEye brief was a little peculiar, and they’ve clearly seen activity that indicates that certain state actors are actively carrying out this kind of attack. But I think these attacks are well within the capability of low-level actors, and at minimum 2FA on your DNS records is vital.
In the highly secretive, compartmentalized world of intelligence contracting, it isn’t unusual for recruiters to keep the mission and client from potential hires until they sign non-disclosure documents and go through a briefing process.
When Stroud was brought into the Villa for the first time, in May 2014, Raven management gave her two separate briefings, back-to-back.
In the first, known internally as the “Purple briefing,” she said she was told Raven would pursue a purely defensive mission, protecting the government of the UAE from hackers and other threats. Right after the briefing ended, she said she was told she had just received a cover story.
She then received the “Black briefing,” a copy of which was reviewed by Reuters. Raven is “the offensive, operational division of NESA and will never be acknowledged to the general public,” the Black memo says. NESA was the UAE’s version of the NSA.
Stroud would be part of Raven’s analysis and target-development shop, tasked with helping the government profile its enemies online, hack them and collect data. Those targets were provided by the client, NESA, now called the Signals Intelligence Agency.
The language and secrecy of the briefings closely mirrored her experience at the NSA, Stroud said, giving her a level of comfort.
Beautifully presented, this deep investigation is eye-opening for a number of reasons.
Firstly, it clearly outlines the operational doctrine of an intelligence agency as it gets to its feet, and how the UAE used US experience of running an existing intelligence agency to staff it up and build the capability.
But further, it highlights the mental gymnastics that US-based intelligence analysts go through. The American narrative, as originally exposed in the Snowden leaks, is very much that certain activities are acceptable providing they don’t target US citizens. That narrative holds true to the US’s isolationist tendency as well as its dogmatic belief that the only acceptable world order is a US-run world order, and that any alternative models for economic, domestic or international diplomatic outcomes are unacceptable because they are not American.
This leads to the Us and Them attitude of intelligence analysts who view the US as always the Good Guys, and everyone else as essentially shades of “Bad Guy”. The targeting of journalists in any country, of any nationality, should be anathema to people who believe in a free press, but instead it’s justified as acceptable providing they aren’t US citizen journalists.
The ethical lines for the use of cyber weapons to surveil and track people around the world will be murky for years to come, and we need to accept that those lines will be drawn nation by nation. But it’s foolish for US mercenaries who come out of the intelligence community and work for another government to think that the other nation will hold to the same lines that the US does.
In 2016 and 2017, Karma was used to obtain photos, emails, text messages and location information from targets’ iPhones. The technique also helped the hackers harvest saved passwords, which could be used for other intrusions.
It isn’t clear whether the Karma hack remains in use. The former operatives said that by the end of 2017, security updates to Apple Inc’s iPhone software had made Karma far less effective.
Lori Stroud, a former Raven operative who also previously worked at the U.S. National Security Agency, told Reuters of the excitement when Karma was introduced in 2016. “It was like, ‘We have this great new exploit that we just bought. Get us a huge list of targets that have iPhones now,’” she said. “It was like Christmas.”
As well as the Reuters story about Raven itself, this is an interesting read about how discovering or buying a major vulnerability in a common platform is the primary goal for most nation states’ offensive cyber teams. The fact that they were able to use the vulnerability for two years before it was noticed and patched is but one data point, but it also reminds us that patching our phones, devices and infrastructure is absolutely critical.
As I’ve said time and time again, having a patch management program that can deliver patches on a timescale measured in hours absolutely must be your number one priority in a security program. All of the other things, while important and valuable, are meaningless without modern, patched infrastructure.
Googlers had lost access to employee-only iOS versions of their pre-launch test apps like YouTube, Gmail, and Calendar, as well as their food and shuttle apps, causing a massive loss of productivity that will surely make Google more careful about abiding by Apple’s policies.
TechCrunch reported Wednesday that Google was using an Apple-issued certificate that allows the company to create and build internal apps for its staff for one of its consumer-facing apps, called Screenwise Meter, in violation of Apple’s rules.
I wonder how much of this we’ll see over the next few months. Apple provides some pretty clear guidance for these certificates, and this is very clear misuse by large companies that should know better. The Enterprise signing certificate is supposed to be rolled out by your Mobile Device Management (MDM) solution, to allow you to manage installing updates to devices and to roll out mobile apps to your staff that aren’t available to the public (and therefore not publicly signed).
There have been a number of systems for rolling out “beta” applications to beta testers, where you want to iterate the application far faster than the App Store policies will allow, but generally they don’t use the enterprise certificate.
The decision by Google and Facebook to use the enterprise certificate here is probably just one of laziness: they had a certificate in place already, and the person managing the roll out program didn’t know that they weren’t supposed to use it.
This activity was typically only within reach of intelligence agencies or surveillance contractors, but now Motherboard has confirmed that this capability is much more widely available in the hands of financially-driven cybercriminal groups, who are using it to empty bank accounts. So-called SS7 attacks against banks are, although still relatively rare, much more prevalent than previously reported. Motherboard has identified a specific bank—the UK's Metro Bank—that fell victim to such an attack.
The news highlights the gaping holes in the world’s telecommunications infrastructure that the telco industry has known about for years despite ongoing attacks from criminals. The National Cyber Security Centre (NCSC), the defensive arm of the UK’s signals intelligence agency GCHQ, confirmed that SS7 is being used to intercept codes used for banking.
"We are aware of a known telecommunications vulnerability being exploited to target bank accounts by intercepting SMS text messages used as 2-Factor Authentication (2FA)," the NCSC told Motherboard in a statement.
I’m really pleased with this advice from NCSC.
Reporting on SS7 tends to fall into “the sky is falling” territory, and yes, it’s possible to exploit all kinds of issues with SS7 networks, and a replacement would be lovely. But it’s not large scale, and for a bank it might fall well within their fraud tolerance: they can afford to lose some amount to SS7 fraud, providing it stops them losing more money to fraud conducted without SMS 2FA. I’d also anticipate this starting to build pressure on the telcos to reduce the impact or likelihood of such attacks, or on banks to start to find publicly acceptable alternatives to SMS 2FA.
Right now, if you are building a digital service, having SMS 2FA is miles better than having no 2FA at all, and it’s incredibly hard to find another form of 2FA that is globally accessible by all users. Almost all of the alternatives just don’t have the uptake or market penetration to be acceptable to general consumers.
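For context on the main alternative: TOTP (RFC 6238), the scheme behind most authenticator apps, generates codes locally on the device, so there’s no SMS to intercept. The whole algorithm fits in a few lines of standard-library Python; this is an illustrative sketch, not production code (no rate limiting, clock-skew windows, or replay protection):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter,
    dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The RFC’s Appendix B test vectors (ASCII secret “12345678901234567890”, i.e. base32 `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`) give `94287082` at T=59 with 8 digits, which this sketch reproduces. Simplicity isn’t the barrier to adoption, though; getting general consumers to install and enrol an authenticator app is.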
But suppose we have good math. Suppose that we have data about past crime rates that does not incorporate biased policing. Then would Saavedra be right to be so derisive? Is it absurd to think that algorithmic decision-making driven by bias-free math could be racist?
No, it is not.
Even with our clean data, it is likely that algorithms would recommend that certain neighborhoods with predominantly socioeconomically disadvantaged populations from historically marginalized racial groups face increased and targeted policing. This would indeed be a form of “bias-free” or “rational” profiling. But it would in no way take historic racism out of the equation.
The thought would be simple: There’s more crime here, so there should be more policing here. Nothing biased about that. One might argue such law enforcement decisions might be based on hard data; the algorithm merely presents us with the rational course of action. The problem is that this would conflate seemingly rational decisions with just decisions. In the case of algorithmic decision-making, a seemingly rational decision can easily be an unjust one.
There was a lot online about AOC’s comments on the bias of algorithms (and yes, I agree that she’s right, as outlined above). But this thought about the difference between rational, unbiased and just is interesting.
The data might tell us about what actually happens in the world, and the world is not a just or unbiased place. But this just highlights the importance of good decision making, that has not just a simple set of outcomes (reduce crime in high crime areas), but actually has empathy for users, and has an understanding of when taking a rational decision might result in a worse outcome.
The argument that AI might make those bad decisions because it is “unbiased” just highlights, to me, that when we are writing policy we need to be thinking of these things. Racially motivated policing decisions over the last few decades have shown that humans are just as bad at these decisions, especially where policy makers lack empathy, normally because they have no contact with the operational arms who implement the decision, or with the users impacted by it.
AI is simply automating what already happens, and I don’t think we actually have a good grip on writing good, unbiased policies as humans, so we should avoid automating those decisions until we can be confident about what we want to achieve from them.
In the UI it will look like the other person has joined the group chat, but on their actual device it will still be ringing on the Lock screen.
The damage potential here is real. You can listen in to soundbites of any iPhone user’s ongoing conversation without them ever knowing that you could hear them. Until Apple fixes the bug, it’s not clear how to defend yourself against this attack either aside from disabling FaceTime altogether.
This was a pretty serious bug, and there were some interesting conversations online about how one could have detected this in advance: what kind of exploratory testing could have guaranteed that it didn’t happen?
My biggest disappointment was with the general infosec advice on this bug, because it looked like organisations’ advice-giving systems ran far slower than their incoming news. Apple responded very well here, and within 24 hours had completely disabled Group FaceTime functionality, meaning that this bug could not be exploited. However, the advice given out for days afterwards was to go to your phone and disable FaceTime in its entirety. That is potentially good advice if you don’t use FaceTime at all, but given that Apple had already prevented the primary attack, it felt like redundant advice in many cases.
Two sides of service design
Let’s suggest the role of a service designer is twofold:
- Figuring out the future vision and where we want to get to
- Making it real — doing whatever it takes to create (sustainable) change
This epitomises the problem that I’ve always had with Tom Loosemore’s wonderfully quotable direction to Government back in the formation of GDS.
“The Strategy is Delivery” sounds very wise, and sensible, and given to organisations that have hundreds of people employed in “strategy” departments but cannot deliver, is probably the right advice. But too many organisations took it to mean that the second facet of service design was the only thing that mattered.
Choosing what to deliver is just as important as delivering it well. A bad concept that is brilliantly executed is still a bad concept.
I still believe that delivering is 90% of the problem for many organisations, and it doesn’t matter how many strategy meetings you have about what the vision is and how to get there; if you can’t actually deliver then you are wasting your time. But let’s not throw the baby out with the bathwater: having a clear vision, and articulating the steps that will get us there, is important for an organisation as well.
With Super Bowl LIII only days away, Epic Games is ensuring that Fortnite players will be able to partake in some of the festivities. As reported by The Verge, the game's creators have booked EDM artist and producer Marshmello to perform an in-game concert, which will be hosted directly on the football field in Pleasant Park.
While his appearance has yet to be confirmed publicly by Epic Games, the popular DJ has already added Pleasant Park as one of his upcoming tour dates on his official website. Also, if you visit the football field in question, you'll see that the preparation work for a music event is already underway.
This fascinates me. I’m really not very good at Fortnite; it turns out that my first person shooter skills have somehow deteriorated from when I was 16 and used to dominate at Quake and Unreal Tournament.
But that aside, Fortnite has been doing something pretty impressive with building a coherent universe. For those who don’t know Fortnite, it’s a pretty simple shooter that is designed for fast, constant matches. The basic gameplay is 100-to-1: 100 players start by flying over the island, and it’s everyone versus everyone until just a single person remains. If you die, you can join another match within seconds, which makes for fast, fun gameplay.
But the environment is something else. The island is the same for all players, but it constantly changes. Epic Games have been telling stories through Fortnite that everyone logged in at the same time can participate in globally. The ending of the last “season” saw a massive global change to the island, and if you were logged in and playing a match at that time, you literally saw lightning striking bits of the ground and changing the terrain.
Which brings me back to this story. This is a digital world; it can be changed in seconds by computers without the slightest thought. But if you go and play today, you’ll see that the football field has a stage that is mid-construction. And it wouldn’t surprise me to find that the construction changes every few hours to indicate the setup for the concert, in a world that can be changed in the blink of an eye and needs no construction crew.