While attempting to avoid an entirely pessimistic outlook, we are reflecting this week on just how hard the security, technology and privacy fundamentals still are in 2019. Whether it is keeping a really big platform online as one of the largest technology organisations in the world, or working with international standards to influence 'what good looks like', none of it is as simple as washing your hands.
Jon has the pleasure of contending for money/time budgets while Joel has the pure joy of suggesting things to spend/invest those budgets on, but even those choices are not as simple as they appear to be: how do you assess and prioritise what may seem like an infinite technology estate? How do you spend the right amounts of money for the widest and most beneficial security gains? How do you recruit talent when you're competing with banks and venture-capital-backed startups?
We promise it's not all doom and gloom - meaningful strides are made every day - but here is a reflection on how far we've come, the distance we have yet to go and a tired nod to the ever-moving goalposts.
More than 350 ethical hackers got together in cities across Australia on Friday for a hackathon in which they worked to “cyber trace a missing face”, in the first-ever standalone capture-the-flag (CtF) event devoted to finding missing persons. [...] During the six hours the competing teams hammered away at the task of searching for clues that could potentially solve 12 of the country’s most frustrating cold cases. 100 leads were generated every 10 minutes.
(Joel) Ending on a lighter note :-)
Capture the Flag events are nothing new in the hacker space or at relevant conferences (DEF CON, B-Sides, etc.); however, dedicating a CtF to finding missing persons is an excellent and powerful way of leveraging the speed and expertise of ethical hackers against the mountain of intensive work faced by law enforcement and missing persons charities.
In the face of all the bad things technology can enable (malicious artificial intelligence, blockchain, and so on) our faith in humanity can be restored through hundreds of people giving up their time and expertise to help people they will likely never meet.
Now that our cousins from 'down under' have set this example, maybe the UK's National Crime Agency and other cyber policing forces will run and maintain similarly inspired events.
On December 1, 2018, the Chinese Ministry of Public Security announced it will finally roll-out the full plan [of the Cybersecurity Law adopted in 2016]. [...] “It will cover every district, every ministry, every business and other institution, basically covering the whole society. It will also cover all targets that need [cybersecurity] protection, including all networks, information systems, cloud platforms, the internet of things, control systems, big data and mobile internet.”
This system will apply to foreign owned companies in China on the same basis as to all Chinese persons, entities or individuals. No information contained on any server located within China will be exempted from this full coverage program. No communication from or to China will be exempted. [...] This means intra-company VPN systems will no longer be authorized in China by anyone, including foreign companies.
(Joel) The technology and telecommunication laws in China don't pull any punches.
This overt, required level of access to communications presents an interesting challenge for foreign nationals and organisations working in the region, and further crystallises the Chinese state's capability to understand its citizens' digital lives.
While I suspect little will be made public, it will prove interesting to see how international companies such as Google, Apple and so on adapt their local presence to cater for (or mitigate) these requirements.
If you're intending to travel to a jurisdiction that you believe may impact your personal privacy there is a post I wrote a little while ago (could use some updating I suspect) that may be useful (supplement/supersede with advice from your employer).
This section is intentionally blank due to an author defined paywall (booooo!)
(Joel) If you can devise a method to read this post you might join me in being a little under-whelmed (whispers: clickbait)
Cloud vendor lock-in is real. It doesn't need to be high up on the agenda, but it should always be a conscious choice. The danger of naively treating proprietary platforms as de facto standards is that doing so actively contributes to the creation of closed standards under proprietary control (and should leave you looking over your shoulder for a price hike once you're all warm and cosy).
Amazon.com/.co.uk (as opposed to Amazon Web Services) is a great example of a private market space where Amazon is Monarch, Judge, Jury and Executioner. Amazon can change the rules at virtually any time and is authoritative on virtually all matters, to the point where specialists now exist just to help sellers deal with Amazon's highly automated appeals processes when livelihoods are snatched away in the blink of an eye.
I would challenge the author to migrate their AWS workloads to another "public cloud" provider to demonstrate how easy it is to express choice within these alleged de facto standards. (I don't think 'easy' will be part of it nor 'quick' or 'cheap'.)
To justify its opposition to encryption, the US government has, as is traditional, invoked the spectre of the web’s darkest forces. Without total access to the complete history of every person’s activity on Facebook, the government claims it would be unable to investigate terrorists, drug dealers, money launderers and the perpetrators of child abuse – bad actors who, in reality, prefer not to plan their crimes on public platforms, especially not on US-based ones that employ some of the most sophisticated automatic filters and reporting methods available.
The true explanation for why the US, UK and Australian governments want to do away with end-to-end encryption is less about public safety than it is about power: E2EE gives control to individuals and the devices they use to send, receive and encrypt communications, not to the companies and carriers that route them. This, then, would require government surveillance to become more targeted and methodical, rather than indiscriminate and universal.
(Joel) Edward Snowden is a name most subscribed to this mailing list will know.
His opinion piece in the Guardian in response to the governments from across the Five Eyes sending a letter to Facebook is best summarised as 'just that' - an opinion (and in my opinion, an incredibly emotive and polarised one).
Privacy is broad. It ranges from the messages/calls you send and receive, through to the doors on your home and the curtains on your bedroom windows. In order to reach a compromise between security and privacy (and it is a compromise - you cannot truly have both), technology platforms and vendors need to come to the table, and intelligence/law enforcement have to step into the light.
"There are no easy answers here. We all naturally want perfect privacy and perfect safety but those two things cannot coexist … you have to accept reasonable restrictions on both of them." - John Oliver
Many companies use dark pattern techniques to make it difficult to find how to delete your account. JustDelete.me aims to be a directory of urls to enable you to easily delete your account from web services.
(Joel) Just Delete Me is one of those things you wish didn't need to exist, but does. It's rather cute and I'm sure many have found it helpful. Whether you're using Yelp (easy to delete), Amazon (hard to delete your profile) or YouTube (impossible to delete), there are shortcuts and instructions to help you wave goodbye to your latest remorseful relationship with an online platform.
In a world where not all data privacy/protection laws are equal (or, in some cases, even exist), data has become a valuable commodity, and sometimes the platform just doesn't want to let you go, or claims it can't (shadow profiles).
In a world where the product is often you, the most important things still within your control, from a personal security perspective, remain using unique passphrases for each service and enabling multi-factor authentication wherever it is available (so, y'know, do that please).
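As a tangent for the curious: the six-digit codes most authenticator apps produce are just RFC 6238 TOTP - an HMAC over a 30-second time counter derived from a shared secret. A minimal sketch using only the Python standard library (the base32 secret below is the RFC's published test value, not a real one):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code (HMAC-SHA1 over a time-step counter)."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of complete time steps since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test secret: the ASCII string "12345678901234567890"
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, at=59, digits=8))  # RFC test vector: 94287082
```

The point being: the code rotates every 30 seconds, so an attacker replaying a stolen password alone gets nowhere - which is why MFA blunts the overwhelming majority of automated credential-stuffing attacks.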
In a written statement issued on Wednesday (2019-10-16), Ms Morgan said the government would not be "commencing Part 3 of the Digital Economy Act 2017 concerning age verification for online pornography".
Instead, she said, porn providers would be expected to meet a new "duty of care" to improve online safety. This will be policed by a new online regulator "with strong enforcement powers to deal with non-compliance".
(Joel) Focusing on just the technological challenge that was presented: it was always going to be a steep hill to climb without violating the personal privacy of lawful citizens consuming lawful materials (one of the counter-arguments to age verification).
I think a regulatory approach that requires registration once a site knowingly (as in, its purpose is to publish the materials in question) passes a threshold (volume of content or unique visitors per month), and that is empowered to direct and enforce, would be a meaningful retrospective model that does not incur extremely high technology and privacy costs.
Australia’s covert foreign intelligence collection agency has put out the recruitment call for its three most senior technology positions - a chief information officer, chief technology officer and a more senior strategic leader for its innovation and enabling division.
The highly sensitive positions at the foreign spy agency, which oversee tech services and support for overseas human source collection, are listed as “multiple vacancies” and ranked at SES Band 1, assistant director general for the CIO and CTO roles.
The head of the innovation and enabling division, which is listed as a single position, comes in at SES Band 2, first assistant director-general.
(Joel) ASIS is the Australian counterpart to the UK's MI6 or the US's Central Intelligence Agency - all primarily focused on human intelligence. SES Band 1 and 2 appear to map to roughly SCS1 in the UK civil service.
The departure and recruitment of such key roles at the same time must be creating some challenges, but it presents an opportunity for a total revamp of ASIS' information and technology strategies, invigorating an environment that cannot run the risk of complacency. It could also impact short-term delivery as people shift, some programmes are brought to a close and others take their place. Ultimately, fresh ideas are almost always a good thing, and these ripple through to the rest of the Five Eyes intelligence community, of which the UK is a part.
I would say 'time will tell' but for the vast majority of us... we'll simply never know.
This minimum standard describes security requirements for web browsers used on workstations of the federal administration. These requirements must be adhered to in order to achieve a minimum level of information security.
(Joel) Of the four browser builds tested - Mozilla Firefox 68 Extended Support Release (ESR), Google Chrome 76, Microsoft Internet Explorer 11 and Microsoft Edge 44 - Germany's Federal Office for Information Security describes only Firefox as meeting its minimum baseline.
Chrome wasn't far away, but it fell short on: the in-built password manager's lack of master password functionality, control over phone-home data upload/reporting (an additional firewall is needed to prevent it) and transparency in terms of documentation.
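For what it's worth, some of that phone-home behaviour can be curtailed without a firewall via Chrome's enterprise policies. A sketch of a managed policy file (on Linux, typically dropped under /etc/opt/chrome/policies/managed/) - the policy names below existed at the time of writing, but verify them against Google's Chrome policy list for your version:

```json
{
  "MetricsReportingEnabled": false,
  "SafeBrowsingExtendedReportingEnabled": false,
  "UrlKeyedAnonymizedDataCollectionEnabled": false
}
```

Policies are a blunter instrument than a firewall (they disable reporting rather than block it at the network edge), but they are at least documented and auditable.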
This is a good win for Firefox and will challenge other vendors to do better. It is unlikely to lead to blacklists or whitelists - more a preference, from an organisation's security perspective, for Firefox over Chrome. But weighed against other measurements (such as compatibility with G Suite or Office 365), Chrome or Edge may still pull ahead when all metrics are balanced... after all, security is not, and should not be, the only consideration.
(Jon) I wonder about standards like these, especially in an issue of this newsletter where we're also linking to an item on Common Criteria. For the many faults in CC, the new-style Protection Profiles are much better at articulating what threats they're trying to address. Writing a security standard for a web browser is much more interesting - and difficult - in terms of articulating what you're trying to address.
In practice, I don't think this is a true standard; maybe more of a recognition of current good practices, which is no bad thing. However, without a clear threat model, it's hard to judge what achieving or deviating from this standard means. Does the lack of a master password on the password storage facility matter? For most people it's probably not that important. What about access of websites to sensor data - such as the battery status API in HTML5 (https://googlechrome.github.io/samples/battery-status/) - should that be a factor?
Writing standards is hard, writing security standards especially so, as the world can move on whilst you're still drafting. Be clear about what you are trying to achieve so readers can judge the applicability.
The Redmond giant confirmed on Friday an unspecified glitch prevented customers in North America from receiving the multi-factor auth (MFA) codes they need to sign into their cloud-based accounts. Obviously, those not using MFA are not affected. [...] The outage for both services began at around 1330 UTC (0630 PT), when users complained of being unable to get two-factor codes on their devices via phone call, text message, or through an authenticator app. By 16:50 UTC (0950 PT), the Azure side of the IT blunder was resolved
(Joel) So the moral of the story here is not to use multi-factor authentication (relax Michael, I'm kidding).
While the inability to log in is a pretty severe problem (!), you should always enable multi-factor authentication (MFA, aka 2FA) wherever it is available, because being protected against 99.9% of automated attacks outweighs the occasional whoopsie.
Also, it goes to show that even the second most valuable company in the world (by market cap) can struggle with change control, testing, quality assurance and rollbacks.
This takes place via the Ofcom (or rather the Oftel) Yahoo Group. A review might consider whether it is befitting for the world's sixth largest economy to manage critical national infrastructure via a Yahoo group but we would hope that is obvious. What we would like the review to consider is the process beyond posting that message. Many networks receive the Yahoo message, and very ably build the number range.
(Joel) I don't recall having ever used Yahoo! Groups (and I suspect this alone has kept Yahoo! executives up at night and is a great source of personal pain) but for Oftel to manage UK phone number assignment through such a route... well... the mind boggles.
(Jon) The fundamentals are - often surprisingly - hard. Using a Yahoo! Group to manage UK phone number block assignments. Was this the 'best' way? Probably not. Was it a secure way? Debatable. Was it a way that worked? Sounds like it. Was it the easiest way to get something working that got put in the 'fix it later' bucket? Almost certainly. And a lot of what we do in security is make that call - fix it now, fix it next, fix it...not yet. Getting the balance right, and being open about which thing we're doing and why, is the hard bit.
A flaw that means any fingerprint can unlock a Galaxy S10 phone has been acknowledged by Samsung.
The issue was spotted by a British woman whose husband was able to unlock her phone with his thumbprint just by adding a cheap screen protector. [...] because they left a small air gap that interfered with the scanning.
(Joel) An amusing workaround to a security solution. Samsung's fingerprint scanning technology uses ultrasound to detect/read a fingerprint but in this case could be manipulated through interrupting said signals.
One could query why the device would unlock at all if the cryptographic pattern representing the fingerprint was a mismatch for one already known/authorised on the device, but I suspect that is what the software update being rolled out addresses.
Biometrics are hard because tolerance for variation is important (a slightly different angle, wearing sunglasses, growing a beard, a bit of dirt on one's finger/thumb), but the kicker here is that, in the face of doubt, the device chose to unlock rather than keep data/content safe.
Before Apple users (like myself) get too excited... remember: Apple's death grip was a thing, and FaceID will unlock if you have one eye closed and your mouth/jaw covered with a duvet (how do I know this? that's usually how I read Cyber Weekly on a Saturday morning).
The UK’s NCSC (National Cyber Security Centre) considers that effective cybersecurity requires a combination of: appropriate product development, architectural design, situational awareness, and agility of response to threats. Evaluation of individual products can play a part but, for the UK, its relevance, in the wider cybersecurity context, is diminishing and this has been reflected in the limited UK market and developer demand for certification. Following a review of its range of assurance services NCSC has therefore concluded that the operation of a national common criteria certification scheme is no longer an appropriate use of its resources and has ceased to be a certificate producer under the CCRA.
As a Certificate Consuming Participant, the UK will continue to recognise CCRA compliant certificates as providing a level of confidence in their respective products. The UK also remains committed to working with the Common Criteria community on the development of relevant Collaborative Protection Profiles (cPPs and their supporting documents), for technologies of interest to the UK, by contributing to associated international technical communities, and to the development of underlying International standards in ISO etc.
(Joel) The Common Criteria for Information Technology Security Evaluation (Common Criteria or just CC) is an international standard (ISO/IEC 15408) for computer security certification.
NCSC's new position apparently reflects a more rounded approach to, and appreciation of, all the facets that contribute to technology information security. I hope this frees up resources within NCSC to research and publish even more excellent advice and guidance, perhaps with a greater level of technical depth.
Building secure technology is one thing... having independent assurance of those security claims is another. I can't quite bring myself to say 'long live Common Criteria', but certification being required by the organisations a vendor wants to work with is a very useful carrot/stick for mandating and standardising security characteristics.
It doesn't appear to be NCSC's intention to take a back seat when it comes to international standards (!) so I hope NCSC keeps fighting the good fight to help the community build/update protection profiles that ripple increasingly better security practices through vendor products that underpin some really important systems (and thus, organisations).
(Jon) Having worked with Common Criteria and other product assurance schemes for over a decade, I respectfully slightly disagree with Joel. I believe that what matters far more in terms of secure products and systems is the security engineering approach that the developer has - the way they consider threats through product development cycles, the way they train their developers, the security tooling they use in their build process to find and remove classes of vulnerabilities, the way they test, and so forth. In fact, probably the most important factor is the ability of a vendor to respond to potential security problems, provide rapid patches to users, and for users to be able to confidently deploy them.
Any product certification scheme that doesn't look at these points is giving a misleading sense of security to the end users who will understandably be treating the certificate as not just a 'point in time' ok, but an endorsement of the product's general security.
Dr. Ignaz Semmelweis spent years trying to convince other doctors to wash their hands after performing post-mortems. He proved that handwashing could save thousands of lives, and yet most doctors ignored his evidence. One stated it simply wasn't possible for doctors to harm their patients, as "a gentleman's hands are clean." Even after Semmelweis provided solid proof that washing hands and sanitized tools decreased the maternal mortality rate, doctors dismissed his findings.
Medical doctors often started their day in the autopsy room. The physicians performed the procedures barehanded, and they did not wash their hands afterward. Sometimes doctors performed examinations on pregnant women who succumbed to puerperal fever the day before. And then they went straight to the maternity ward - without washing their hands.
(Jon) Someone was recently talking about "cyber hygiene" and mentioned it needing to be as normal for users to do the right thing "as washing hands is for a doctor". What's interesting is that washing hands wasn't always 'normal', particularly in the medical world.
There are many fascinating things about how doctors in the Victorian era underwent this change in behaviour. The first is how many of the male doctors failed to notice that their 'modern' techniques - yet unsanitary behaviours - were killing rather than curing more patients than those cared for by the female-dominated nursing and midwife professions. The second is that even when presented with clear evidence of the dangers of their behaviours, many of the Victorian doctors failed to believe it anyway - or didn't think it applied to them. The more we change, the more things stay the same.
Do we see the same in cyber security? I'd argue at times "yes". Firstly, as Michael has regularly mentioned, we lack solid data on cyber security; a lot of cyber threat intelligence focusses on the malware, the threat actor, the on-target activities, but doesn't tell us much, if anything, about the way the infection began. And organisations are reluctant to share what's working and what's not; the data points we have are the outliers - those who feel confident to share their approaches (such as Google, with their BeyondCorp approach), or those who have no choice (e.g. an organisation that had a catastrophic breach). So we look at cyber security behaviours but cannot easily judge which are effective - so can formulate alternative rationales for why they might work.
So the first problem is identifying what really works - is it the ventilation system in the ward, or the hand washing that's causing the reduction in complications? Was it the blinky-anti-APT box, or the patching regime, or the email scanner, or maybe the user education and training, that led to a decrease in detected malware infections? The second problem is that once we do identify a 'basic hygiene' point, even as professionals do we believe it enough to do it?
One of the reasons medical professionals are now very good at washing their hands is not because they do a yearly eLearning course "hand washing 101 - things to remember" - but because the medical community has, in the last hundred years, learnt it has to make sanitation and hand cleaning an easy and integral thing to do. Having copious numbers of hand sanitisation stations, building hand washing into medical protocols and procedures, even designing hospitals such that it's a natural step in a process - these are what make it second nature. What's the cyber equivalent?