Cyberweekly #119 - The security of comms platforms

Published on Sunday, September 27, 2020

[Apologies for missing last week. I had some personal news that meant that my weekend was taken up with a lot of other stuff, and the newsletter dropped off my radar.]

We’ve moved to a distributed world, and many organisations have not had a unified strategic communications platform. This starts to become an issue when you think about the ways that internal communications happen inside your organisation.

Of course, the most important thing about your communications platform is that it should be usable, and it should enable your staff to communicate with one another in an appropriate way. No matter what you do, you cannot prevent your staff from having personal phones and personal email accounts, and using those to discuss work matters. The best way to prevent that is to provide a good set of communication tools that staff can use easily, whenever they need them.

Despite my mentioning unified comms above, this probably isn’t a single platform. Organisations that are doubling down on Microsoft Teams will be discovering that even though it’s a great video product, it’s a moderately awful text communication platform. Staff more used to Slack will miss real-time communication divided into channels, which seems to flow in a way that Teams messages don’t. Staff who want to send a simple message to a coworker will probably prefer SMS or WhatsApp on their phone to wrestling with an unfamiliar user interface.

But how secure should your communications system be? We often want things to be perfectly secure, and we come up with gradings of all the ways that we think it should be secure, from end-to-end encryption to secure deletion. But we fail to assess what we are using now, and to note whether what we are moving towards is any security improvement over the existing system. If your staff are using SMS to talk to each other now, then there is no encryption, data loss prevention, audit or protection of any form. Using a tool that has even one of those will be an incremental improvement. Staff dialling into a MeetMe phone number often have no way to authenticate who is on the line; the meeting codes are never changed and are probably known to all of your suppliers; and of course, none of this is end-to-end encrypted.

Pick products that work, that your users love to use, that are incrementally better at security than existing tools, and you’ll be doing yourself a favour.

On a meta note.

One of the readers pointed out to me a while back that TinyLetter converts all the links in these emails into tracking URLs that are unique both per newsletter and per user. I don’t use the statistics from these, and I cannot find a way to turn them off. I’ll continue to publish this newsletter every week, and I don’t put any Google Analytics or link tracking of any form on it.

I had been looking around for alternatives anyway, but I cannot find any newsletter provider who won’t wrap the links in tracking URLs. Having looked closely at the alternatives, though, I think I’d like to move this newsletter from TinyLetter to SubStack.

I’m planning on migrating this newsletter from TinyLetter to SubStack next week. SubStack has a strong privacy statement and is very popular for mailing lists at the moment. It gives me a better way to compose emails and easier collaboration tools. It does still do link tracking, and I can’t turn that off either. You can read their privacy policy, and if you don’t want your email address to be transferred to SubStack, you can unsubscribe ahead of next weekend. Next Saturday I’ll import the mailing list of everyone actively subscribed on TinyLetter and send next week’s newsletter out using the SubStack system instead.

Obviously, I should add, I don’t use your data for anything other than sending these emails. I will never sell or give away the email list or use it for any other purpose than the sending of CyberWeekly each week.

    Staff projects.

    A popular recurring idea around reaching a Staff-plus role is that first you need to successfully complete a “Staff project.” A project that is considered complex and important enough that the person who completes it has proven themselves as a Staff engineer. However popular this idea is, if you’re pursuing a Staff-plus role it’s important to pierce the mythology of these projects and focus on the experiences of folks who’ve walked the path before you.

    The short answer on Staff projects is that most engineers don’t complete one as part of reaching a Staff role, although a large minority do complete one, particularly folks who attain the role via promotion at a company they’ve grown up in. For the folks who don’t complete one, typically it’s either because they accumulated a track record of success over a longer period without a single capstone, or because they switched companies to reach the title.

    There’s something special about leading a major and complex project, with difficult stakeholders, requiring attention at all levels of the architecture of your organisation. I’ve been lucky to be involved in several over the years, and nothing stretches your ability to understand how systems work more than this kind of project.

    However, they tend to appeal to a certain “hero” personality: the sort of person who can intuitively hold complex systems in their head and liaise across multiple teams. This won’t suit all kinds of engineers, and some of the best engineers would never describe themselves as having led a project like this.

    Making this kind of project a requirement for senior leadership will hurt your diversity and inclusion efforts. You will need multiple types of leaders, with multiple ways of recognising those skills, in order to build successful large teams. Value your administrators, your deep thinkers and your community leaders just as much as your technical experts.

    When you browse Instagram and find former Australian Prime Minister Tony Abbott's passport number

    Why did you do this?

    One day, my friend who was also in “the group chat” said “I was thinking…. why didn’t I hack Tony Abbott? And I realised I guess it’s because you have more hubris”.

    I was deeply complimented by this, but that’s not the point. The point is that you, too, can have hubris.

    You know how they say to commit a crime (which once again I insist did not happen in my case) you need means, motive, and opportunity? Means is the ability to use right click > Inspect Element, motive is hubris, and opportunity is the dumb luck of having my friend message me the Instagram post.

    I know, I’ve been saying “hubris” a lot. I mean “the willingness to risk breaking the rules”. Now hold up, don’t go outside and do crimes (unless it’s really funny). I’m not talking about breaking the law, I’m talking about rules we just follow without realising, like social rules and conventions.

    This is a thoroughly entertaining description of how a simple photo and a bit of sleuthing with Inspect Element led to the disclosure of some pretty damaging data.

    The underlying question does, however, stick with me. What motivated him to do this? Was it just hubris? How many of our customers or users would do this? We sometimes imagine that there are hordes of nasty criminals waiting to attack every system, but the more you think about some of these attacks, the more you realise that there is a huge set of vulnerable systems out there that are only rarely publicly attacked. Plenty of quiet attacks do happen, but it’s also worth remembering that not every vulnerability will be horribly misused straight away.
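    The technique itself is mundane: data that a page never displays can still be sitting in its source. As a purely hypothetical sketch (the page, field names and values below are invented, not taken from the actual incident), a few lines of Python show the kind of thing Inspect Element reveals:

```python
from html.parser import HTMLParser

# Invented page source: the rendered page shows only a name, but the
# raw HTML carries extra data in a hidden form field and a comment.
PAGE = """
<form>
  <p>Passenger: A. Traveller</p>
  <input type="hidden" name="booking_ref" value="ABC123">
  <!-- debug: api_token=sekrit-do-not-ship -->
</form>
"""

class HiddenDataFinder(HTMLParser):
    """Collect hidden input values and HTML comments from a page."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("type") == "hidden":
            self.findings.append((attrs.get("name"), attrs.get("value")))

    def handle_comment(self, data):
        self.findings.append(("comment", data.strip()))

finder = HiddenDataFinder()
finder.feed(PAGE)
for name, value in finder.findings:
    print(f"{name}: {value}")
```

    Nothing here is hacking in any meaningful sense, which is rather the author’s point: the barrier is social, not technical.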

    Dear Google Cloud: Your Deprecation Policy is Killing You | by Steve Yegge | Aug, 2020 | Medium

    As a user of Google Cloud Platform, and also (at Grab) of AWS for 2 years, I can tell you that there’s a world of difference between the philosophies of Amazon and Google when it comes to priorities. I’m not actively developing on AWS, so I don’t have as much of a sense for how often they sunset APIs that they have previously dangled alluringly before unwitting developers. But I have a suspicion it’s nowhere near as often as happens at Google, and I believe wholeheartedly that this source of constant friction, and frustration, in GCP, is one of the biggest factors holding it back.

    I know I haven’t gone into a lot of specific details about GCP’s deprecations. I can tell you that virtually everything I’ve used, from networking (legacy to VPC) to storage (Cloud SQL v1 to v2) to Firebase (now Firestore with a totally different API) to App Engine (don’t even get me started) to Cloud Endpoints to… I dunno, everything, has forced me to rewrite it all after at most 2–3 years, and they never automate it for you, and often there is no documented migration path at all. It’s just crickets.

    This mirrors my experience of Google Cloud as well. I’ve used Google’s AppEngine for application hosting for a long time, and it’s probably one of the easiest, simplest systems to use, provided you are willing to do everything the AppEngine way. But every time I go back to it, I find that yet another API, system or way of working has been deprecated, and the new way is never better for me as a customer.

    Inside the Twitter Hack—and What Happened Next | WIRED

    But one of the first things Twitter realized in the immediate aftermath was that too many people had too much access to too many things. “It’s more about how much trust you’re putting in each individual, and in how many people do you have broad-based trust,” Agrawal says. “The amount of access, the amount of trust granted to individuals with access to these tools, is substantially lower today.”

    One of the biggest changes the company has implemented is to require all employees to use physical two-factor-authentication. Twitter had already started distributing physical security keys to its employees prior to the hack, but stepped up the program’s rollout. Within a few weeks, everyone at Twitter, including contractors, will have a security key and be required to use it

    It often takes an incident for us to realise that our security processes and systems don’t work the way we think. Technologists build logging and audit frameworks, but rarely does anybody try to use them in anger until an incident arrives, along with the realisation that they do log actions, but they don’t help answer the critical questions that matter during a response.

    It’s good that Twitter are moving towards 2FA and the use of hardware keys like YubiKeys to identify staff. Barring fundamental network and identity compromise, this will make it much harder for external attackers to take over staff accounts. (Of course, the big question is whether you can ethically, morally and legitimately run a world-spanning social network the size of Twitter without already doing this.)

    The 7 Biases of Product teams, a very visual thread / Twitter

    Product failure is expensive.

    And look around, it’s common.

    Why do products fail?

    Is it becos we can't build the product? No

    Is it becos we launched it N weeks late? Almost never

    So what is it?

    The 7 Biases of Product teams, a very visual thread:

    This is a brilliant thread for project managers and team leaders. The 7 biases are all ones that you can see in lots of situations, but the unpacking here, explaining how they manifest and how they affect team decisions, is pure gold. There are also plenty of links to further resources at the end that will be useful if you found the thread interesting.

    No, Moving Your SSH Port Isn't Security by Obscurity | Daniel Miessler

    It’s true that Security by Obscurity is bad; the problem is many people have no idea when it applies. Including most of the people being loudest about it. Let me tell you the secret to this debate that will permanently solve it for you.

    Security by Obscurity is when you hide how a security measure works, not when you keep some part of it a secret.

    Let me repeat that a few different ways, with examples.

    Certain types of security controls (like encryption) have two components: the mechanism, and the key. In encryption, the mechanism is the algorithm, and the key is, well…the key.

    The question is whether you’re hiding the mechanism or the key.

    I covered the security by obscurity article last newsletter, and this is a follow up by a different author looking at the meaning of security by obscurity.

    Obscurity is bad as a sole protection system. But contexts vary, so whether obscurity is any good depends on the context. If you choose not to implement TLS, choose not to implement authentication, and instead say that knowing the port is a sufficient protective control for a system, then you are getting a very low level of security. Pretty much any casual attacker will find your system, and find you.

    However, if you are running a covert red team operation and you commission a third party to purchase some SIM cards so that they are not associated with you, then that’s probably a reasonable and appropriate level of obscurity. There are times when obscuring a thing is the entire point, and that is good obscurity. But when you know a system is vulnerable and, instead of protecting it, you merely obscure it, then you are probably committing a sin of security through obscurity.

    The interesting line here is that adding some obscurity to an already secure system may not be a bad thing. If you have access to multiple cryptographic algorithms, all well tested and well secured, then not telling all and sundry which algorithm is used isn’t a bad thing. Your attackers may find it hard to attack your ciphertext knowing it’s AES-128 in CBC mode, but it’s even harder if they first have to analyse the ciphertext and try multiple algorithms as well as search the key space.
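    The arithmetic behind that last point is worth spelling out. As a rough sketch (the algorithm and mode lists are illustrative, not a real cryptanalysis model), an attacker who must also guess the algorithm and mode sees their worst-case brute-force work multiplied by the number of candidate combinations:

```python
# Illustrative only: brute-force work grows linearly with the number
# of candidate algorithm/mode combinations the attacker must try.
KEY_BITS = 128
keys = 2 ** KEY_BITS                 # keys to try when the algorithm is known

candidate_algorithms = ["AES", "Camellia", "Serpent"]
candidate_modes = ["CBC", "CTR", "GCM"]
combinations = len(candidate_algorithms) * len(candidate_modes)

total_trials = keys * combinations   # worst case when the algorithm is hidden

print(f"Known algorithm:   2^{KEY_BITS} trials")
print(f"Hidden algorithm:  {combinations} x 2^{KEY_BITS} trials")
```

    Note how small the multiplier is next to the key space itself, which is why hiding the algorithm is only ever a bonus on top of real security, never a substitute for it.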

    Dr. Geistbot, PhD on Twitter: “let the algorithm decide”

    let the algorithm decide

    This all started with a great Twitter thread about why Colin Madland’s coworker was having his head cut out by Zoom’s face detection algorithm. The answer, of course, was that the face detection algorithm has a built-in bias towards light skin over dark skin, and preferred the white orb of a light in the background over the dark-skinned face in the foreground.

    In that Twitter thread, people noticed that Twitter’s picture-cropping algorithm also happened to choose the white participant as the main focus. For large images, Twitter needs to crop the image by default, and its attention algorithm replaces the old approach of simply grabbing the centre of the image (which had an unfortunate tendency to focus on women’s chests) with a new algorithm that looks for faces and crops around them.

    This might all seem funny at first, but as can be seen from the quoted tweet, we literally do build these algorithms into systems, and as those algorithms form building blocks for bigger algorithms, the bias built in will become more problematic in how and when it decides that people are relevant, interesting or useful.

    Bias in algorithms is really hard to model for, because the real world, where we gather our data, is already horribly biased. Socioeconomic factors change life outcomes for people from a BAME background, pay can be shown to correlate with gender, and even when you attempt to correct for it, the reality is that thousands of tiny biases exist in our current data stores. You can’t take data on arrests, job equity, educational outcomes or pretty much anything else without having bias encoded in the data. You would think that face detection, at least, would be easier to build and test for bias than some of these grander data sets, but the behaviour of these algorithms and training sets in Twitter and Zoom suggests that we cannot QA for bias in these systems very easily.

    The Management Trap: Time for a Rethink | AWS Cloud Enterprise Strategy Blog

    In many enterprises, the ratio of “doers” to managerial resources is out of whack. In some Western countries, over 20% of the workforce is now considered to be in manager-type roles, a number accelerating faster than other employment types [1, 2]. I suspect this story is worse in the average IT department, given demand and extensive outsourcing. More managers-of-outsourcers, PMOs, demand planners, and “business relationship managers” are recruited, all internally focused. We compound this by presenting only one career path for the “doers”: management. Contributions and importance become measured in team size, span of control, and budget, none of which equate to value delivered. This isn’t pointing the finger. I’ve been guilty of this managerial sprawl too.

    Managers can play a role in connecting strategy to execution, greasing the wheels of communication, and coaching teams. In many organisations though they dilute accountability and agility by being overly prescriptive and disempowering teams, and reinforcing existing silos. Let’s be clear, though: this is not an employee problem, it’s an organisation issue.

    This is an interesting read on managerial patterns. The post notes that less than 9% of people promoted into managerial positions actually wanted that position, which aligns with a lot of my experience with senior software developers, for example.

    The highlight here is the reminder that being a traditional “Taylorist” manager is not the only leadership role available to us. We can encourage people to become master craftsmen, truly mastering their craft and continuing to deliver; we can encourage people to become coaches, encouraging and supporting others without being burdened by managerial meetings and oversight; we can encourage them to own a problem area and become a manager within that area; or we can engage them as change vanguards.

    What we need to remember is that, as an organisation, we need to value individual contributors with rewards other than promotion to management: extra pay, extra holidays, gifts and staff benefits, bonuses, and support for learning and development. There are lots of rewards that we should be able to use, but sadly many HR functions don’t invest in providing mechanisms for delivering them.

    Selecting and Safely Using Collaboration Services for Telework

    Criteria to Consider When Selecting a Collaboration Service

    1. Does the service implement end-to-end encryption (E2EE)?
    2. Are strong, well-known, testable encryption standards used?
    3. Is multi-factor authentication (MFA) used to validate users’ identities?
    4. Can users see and control who connects to collaboration sessions?
    5. Does the service privacy policy allow the vendor to share data with third parties or affiliates?
    6. Do users have the ability to securely delete data from the service and its repositories as needed?
    7. Has the collaboration service’s source code been shared publicly (e.g. open source)?
    8. Has the service and/or app been reviewed or certified for use by a security-focused nationally recognized or government body?
    9. Is the service developed and/or hosted under the jurisdiction of a government with laws that could jeopardize USG official use?

    These are interesting criteria to use for assessing collaboration services. I think they are all factors one should consider, but it’s worth noting that the table on the following page, which assesses products used by the NSA’s primary customers, lists 17 different tools, of which only 9 support end-to-end encryption (and 4 of those are “configurable”, which probably means “paid-for tier only”).

    It’s clear that end-to-end encryption, or indeed any single one of these criteria, should not be seen as make-or-break. These products are all in use, and no single category appears to be a decisive discriminator.

    I’d have loved to see this table also list some features, as I’m not familiar with all the products. I’d love to know which ones can be installed on mobile devices, which support multi-person video calls, which support group texts, and so on. You can use the N/A entries as a proxy for some of this.
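    A checklist like this lends itself to a simple scoring matrix. As a sketch (the service names and answers below are invented, not taken from the NSA’s table), you could tabulate each product against the nine criteria and count how many it satisfies:

```python
# Hypothetical assessment data: True = criterion met, False = not met,
# None = not applicable / unknown (the "N/A" case mentioned above).
CRITERIA = [
    "end-to-end encryption",
    "testable encryption standards",
    "multi-factor authentication",
    "session access control",
    "no third-party data sharing",
    "secure deletion",
    "open source",
    "security certification",
    "safe jurisdiction",
]

services = {
    "ExampleChat": [True, True, True, True, False, True, False, False, True],
    "DemoMeet":    [False, True, True, True, True, None, False, True, True],
}

for name, answers in services.items():
    met = sum(1 for a in answers if a is True)
    unknown = sum(1 for a in answers if a is None)
    print(f"{name}: meets {met}/{len(CRITERIA)} criteria "
          f"({unknown} not applicable)")
```

    No weighting is applied here deliberately: as argued above, no single criterion should be make-or-break, so a raw count is only a starting point for comparison, not a verdict.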