Cyberweekly #214 - Little snippets of practice
Published on Sunday, October 30, 2022
Firstly, apologies that this week's issue has been delayed and last week's issue was completely missing, for a number of reasons: partly because it's half term in the UK and I've been spending time with my family, and partly because I've been incredibly busy recently.
I've concentrated on a number of little snippets that I thought were interesting, but I've not had as much time to provide deep analysis (as if my analysis is ever that deep!). Hopefully, you still find them all interesting. Depending on when everything else settles down, normal service will eventually resume.
The passwords one stood out to me because of the little habits that we have in our lives and how we let them affect us.
Taking a simple action and doing it over and over is the best way to get better. We know that we develop new habits through repetition, but repetition can entrench bad habits too.
Making the conscious decision to ingrain positive habits, and to reinforce them every time you unlock your computer, could be the step you need to improve.
But more than that, we learn from others' habits. The article on being a 40-year-old programmer talks about how often, as developers, we fail to learn from others. We copy the best practices that others follow, and don't question how and why they came about. So instead, sometimes we should take the time to develop our own practices, to try doing the work ourselves, over and over. It will either give you a greater appreciation for why the best practice is best, or cause you to question it and determine whether there's an emerging practice that is better.
- The critical resource changes that require extra code review from different teams (a good example of this might be a load balancer where a change may require an additional review from a traffic engineer)
- The supported Terraform modules allowed for infrastructure changes, where, so long as engineers take the prescriptive approach to deploying a cloud resource, approval is automated
- The specific actions allowed for particular resources
- The changes that require security team review
- Ensuring that cloud resource tags are being used
- The cost parameters around allowable changes to infrastructure
- Validating resource type to ensure engineers are taking advantage of existing reserved instances and savings plans
- 128 CPU threads at ~4GHz
- 4TB RAM
- 25 Gbit ethernet
- 10 Gbps NAS
- hours of yearly downtime
Doing things in a weird way will often cost you reputation. Reputation, of course, is also tied to things like title and money.
This is why most of us are really bad at starting new projects. It’s why most of us can’t talk about when to use one syntactic structure or another, and why we act like there are open-and-shut rules for indentation that should be enforced by a machine, as if there is a simple, correct way to do it. Because the way you build these skills is repetition, and getting better over time, and believing in expressiveness, and communicating with other humans. Things that we stigmatise, as an industry.
Does this mean that you, individually, need to be bad at these things? Not at all.
You aren’t going to argue programmers into thinking in some other way. Maybe a few, individually. But the answer is not to drag everybody into being better. That will take decades or longer. You don’t have that kind of time. You want to get better, faster than that.
But you personally can do things that are “bad practices” but in fact make you better.
You can reinvent the wheel. You can repeatedly write the same thing. You can also write code in “bad” ways and see what happens.
Be careful with best practices. They are like other forms of advice: they mean that somebody else did the work to think through it and get smarter, and you’re just using the simplest thing they came up with.
That’s fine to start out. But it’s terrible as a way to get better.
There's a lot to like in this essay, and a lot that is quite subjective. But this bit about watching out for best practices really stood out for me.
Best practices are best only because someone else has done the thinking. In an industry as new and fast-changing as ours, those best practices often don't come with the context needed to understand whether they are still, in fact, the best practices.
The characters you type out over and over again into your digital devices may impact your mental health more than you might expect.
A new UBC and NYU Shanghai research study has found self-affirming written passwords — such as “MusicCalmsMeDown@123”— can offer a boost to one’s mental health.
The research published in Internet Interventions focused on how such log-in codes impacted the well-being of first-year sexual minority undergraduates at both UBC and NYU Shanghai in coping with sexual orientation microaggressions, including homophobic name-calling, during the first six weeks of university.
“We were thinking, with self-affirmation passwords, people can be reminded of what’s important to them whenever they log in to their laptops or computers,” says lead author Dr. Gu Li (he/him), assistant professor of psychology at NYU Shanghai, who first began the study in 2019 while at UBC.
In this way, a password could be used as a timely “booster” for a writing-based intervention, explains Dr. Li, and help mitigate a stressful situation and subsequent decrease in psychological well-being.
This is an interesting study. I wonder whether the growth of this practice might result in changed password lists as well. Maybe this will have second-order effects on infosec professionals who have to read all these users' passwords when they inevitably get leaked.
How a Password Changed My Life. The following events occurred between ☹… | by Momo Estrella | Medium
Then, letting all the frustration go, I remembered a tip I heard from my former boss, Rasmus. Somehow he combined to-do lists with passwords, and I thought to use an augmented variation of that.
I’m gonna use a password to change my life.
It was obvious that I couldn’t focus on getting things done with my current lifestyle and mood. Of course, there were clear indicators of what I needed to do -or what I had to achieve- in order to regain control of my life, but we often don’t pay attention to these clues.
My password became the indicator. My password reminded me that I shouldn’t let myself be victim of my recent break up, and that I’m strong enough to do something about it.
My password became: “Forgive@h3r”
During my meeting I kept thinking on what I just did. Something drew a smirk on my face.
During the rest of the week, I had to type this password several times a day. Each time my computer would lock. Each time my screensaver with her photo would appear. Each time I would come back from eating lunch alone.
In my mind, I went with the mantra that I didn’t type a password. In my mind, I was reminding myself to “Forgive her”.
I thought I'd posted this back when I read it, but apparently that was back in 2014 or so, well before I started this newsletter. The research into passwords affecting your mental health reminded me of it, though. A lovely story and anecdote.
Policy is a set of rules, conditions, or instructions meant to be enforced across the organization, including such things as cloud-native infrastructure, application authorization, or Kubernetes admission control. One example would be establishing policy rules to define the conditions required for infrastructure code to pass a security control and be deployed.
At DoorDash, we built policy-based guardrails by codifying rules to secure infrastructure deployments and changes, including but not limited to:
Yet another example of organisations investing in tools like OPA to automate policy around what changes are allowed where. This kind of policy automation should be seen as a huge advantage by security teams, because it builds confidence that the system stays within policy. As a side effect, it also means that policies have to actually be enforceable and definable.
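As a rough illustration of what codified guardrails like these can look like, here is a hypothetical Python sketch. It is not DoorDash's implementation and not OPA's Rego policy language; all the module and resource names are invented. It shows the shape of the idea: changes made through blessed modules get automatic approval, security-sensitive resources always get routed to a human, and untagged resources fail fast.

```python
# Hypothetical guardrail sketch: the module names, resource types, and
# decision strings are all invented for illustration.

APPROVED_MODULES = {"blessed-s3-bucket", "blessed-load-balancer"}
SECURITY_REVIEW_RESOURCES = {"aws_iam_policy", "aws_security_group"}

def evaluate_change(change: dict) -> str:
    """Return a decision for a proposed infrastructure change."""
    # Untagged resources fail fast, mirroring the "resource tags" guardrail.
    if not change.get("tags"):
        return "rejected: resources must be tagged"
    # Security-sensitive resource types always need a security team review.
    if change["resource_type"] in SECURITY_REVIEW_RESOURCES:
        return "needs-review: security team"
    # Changes made through a supported, prescriptive module are auto-approved.
    if change.get("module") in APPROVED_MODULES:
        return "approved"
    # Everything else falls back to a human on the platform team.
    return "needs-review: platform team"

print(evaluate_change({"resource_type": "aws_s3_bucket",
                       "module": "blessed-s3-bucket",
                       "tags": {"team": "payments"}}))  # approved
```

In a real deployment this decision logic would live in a policy engine such as OPA and be evaluated in CI against the Terraform plan, rather than in application code.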
On August 15, the Signal team reported that unknown hackers attacked users of the messenger. We explain why this incident demonstrates Signal’s advantages over some other messengers.
What happened? According to the statement issued by Signal, the attack affected around 1900 users of the app. Given that Signal’s audience runs to more than 40 million active users a month, the incident impacted only a tiny share of them. That said, Signal is used predominantly by those who genuinely care about the privacy of their correspondence. So even though the attack affected a minuscule fraction of the audience, it still reverberated around the information security world.
As a result of the attack, hackers were able to log in to the victim’s account from another device, or simply find out that the owner of such and such phone number uses Signal. Among these 1900 numbers, the attackers were interested in three specifically, whereupon Signal was notified by one of these three users that their account had been activated on another device without their knowledge.
So, it turns out that even Signal isn’t immune to such incidents. Why, then, do we keep talking about its security and privacy?
First of all, the cybercriminals did not gain access to correspondence. Signal uses end-to-end encryption with the secure Signal Protocol. By using end-to-end encryption, user messages are stored only on their devices, not on Signal’s servers or anywhere else. Therefore, there is simply no way to read them just by hacking Signal’s infrastructure.
What is stored on Signal's servers is users' phone numbers as well as their contacts' phone numbers. This allows the messenger to notify you when a contact of yours signs up for Signal. However, the data is stored, first, in special storage called secure enclaves, which even Signal's developers can't access. And second, the numbers themselves aren't stored there in plain text, but rather in the form of a hash code. This mechanism allows the Signal app on your phone to send encrypted information about contacts and receive a likewise encrypted reply as to which of your contacts uses Signal. In other words, the attackers could not gain access to the user's contact list either.
Lastly, we should stress that Signal was attacked through its supply chain: a less protected service provider used by the company. This, therefore, was the weak link. However, Signal has safeguards against this, too.
The app contains a feature called Registration Lock (to activate it, go to Settings → Account → Registration Lock), which requires a user-defined PIN to be entered when activating Signal on a new device. Just in case, let's clarify that the PIN in Signal has nothing to do with unlocking the app — this is done through the same means you use to unlock your smartphone.
This is a nice breakdown of why the Signal hack wasn't as big a deal as it could have been, and of how deliberate engineering practices at Signal that prevent the company from seeing users' messages also protect those users against even pretty competent and capable attackers.
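The hash-based contact discovery described in the excerpt can be sketched very roughly in Python. This is just the basic shape of the idea, not Signal's actual scheme, which runs the comparison inside SGX secure enclaves and is considerably more sophisticated:

```python
import hashlib

def phone_hash(number: str) -> str:
    # Hash the phone number so the server never handles it in plain text.
    return hashlib.sha256(number.encode()).hexdigest()

# Server side: the set of hashed numbers belonging to registered users
# (the example numbers here are made up).
registered = {phone_hash(n) for n in ["+15551234567", "+15559876543"]}

# Client side: hash the local address book and check which hashes are known.
address_book = ["+15551234567", "+15550000000"]
matches = [n for n in address_book if phone_hash(n) in registered]
print(matches)  # ['+15551234567']
```

The obvious weakness of a naive version like this is that the phone number space is small, so plain hashes can be brute-forced — which is exactly why Signal moved contact discovery into secure enclaves rather than relying on hashing alone.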
I have spent several months now redesigning services I have encountered before and designing services for problems I would like to work on going forward. The process has led me to a general design that works for many problems and I quite enjoy building.
It can be summarized as 1 VM, 1 Zone, 1 process programming.
If this sounds ridiculously simplistic to you, I think that's good! It is simple. It does not meet all sorts of requirements that we would like our modern fancy cloud services to meet. It is not "serverless", which means when a service is extremely small it does not run for free, and when a service grows it does not automatically scale. Indeed, there is an explicit scaling limit. Right now the best server you can get from Amazon is roughly:
That is a huge potential downside of one process programming. However, I claim that is a livable limit. I claim typical services do not hit this scaling limit. If you are building a small business, most products can grow and become profitable well under this limit for years. When you see the limit approaching in the next year or two, you have a business with revenue to hire more than one engineer, and the new team can, in the face of radically changing business requirements, rewrite the service.
Reaching this limit is a good problem to have because when it comes you will have plenty of time to deal with it and the human resources you need to solve it well.
Early in the life of a small business you don't, and every hour you spend trying to work beyond this scaling limit is an hour that would have been better spent talking to your customers about their needs.
The principle at work here is: Don't use N computers when 1 will do.
I really like this approach. I have been using Google's Cloud Run, which has a similar, but much smaller, model for this. There's no server, just something that runs containers, and then we use backing databases that are SaaS-provided. But this keep-it-simple approach of single processes running single systems, and not building for immediate parallel scale, works wonders when it's a hobby project or side project.
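To make the "one big server is probably enough" argument concrete, a back-of-envelope estimate helps. Every number below is an assumption for illustration, not a benchmark:

```python
# Back-of-envelope capacity estimate for a single large server.
# All figures are assumptions, not measurements.

requests_per_core_per_sec = 1_000   # assumed cost of one business-logic request
cores = 128                          # roughly the biggest instances on offer
peak_capacity = requests_per_core_per_sec * cores  # requests/sec at full load

# Assume the service averages 10% of peak over a day.
daily_requests = peak_capacity * 86_400 * 0.1

print(f"{peak_capacity:,} req/s peak, "
      f"~{daily_requests / 1e9:.1f}B requests/day at 10% load")
```

Even with conservative assumptions, that is on the order of a billion requests a day from one machine — far beyond what most small products ever need, which is the heart of the "plenty of time to deal with it" claim above.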
Too often organizations treat the Cloud as a destination. For legacy systems or naive implementations, cloud can prove expensive while providing minimal value. The assumptions of old school highly redundant scale up architectures do not migrate well to an ephemeral scale out cloud reality. Perhaps as costly as ‘Lift and Shift’ applied to workloads, is ‘Lift and Shift’ applied to practices.
Deming’s “chain of quality” observed a counterintuitive relationship between quality and cost. When an organization focuses on cost as a cause, as opposed to an outcome, they attempt to reduce cost directly and quality inevitably suffers. On the other hand when organizations focus on understanding how technologies, practices and policies can enable value creation and improvement in quality, costs tend to go down. It is important to note that the costs that are reduced are indirect; reduction in variation, increases in responsiveness, and reduction of waste; rework, escaped defects and replanning. For most workloads, cloud migration has almost no chance of cutting cost without cloud architectures and operating models.
An incredibly well-put economic argument for why lift and shift is a bad practice. In particular, concentrating on migrating to the cloud in order to lower your costs is almost always a foolish move.
Fat Leonard is on the lam.
If you’ve never heard of Fat Leonard, don’t worry, you’re not alone. If the greatest trick the devil ever pulled was convincing the world he didn’t exist, then perhaps the greatest trick the Pentagon ever pulled was convincing us all to act as if Fat Leonard didn’t exist either.
Leonard, more accurately known as Leonard Glenn Francis, earned the nickname due to his large size, reportedly 350 pounds, but he earned the federal prison sentence he is now running away from by being at the center of arguably the largest corruption scandal in the history of the United States Navy. For years, Leonard bribed and otherwise corrupted hundreds of Navy officers to look the other way as he systematically overcharged the U.S. government on hundreds of millions in Pentagon contracts.
It’s a scandal of proportions as massive as Fat Leonard himself. The stories of Leonard’s corruption include drugs, prostitutes, Cuban cigars, Lady Gaga tickets, and of course lots of good, old fashioned cash. Eventually, Navy investigators from NCIS (not the TV show, the real Naval Criminal Investigative Service) started to look into it all, so at first Leonard just bought them off too. Eventually though, in 2013, federal agents successfully lured Leonard to a San Diego hotel where they were able to capture him. He eagerly flipped and gave up dozens of the corrupt Navy officers he’d worked with, and was awaiting sentencing as a cooperating witness when, last week, he escaped custody.
What a story. I first noticed this back in September, and I seem to recall seeing some updates since (I think he fled to Venezuela), but this writeup has the best summary of the allegations and the story so far.
Hello there! I'm Pete - aka Kybr - and this is my writeup for the DEF CON 30 badge challenge.
This year's challenge was heavily focused on music and hacker pop culture, which (I think) made it more widely accessible than some previous badge challenges. We had a huge crew of hackers in person and on Discord working on the badge challenge from the moment they started handing out the badges Thursday morning all the way through Sunday afternoon, when we finally finished the last challenge just four hours before DEF CON ended. Big thanks go out to Mike, redactd, and crew at MKFactor for the truly unique and beautiful badge, and also for the many hours of pain and suffering fun!
This is a truly impressive writeup that shows a lot of the geekery that went into the badges at DEF CON this year.