Most security products exist in what economists call a Market for Lemons (https://en.wikipedia.org/wiki/The_Market_for_Lemons): purchasers cannot tell a good product from a bad one. The theory suggests that where such an information asymmetry exists, vendors have an economic incentive to push bad security solutions on purchasers. How would a normal person select a VPN company if they wanted to follow the advice to "not use public wifi without a VPN"? They'd google for "free VPN" and struggle to determine what good looks like. Is it a flashy website, a trust mark of some kind, a personal recommendation? Maybe they'd select a VPN from a known major brand, or maybe they'd just pick the cheapest one out there.
As security professionals, we have to ensure that our advice is actionable and usable, whether it is advice in a pentest report, advice to a friend, or technical guidance on how to implement something securely. Much security advice is genuinely contextual, but when we push all of that context onto the user, it becomes unfair. So maybe we should say "it depends" a lot less than we do. Even when we know the real answer is complex and does depend, if there is a simple answer that works for that user 99% of the time, we should give it.
Despite alienating the army top brass, Fuller was handed a unique opportunity to advance the cause of tanks in the British army: he was offered the command of a new experimental mechanised force in December 1926. There was just one problem: he would have to step away from his single-minded focus on the tank, also taking command of an infantry brigade and a garrison. In short, Fuller would have to get into the organisational headaches that surround any architectural innovation. He baulked, and wrote to the head of the army demanding that these other duties be carried out by someone else, eventually threatening to resign. The position was awarded to another officer, and Fuller’s career never recovered. His petulance cost him — and the British army — dearly. Architectural innovations can seem too much like hard work, even for those most committed to seeing them succeed.
This sounds so familiar to me from digital disruptors and transformation communities. Those working in digital disruption are often gifted and brilliant at the thing they care about, but their political and integration skills are often lacking. When they fail to acknowledge or carry out the hard work of integrating their ideas into the rest of the business, with those who "don't get it", the disruption fades or is never fully accepted by the organisation.
With this collection of resources and toolkits, you’ll be able to work out how to launch a digital revolution in your department, and find some of the tools to help you do it.
This is a good list of digital toolkits, GitHub repos, data repositories and general government improvement resources from around the world. I notice that not a single one addresses modern digital security.
Transkiy was being questioned about an extraordinary form of contraband. Someone had hidden refurbished computers in the ceiling of the prison. They’d somehow obtained a login to the prison’s network, gaining access to the inner workings of the facility, including databases on inmates and the tools for creating passes needed to enter restricted areas. The computers also granted access to the outside world, which someone had used to apply for credit cards using the stolen identity of a prisoner. The scheme extended from the prison, to a community nonprofit, to multiple banks — all done under the noses of an oblivious prison staff.
This is a lovely story showing how one can abuse a system, and the cracks you can slip through if you know what you are doing.
It could be argued that it is useful for antivirus software to collect certain limited browsing history leading up to a malware/webpage detection and blocking. But it is very hard to argue for exfiltrating the entire browsing history of all installed browsers regardless of whether the user has encountered malware or not. In addition, there was nothing in the app to inform the user about this data collection, and there was no way to opt out of it.
A lot of the writeups of this incident focused on "stealing data" and "China servers, bad", jumping to conclusions without acknowledging that there are potentially valid reasons to collect some of this data, and that modern cloud environments make geographic ownership a potential red herring. This writeup does acknowledge that, and does a good job of explaining why this app, and the series of apps behind it, goes well beyond the pale in the data it steals.
This code checks whether the configuration file sent by the user contains a line starting with plugin, script-security, up or down. These are all the methods to execute code or commands through OpenVPN.
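A minimal sketch of this kind of denylist check (hypothetical code, not the vendors' actual patch) shows why prefix matching on config lines is a fragile fix: the config parser is more permissive than the check, so a trivially mangled line slips through.

```python
# Directives that can make OpenVPN execute code or commands.
DANGEROUS = ("plugin", "script-security", "up", "down")

def config_is_safe(config_text: str) -> bool:
    """Naive denylist: reject any line starting with a dangerous directive."""
    for line in config_text.splitlines():
        if line.startswith(DANGEROUS):
            return False
    return True

# The prefix check passes clean configs and catches the obvious case...
assert config_is_safe("remote vpn.example.com 1194") is True
assert config_is_safe("plugin /tmp/evil.so") is False
# ...but OpenVPN's parser tolerates leading whitespace, so an indented
# directive sails past the check yet still executes.
assert config_is_safe("  plugin /tmp/evil.so") is True  # bypass!
```

The robust fix is to parse the config the same way the consumer does (or allowlist known-safe directives), rather than pattern-matching on raw lines.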
This is not how to patch code that reads files! More importantly, we security people tell people to use a VPN all the time, without giving them any tools or capability to distinguish a good VPN provider from a bad one. These two VPN providers are big names who are good at this stuff, and they still had vulnerabilities in their client packages. A user has to be able to assure themselves both that the VPN provider won't sniff their traffic and that the client is correctly configured, which means "use a VPN" isn't usable security advice for normal people.
Onavo Protect also allegedly violated a part of the iOS developer agreement that regulates how app makers make use of data outside the core function of the software. Onavo Protect is a VPN service, and yet Facebook has been using the traffic routed through its private servers for broad analytic purposes.
How do users pick a VPN provider? Maybe they select one from a known trusted large brand, such as Facebook. They wouldn't be snooping on the traffic at all surely? Oh right, nope
As red teamers, this provides a highly attractive proposition for certain components of the red team infrastructure, as we no longer need to worry about provisioning, building or configuring servers. Indeed, serverless means we can programmatically create new services as and when we need them, in minutes, and if a particular campaign becomes tainted, we can simply rinse and repeat to create new, unattributable infrastructure.
This is a smart use of serverless infrastructure, although I'm less happy about the assertion in here that "customer data is not secure in the cloud". Building a small serverless app is simple today and takes 5 or 10 minutes. Instead of just reading about it and trying to understand it, why not write something small and simple, run it for a while, and get your head around it?
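To make "small and simple" concrete, here is a sketch of about the smallest serverless app you can write: an AWS Lambda handler in Python behind an API Gateway proxy integration. The names and greeting are illustrative, not from the article.

```python
import json

def handler(event, context):
    """Minimal Lambda handler for an API Gateway proxy integration.

    'event' carries the incoming HTTP request; we read an optional
    ?name= query parameter and return a JSON greeting.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Zip that one file, upload it, wire it to an HTTP endpoint, and you have a running service with no server to provision or patch, which is exactly the property the red team article is exploiting.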
What is interesting to note from the certificate the Magecart actors used is that it was issued on August 15th, which indicates they likely had access to the British Airways site before the reported start date of the attack on August 21st—possibly long before. Without visibility into its Internet-facing web assets, British Airways were not able to detect this compromise before it was too late.
AWS Systems Manager Parameter Store provides a secure way of storing and managing secrets for your AWS based apps. Unlike Hashicorp Vault, Amazon manages everything for you. If you don't need the more advanced features of Secrets Manager you don't have to pay for them. For most users Parameter Store will be adequate.
This is a nice guide to using AWS Parameter Store securely to store your secrets; it is plenty good enough for small to medium applications that just want to externalise simple secrets outside their system.
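For a sense of how little code this takes, here is a sketch (assuming boto3 is installed and AWS credentials/region are configured; the parameter name is illustrative) of reading a SecureString secret at application startup:

```python
def get_secret(name: str) -> str:
    """Fetch a decrypted SecureString from AWS Systems Manager Parameter Store."""
    # boto3 is imported lazily so this sketch carries no hard dependency.
    import boto3

    ssm = boto3.client("ssm")
    # WithDecryption=True has SSM decrypt the SecureString with its KMS
    # key server-side, so the caller receives plaintext.
    resp = ssm.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]


# Typical use: fetch once at startup rather than baking the secret into
# the deployment artifact or environment variables.
# db_password = get_secret("/myapp/db-password")
```

Access is then controlled with ordinary IAM policy on the parameter path, which is the externalisation the guide is after.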