I don't think we yet know where the future of work is going, let alone how to secure it.
Analysis shows that there are generational and lifestyle splits over the desire to work in the office, but it's not just about location. Pandemic working has shown people a new flexibility around when they work: that they can work evenings and weekends, and can work the way they want.
The things that we haven't missed are also key to understanding people's motivations for work, as people realise that they don't miss their workmates, the inspiring speeches from their manager, or the communal complaining at the tea points. That creates a gap that attackers will start to capitalise on in various forms.
What's not clear yet is what the social contract between employer and employee will be, how much we need to trust or verify each other, and just what that impact will be for society. This might all be a flash in the pan, but I suspect that there have been some quite fundamental shifts that have happened within our collective psyche, and we're still waiting to see what the result is.
Or maybe I'm focusing on this just because I finally finished watching Squid Game, a scathing critique of capitalist society and the things that drive us to engage with one another and with society. People, it turns out, are complex webs of drives, self-delusion and higher-order reasoning. It's what makes driving a positive security culture so difficult, because at the root of it all, it's not about technology, it's about people.
New research by training platform TalentLMS and Workable, a provider of recruiting software, suggests that tech and IT workers are likely to be planning an exit soon. In a survey of 1,200 tech and IT workers in the US, nearly three-quarters (72%) said they intended to quit within the next year.
Data from the Bureau of Labor Statistics shows that the number of people quitting their jobs in the US hit a record high of 4.3 million in August 2021, while data from Bankrate the same month suggests that approximately half of the US workforce plans to leave their job within the next 12 months.
Burnout, stress, and feelings amongst tech workers that their efforts have not been recognised are commonly-cited reasons for employees looking to quit. A survey of 600 data engineers conducted by Wakefield Research found that 97% reported feeling burned out, with many citing relentless demands from employers, repeated interruptions and disruptions to their work-life balance, ill-defined projects, and "a steady stream of half-baked requests from stakeholders."
So pervasive were the feelings of burnout amongst data engineers that 78% said they wished their job came with a therapist to help them manage stress, while 79% of those surveyed said they were thinking about leaving the data engineering field altogether.
Organisations that aren't monitoring the health and wellbeing of their staff are asking for trouble in the face of the "great resignation".
Tech staff have always found it easier to find new jobs. With skills in demand and a hot market, it's important that you don't just have a recruitment strategy; you also need to think about retention. Are you ensuring that your staff feel listened to, are protected from an endless stream of half-baked requests from stakeholders, and feel appropriately rewarded? This isn't just about money. Many people in tech are after what Daniel Pink calls a deep-seated desire to direct our own lives, addressing the need for competence, autonomy and relatedness. People have a desire for autonomy in their jobs, feeling able to make relevant decisions about the work they do; a desire for mastery, to feel like they are getting better and being recognised for that improvement; and a desire for purpose, to understand how their work fits into the bigger picture.
To fight back against the coming changes in how and why people work, we're going to need to consider these drives and ensure that work is meaningful, linked to organisational purpose and appropriately challenging for people.
The report, titled "2021 State of Ransomware Survey & Report: Preventing and Mitigating the Skyrocketing Costs and Impacts of Ransomware Attacks," is based on survey responses from 300 U.S. based IT business decision makers. It further reveals that more than four out of five (83%) ransomware attack victims felt they had no choice but to pay ransom demands to restore their data.
The report highlights how organizations are responding to the growing threats from ransomware attacks, including:
- 72% have seen cybersecurity budgets increase due to ransomware threats
- 93% are allocating special budget to fight ransomware threats
- 50% said they experienced loss of revenue and reputational damage from an attack
- 42% indicated they had lost customers as a result of an attack
The report features three main takeaways with recommendations and resources to help mitigate damage from an attack.
With so many organizations victimized by ransomware attacks, it is more important than ever that organizations prioritize creating an incident response plan to avoid being added to the growing list that have paid the ransom demand.
While increasing cybersecurity budgets for network and cloud security solutions, organizations must also understand and prioritize the requirements for preventing exploit escalation with PAM security that enforces least privileged access.
Preventing ransomware attacks by practicing basic cybersecurity hygiene such as regular backups, timely patching, MFA, and password protection is essential. However, PAM policies that make least privileged access a priority enable security teams to identify the attack entry point, understand what happened, help remediate, and ultimately protect restored data.
It's no real surprise that a company that sells PAM policy enforcement thinks that everyone should invest in PAM enforcement. But the actual report is an interesting read, and does show that ransomware works for the attackers.
The main thing here is to have a plan, and to know what you are going to do if ransomware does get in. You need good backups that cannot be compromised by the attackers: backed up to "write once" media such as CD/DVD or rotated tapes, or, more likely, to cloud storage held in a separate account with write-once policies enabled, such as Amazon S3 Glacier Vault Lock. That should ensure you can recover if you do get compromised.
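To make the write-once idea concrete, here is a sketch of a Glacier Vault Lock policy that denies deletion of any archive less than a year old. The vault ARN and vault name are placeholders, and the condition key follows the pattern from the AWS documentation; treat this as an illustration, not a drop-in configuration.

```python
import json

# Sketch: a Vault Lock policy denying deletion of archives younger than
# 365 days. The account ID, region and vault name are hypothetical.
lock_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-early-deletes",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:eu-west-1:111122223333:vaults/backups",
        "Condition": {
            "NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}
        },
    }],
}
policy_json = json.dumps(lock_policy)

# Applying the lock is a deliberate two-step, irreversible process
# (boto3 calls shown for reference, not executed here):
#   glacier = boto3.client("glacier")
#   lock = glacier.initiate_vault_lock(accountId="-", vaultName="backups",
#                                      policy={"Policy": policy_json})
#   glacier.complete_vault_lock(accountId="-", vaultName="backups",
#                               lockId=lock["lockId"])
```

Once the lock is completed, even a compromised admin account can't strip the policy off, which is precisely the property you want your last-resort backups to have.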
A Gemini source was offered a position as an IT specialist at “Bastion Secure Ltd”, a cybersecurity “company” seeking C++, Python, and PHP programmers, system administrators, and reverse engineers. A basic search for this company returns a legitimate-appearing website (www[.]bastionsecure[.]com), but analysis revealed that it is a fictitious cybersecurity company run by a cybercriminal group. During the interview process, the source was given several tools for test assignments that the source would use if employed.
Gemini Advisory worked jointly with the Recorded Future Insikt Group to analyze the tools provided by Bastion Secure and determined that they are actually components of the post-exploitation toolkits Carbanak and Lizar/Tirion, both of which have been previously attributed to the FIN7 group and can be used for both POS system infections and ransomware attacks.
Prior to 2020, FIN7’s primary modus operandi was to compromise companies’ networks and infect POS systems with credit card-stealing malware. Since 2020, cybersecurity researchers have identified instances in which FIN7 gained access to company networks that were later infected with either REvil or Ryuk ransomware. FIN7’s exact involvement in the deployment of ransomware—i.e., whether they sold the access to ransomware groups or have formed a partnership with these groups—remains unclear. However, the tasks that were assigned to the Gemini source by FIN7 (operating under the guise of Bastion Secure) matched the steps taken to prepare a ransomware attack, providing further evidence that FIN7 has expanded into the ransomware sphere.
Furthermore, due to Bastion Secure’s use of Carbanak and Lizar/Tirion and FIN7’s established practice of using fake cybersecurity companies to recruit talent, Gemini assesses with high confidence that FIN7 is using the fictitious company Bastion Secure to recruit unwitting IT specialists into participating in ransomware attacks.
It's a new world, where not only do employers have to perform legitimacy checks on their staff to ensure they aren't secretly doing another job, but prospective employees might now have to check that their new employer isn't in fact a scam company.
This isn't new; there were reports a few years ago of other criminal groups doing the same. It's of course risky, precisely because you might get your opsec wrong and hand over hacking tools to an analyst firm, but it's obviously working for them if they keep doing it.
How does no email square with complete distributed communication? The answer is P2. P2 is the name of a WordPress theme that every team at Automattic uses internally for documentation. Essentially, we have (probably) hundreds of P2s, for current teams, for teams that used to exist, for special interest groups, and events.
When you “P2” something, you’re writing a blog post, where you can tag in coworkers and cross-post to other P2s. What do teams use P2s for? Absolutely everything. Checklists for onboarding, status reports on projects, thoughts about detailing projects, discussions about best coding practices, architectural diagrams, marketing data analyses, and more. The best part is that the entire company’s history of P2s is available to search through. Also, you can subscribe to literally any P2, and comment on it as necessary, starting with your own team’s.
It’s true that I still spend an enormous amount of time in Slack, but the company’s focus on P2 as THE place where institutional memory lives, and where other people can interact with your work, means that there is an internal incentive to get stuff out of Slack and into P2s, where they can live forever and be collected with other P2s to form a cohesive view of how a team, project, or division operates over time. In this way, the company also owns its own institutional knowledge instead of having it locked away in third-party tools.
The nature of reading P2s means even if you only write one or two P2s a month, you can send them around in meetings or Slack conversations as solid anchors and points of reference. (Another note is that we definitely do still have meetings, and I spend a good deal of my day in Slack, but I don’t come away from these times feeling like the information is lost, because I’m always building towards a P2 on whatever was discussed in either format.)
Of course, as with any technology, P2s come with their own tradeoffs: it can be easy to get lost reading through hundreds of P2 posts every day and responding unless you have a very good P2 strategy and focus on the ones relevant to you. It can be hard to synthesize a lot of technical information into a post that’s relevant enough to both engineers and business people and offer enough context and value to continue the conversation. It can be hard to go back far enough to find all of the historical context you need for your own P2s.
This is an interesting way to solve the “organisational memory” problem. Organisations are generally really bad at this, and as pointed out in the final sentence, there are actually two problems at play here, and P2s solve one of them.
Publishing a blogpost or long-form article, such as an Amazon 6-pager or a Civil Service Submission, is a good way to get people to set their thoughts, ideas, and thinking process down in a way that can be easily consumed.
However, the second problem is that these papers, blogs and submissions are written within a temporal and functional context, and that the context will be lost over time. They are also written to a specific audience, and often only actually distributed to that specific audience (although the public nature of P2s make them interesting to me in this form). These things lead to the classic problem of the organisational wiki, described to me once as “where information goes to die”. Maintaining that information over time, marking things or ideas as deprecated, and of course finding, disseminating and reading that information becomes the second order problem.
The act of publishing the information in a transparent way at least starts to solve the first problem, and is something I'm a big fan of, but I've not yet found anywhere that has solved the second set of problems.
Reports of phishing attacks doubled in 2020, with credential phishing used in many of the most damaging attacks. The Microsoft Digital Crimes Unit (DCU) has investigated online organized crime networks involved in business email compromise (BEC), finding a broad diversification of how stolen credentials are obtained, verified, and used. Threat actors are increasing their investment in automation and purchasing tools, so they can increase the value of their criminal activities.
Overall, phishing is the most common type of malicious email observed in our threat signals. All industries receive phishing emails, with some verticals more heavily targeted depending on attacker objectives, availability of leaked email addresses, or current events regarding specific sectors and industries. The number of phishing emails we observed in Microsoft Exchange global email flow increased from June 2020 to June 2021, with a pronounced surge in November potentially taking advantage of holiday-themed traffic.
The Microsoft Digital Defense Report itself is worth a read. It does, somewhat naturally, shill a lot of the areas where Microsoft has done the most work, but it also covers a continuing crisis in attacks on the most basic areas of cybersecurity.
I pulled out Phishing because although ransomware continues to get all the headlines, the numbers on phishing attacks are still eyewatering. Phishing remains one of the most used initial access mechanisms, and large amounts of it can be protected against fairly easily. A Google study a few years back found that MFA blocked 99% of bulk phishing attacks, and it is still one of the most effective controls.
Microsoft and Google do a lot to reduce the impact of phishing, but by ensuring that you use common email security patterns and invest in your defences, you should be able to reduce the impact of phishing on your staff enormously.
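One of those common patterns is DMARC, a DNS TXT record that tells receiving servers what to do with mail failing SPF/DKIM checks. A minimal sketch of reading one such record (the record value and reporting address are hypothetical; a real record lives at `_dmarc.<yourdomain>`):

```python
# Sketch: parsing a DMARC policy record into its tag=value pairs.
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"

# DMARC records are semicolon-separated tag=value pairs; split each pair
# only on the first "=" so values like "mailto:..." survive intact.
tags = dict(
    part.strip().split("=", 1)
    for part in record.split(";")
    if "=" in part
)

print(tags["p"])  # the policy: "none", "quarantine" or "reject"
```

A `p=reject` policy like this one is the end goal: spoofed mail claiming to be from your domain simply gets dropped by compliant receivers.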
The Network Time Protocol (NTP) has been critical in ensuring time is accurately kept for various systems businesses and organizations rely on. Authentication mechanisms such as Time-based One-Time Password (TOTP) and Kerberos also rely heavily on time. As such, should there be a severe mismatch in time, users would not be able to authenticate and gain access to systems. From the perspective of incident handling and incident response, well-synchronized time across systems facilitates log analysis, forensic activities and correlation of events. Depending on operational requirements, organizations may choose to utilize public NTP servers for their time synchronization needs. For organizations that require higher time accuracy, they could opt for Global Positioning Systems (GPS) appliances and use daemons such as GPSD to extract time information from these GPS appliances.
A reader recently highlighted to us a bug in the GPSD project that could cause time to roll back in October 2021. Due to the design of the GPS protocol, time rollback (technically termed a “GPS Week Rollover”) can be anticipated and is usually closely monitored by manufacturers. The next occurrence should have been in November 2038, but a bug in some sanity-checking code within GPSD would cause it to subtract 1024 from the week number on October 24, 2021. This would mean NTP servers using the bugged GPSD version would show a time/date of March 2002 after October 24, 2021.
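The arithmetic behind that March 2002 date is easy to check for yourself:

```python
from datetime import date, timedelta

# The GPS week number is broadcast as a 10-bit field, so it wraps every
# 2**10 = 1024 weeks (about 19.6 years). The GPSD bug effectively
# subtracted one extra rollover period starting on 24 October 2021.
trigger = date(2021, 10, 24)
rolled_back = trigger - timedelta(weeks=1024)
print(rolled_back)  # lands in March 2002, matching the reported symptom
```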
If you are reading this update, then the internet didn’t just rewind back to 2002, but it’s difficult to know how big an impact this will have.
What has become clear to me is that thanks to a rise in new marketplace platforms — there are new ways for many people to earn additional income or even a living — all from the comfort of their homes! A decade ago, I read a book called “What’s Mine Is Yours: How Collaborative Consumption Is Changing the Way We Live” by Rachel Botsman and Roo Rogers. My key take-away from the book was that each home has a plethora of assets that are badly utilized, and sharing them collaboratively could be good business for the owners. And sharing these assets means that others won’t need to purchase them, promoting sustainability.
Fascinating idea that people will monetise the sharing of their space and stuff with others over time. Most people do not want to bother with the overhead of doing this, but the infographic in the article shows some of the services plotted along a horizontal axis of activity required. The services at the passive end are the easiest for an "asset owner" to use.
I suspect that nobody is looking at these models and considering the impacts on fraud yet, but that would be an interesting analysis. How much of your life and home do I need to rent out to be able to conduct fraudulent or insider attacks against you? And how protected are you if different organisations each look at a different aspect, and nobody is able to track that I'm the same person renting your car, your house and your garage at the same time?
The number one thing that I recommend out of security training is to teach the rest of the organization that they should be proactive in engaging with the security team. Coming from the consulting world, often you would produce a pentest report, a security team would consume that file and add all the issues into the bug tracker, and then they’d say shipping is blocked until these are all resolved. There's never been a better way to burn a relationship with the development teams than over something that probably wasn't even critical.
We don’t want security to be the blocker. We want to find the right balance of prioritizing critical matters while being able to ship and iterate quickly. Having fun trainings and showing that the security team won’t try to get in the way every time a developer or DevOps engineer asks a question is a great way to help build an effective relationship.
The key here is ensuring that security teams and development teams work together, that there isn't a blame culture, and that neither is blocking the work of the other. Good advice from Zane here.
Securing a telecommunications organization is by no means a simple task, especially with the partner-heavy nature of such networks and the focus on high-availability systems; however, with the clear evidence of a highly sophisticated adversary abusing these systems and the trust between different organizations, focusing on improving the security of these networks is of the utmost importance. Given the significant intelligence value to any state-sponsored adversary that’s likely contained within telecommunications companies, CrowdStrike expects these organizations to continue to be targeted by sophisticated actors, further underscoring the criticality of securing all aspects of telecommunications infrastructure beyond simply focusing on the corporate network alone.
CrowdStrike doesn't attribute this attack, but does note just how advanced and capable it is. Hiding the traffic within the protocols expected to be transferred is a strong indicator of a well-planned and capable attacker.
Of more concern is the apparent lack of foundational security controls within the telecoms companies impacted, especially where some of the capability is outsourced to a managed service provider.
Managing your risk from your own managed service providers is still a theme of 2021, and looks likely to continue for some time.
How to defend against attackers from decrypting the CyberArk vault password in these credential files? First off, prevent an attacker from gaining access to the credential files in the first place. Protect your credential files and don’t leave them accessible by users or systems that don’t need access to them. Second, when creating credential files using the CreateCredFile utility, prefer the “Use Operating System Protected Storage for credentials file secret” option to protect the credentials with an additional (DPAPI) encryption layer. If this encryption is applied, an attacker will need access to the system on which the credential file was generated in order to decrypt the credential file.
We reported this issue at CyberArk and they released a new version mitigating the decryption of the credential file by changing the crypto implementation and making the DPAPI option the default.
CyberArk is a neat tool that manages your privileged system access, meaning that admins don't know the global admin password, the Exchange server password, or whatever passwords they need. The tool manages the authentication on their behalf, and can provide time limits, second-person controls and audit logs.
The downside to these sorts of cybersecurity tools is that they can become targets in their own right. An attacker discovering that you have CyberArk installed now knows that all the passwords are stored on that server, and that if they can get to it, they can get them all! This makes the security of the security tools even more important than ever.
The dangers of RDP exposure, and similar solutions such as TeamViewer (port 5938) and VNC (port 5900), are demonstrated in a recent report published by cybersecurity researchers at Coveware. The researchers found that 42 percent of ransomware cases in Q2 2021 leveraged RDP Compromise as an attack vector. They also found that “In Q2 email phishing and brute forcing exposed remote desktop protocol (RDP) remained the cheapest and thus most profitable and popular methods for threat actors to gain initial foot holds inside of corporate networks.”
RDP has also had its fair share of critical vulnerabilities targeted by threat actors. For example, the BlueKeep vulnerability (CVE-2019-0708) first reported in May 2019 was present in all unpatched versions of Microsoft Windows 2000 through Windows Server 2008 R2 and Windows 7. Subsequently, September 2019 saw the release of a public wormable exploit for the RDP vulnerability.
The following details are provided to assist organizations in detecting, threat hunting, and reducing malicious RDP attempts.
Look, it's easy to say "don't expose RDP to the internet", but sometimes organisations need to, whether for legitimate structural reasons, or because the cost of fixing a legacy decision is simply too high.
This excellent piece of research covers how you can monitor and protect the RDP service if you have to expose it.
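As a trivial first check of your own exposure (my illustration, not from the report), you can verify whether an RDP port is even reachable from outside. Host and port here are placeholders, and you should only ever probe systems you own:

```python
import socket

# Minimal reachability check for an exposed RDP endpoint (TCP 3389).
# A successful TCP connect only proves the port answers; it says nothing
# about the authentication, NLA configuration or patch level behind it.
def port_open(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns True when run from an internet-connected machine against your perimeter, the monitoring and hardening guidance in the research applies directly to you.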
My preference from all of the controls is to not expose RDP directly to the internet, but to invest in a VPN solution for the RDP infrastructure. You can then invest in certificates and MFA for the VPN, and know that the weaknesses of RDP are mitigated by the access controls of the VPN. This is even easier with modern cloud-based VPN solutions, such as AWS Client VPN or Azure VPN Gateway, which can be used to isolate the VPN from everything else in your infrastructure.
HTTP Toolkit is a beautiful & open-source tool for debugging, testing and building with HTTP(S) on Windows, Linux & Mac.
Absolutely lovely-looking local intercepting proxy. It sorts out a trusted root certificate, and can pause HTTP requests, put in place rules to rewrite responses, and mock APIs with ease.
Perfect for inspecting and playing with third-party apps to see what they do, as well as fuzzing API responses or sniffing out hard-coded tokens and so on.
In successful enterprise attacks, adversaries often need to gain access to additional machines beyond their initial point of compromise, a set of internal movements known as lateral movement. We present Hopper, a system for detecting lateral movement based on commonly available enterprise logs. Hopper constructs a graph of login activity among internal machines and then identifies suspicious sequences of logins that correspond to lateral movement. To understand the larger context of each login, Hopper employs an inference algorithm to identify the broader path(s) of movement that each login belongs to and the causal user responsible for performing a path’s logins. Hopper then leverages this path inference algorithm, in conjunction with a set of detection rules and a new anomaly scoring algorithm, to surface the login paths most likely to reflect lateral movement. On a 15-month enterprise dataset consisting of over 780 million internal logins, Hopper achieves a 94.5% detection rate across over 300 realistic attack scenarios, including one red team attack, while generating an average of < 9 alerts per day. In contrast, to detect the same number of attacks, prior state-of-the-art systems would need to generate nearly 8× as many false positives.
This is a really neat bit of academic research, looking at applying anomaly detection against internal authentication data flows to identify examples of adversaries conducting lateral movement.
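To make the idea concrete, here is a highly simplified sketch of the core signal (my illustration, not the authors' algorithm, which also adds path inference and anomaly scoring): build a graph of internal logins and flag multi-hop paths where the credential changes mid-path. All hostnames, usernames and field names below are invented.

```python
from collections import defaultdict

# Toy login events: source host, destination host, account used.
logins = [
    {"src": "laptop-1", "dst": "server-a", "user": "alice"},
    {"src": "server-a", "dst": "server-b", "user": "svc-admin"},  # switch!
    {"src": "laptop-2", "dst": "server-a", "user": "bob"},
]

# Index logins by source host so we can walk paths through the graph.
edges = defaultdict(list)
for event in logins:
    edges[event["src"]].append(event)

def suspicious_two_hop_paths(start):
    """Yield two-hop login paths from `start` where the account changes,
    a classic (if noisy on its own) lateral-movement signal."""
    for first in edges[start]:
        for second in edges[first["dst"]]:
            if second["user"] != first["user"]:
                yield (first, second)

alerts = list(suspicious_two_hop_paths("laptop-1"))
```

Here the path laptop-1 → server-a → server-b gets flagged because the login into server-b uses a different account than the hop that reached server-a; Hopper's contribution is doing this kind of causal path reconstruction and scoring at a false-positive rate low enough to be operationally useful.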
Here's hoping that we start to see people implement this model in their SIEM tooling in the near future.