
Cyber Security Today, Week in Review for Friday March 25, 2022

Welcome to Cyber Security Today. This is the Week in Review edition for the week ending Friday, March 25th, 2022. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com.


In a few minutes I’ll be joined by guest commentator David Shipley of New Brunswick’s Beauceron Security to talk about events from the past seven days:

The Lapsus$ extortion group struck again, catching two big tech companies off-guard. Microsoft admitted one employee was hacked. It didn’t say what was copied but Lapsus$ released some 37 GB of what appears to include source code for the Bing search engine. The other victim was Okta, a major provider of identity and access management solutions to big corporations. At first Okta dismissed the impact of what Lapsus$ posted on its data leak site. But by Wednesday it admitted it should have been more aggressive in investigating the hack of a contract customer support worker. David and I will talk about Okta’s belated response.

We’ll also take a look at a ransomware survey of Canadian organizations done by Telus, Canada’s third biggest telco. Among the findings: Only 42 per cent of firms that paid got full access to their data back.

We’ll talk about the U.S. president warning Washington has what he calls “evolving intelligence” that the Russian government is “exploring options for potential cyberattacks” on companies that provide critical services to the nation.

And we’ll end with comments about an open-source developer inserting malware into his own code to wipe computers in Russia and Belarus. Under pressure from other developers he withdrew the code, but what does this episode mean for the open-source community?

Also during the week news emerged that the South Africa division of the TransUnion credit ratings agency was hacked. There are two versions of how: The attackers told a news site they cracked a file server with the password “password.” But TransUnion says the hackers stole the password of one of its clients.

Finally, a security researcher at Avast suspects someone has put together the largest criminal botnet-as-a-service yet seen. It’s powered by nearly 230,000 hacked MikroTik routers around the world that can be hired for denial of service attacks. How do they get hacked? Owners aren’t applying security patches, and they aren’t choosing good passwords.

(The following transcript has been edited for clarity)

Howard: I’m going to welcome David Shipley from Beauceron Security. I want to start with the Telus survey of Canadian organizations on ransomware.
What stuck out for you among the numbers?

David Shipley: This is one of the first surveys where I saw the frequency of attacks actually noted. We saw that a good chunk of people were seeing attacks in the 20 to 30 attempts per month range, which was really insightful. So it’s [attacks] not just an occasional thing. A percentage are seeing hundred-plus attacks — I think it was around two per cent — so I thought that was relatively interesting. It was also interesting to see the assertion that 77 per cent of healthcare organizations are under persistent ransomware attack. So it’s really enlightening, particularly at this dire time when we’re talking about threats in the global environment and about the most important services — and we’re not through the pandemic yet — to see the siege of health care laid out so clearly.

Howard: Have you talked to organizations that have been hit by ransomware? What do they tell you about why they were hit?

David: When I talk to organizations about the root causes of ransomware, it starts with phishing, which is still the easiest way to get in. Number two was, ‘Oh, we had set up some remote access this time and we didn’t remember about it, and it was really easy to pop.’ So RDP — particularly among municipalities and particularly during COVID — has been a pain point. And number three was, ‘Oh, we had an unpatched system, legacy code, etc., that folks got into.’

Howard: And of those who paid a ransom, why did they feel that they had to?

David: That is a really interesting conversation. It has shifted over the years. Initially it was, ‘Because we didn’t have a good backup. We didn’t test our backups regularly. And we didn’t have a choice.’ What I’ve heard in the last couple of years has been more along the lines of, ‘Well, our cyber insurer required us to bring in their breach coach, and their recommendation was that it was cheaper [to pay the ransom] — and they were going to cover the payment of the ransom — versus the sometimes 10 times higher cost of recovering data manually.’ And we’ve actually seen this reported about some municipalities facing ransomware attacks in the last couple of years, with that exact line: ‘Well, if it was up to us we wouldn’t have paid, but the insurance company told us to, so we did.’

Howard: That’s interesting, because more recently what I’ve been hearing is insurance companies are starting to increase their exclusions on ransom payments. They’re starting to up their demands for cyber security protection. They’re getting more insistent on customers having things like multifactor authentication if they want coverage. So I’m not sure that leaning on your insurance company to pay a ransom these days is a very good strategy.

David: No, I think it’s a terrible strategy. It was like watching an arsonist run around the neighborhood and handing them extra gasoline to pour on the fire. It was a pretty stupid idea and it’s blown up in everyone’s face. We’ve got some really good intelligence from [ransomware gang] Conti and others that one of the first things they do when they gain persistence [on a victim’s network] is check out your insurance policy. In fact, they hit one of the big insurers in the ‘States and got its list of clients, so they knew how much insurance the clients had. I am glad the insurance companies are moving away from paying ransoms. I still think in Canada insurance — which is provincially regulated — needs to be prohibited from paying ransoms. Let’s get out of the ransom-paying business. I love that they’re looking for basic cyber security hygiene controls [in customers], because previously they were assessing risk on the number of data records, the sensitivity of records and all kinds of metrics that are meaningless. Basic hygiene is a much better indicator of risk.

Howard: Forbidding ransom payments. What if it’s not merely a shoe store or a company that’s selling toys? What if it’s a law firm or a hospital, and unfortunately they don’t have good backups? How can you pass a law without allowing at least some exceptions?

David: I’ve looked this in the face a few times over the last couple of years. Look at the University of Vermont Medical Center hack, where their ability to deliver chemotherapy dropped by 75 per cent. Nurses were asking patients if they remembered what doses and what drugs they were on. So your point is a hundred per cent valid: there could be carve-outs for life-and-death data, but not paying out just because it’s cheaper than manually recovering from backups. We have to set the right market incentives. Otherwise we end up with the tragedy of the commons, with everyone acting in their own best interests and not necessarily in society’s best interest.

[Note: According to a news story, the medical centre was hit after an employee’s corporate laptop was infected by a phishing email. When the employee plugged the laptop into the hospital’s network the infection spread.]

Howard: This report also brought out again that even if you pay you may not get your data back. Only 42 per cent of respondents said that when their firms paid they got all their data back. Forty-nine per cent said they got partial data back, perhaps because the decryption keys just didn’t work.

David: Brett Callow from Emsisoft and I have had some hilarious conversations over the years about the number of times Emsisoft has had to release a working decryption tool for one of these crypto-ransomware gangs to actually make decryption work. Gangs aren’t great at writing the code for these tools. What I found interesting from the Telus report was that if you tried to nickel-and-dime a ransomware negotiator, your probability of getting your full data set back and recovered dropped. What a surprise. That might be a signal they’re trying to send to the market.

Howard: I noticed that the report also said 15 per cent of respondents whose organizations suffered a ransomware attack said that they were re-infected by the same ransomware after recovery. What does that say about the ability of firms to clean up after any attack?

David: I think when you don’t pay and recover the hard way — which we saw in examples like the City of Saint John here — you build back better. I think that does lead to resiliency. The most famous example of getting hit again was Kansas Heart Hospital in the ‘States. I’m not surprised that there’s a percentage that get hit again. I think it’s interesting to see who did the incident response, the quality of that response, and how deep they went to build the network back better.

Howard: Let’s turn to the warning from the U.S. about Russia possibly expanding its cyber war. When you were on a few weeks ago we talked about this. But the latest news, which came out at the beginning of this week, is that the Biden administration said it has what it called ‘evolving intelligence’ that the Russian government is ‘exploring options for potential cyber attacks.’ Does the U.S. actually have something, or is it just pointing at the usual sniffing around the internet that anyone can see in order to prod critical infrastructure firms to get cracking on cyber security?

David: It’s hard to know with 100 per cent certainty. I jokingly say it depends if the Dutch are assisting them, because the Dutch national intelligence team had hacked into [the Russian-based] Cozy Bear and was watching it go at the U.S. Democratic National Committee in 2016. We learned about that in 2018. The NSA and other [U.S. intelligence agencies] do have good signals intelligence. And there have been some confidential briefings with energy companies and other targets. So I’m reasonably confident they have this, and I don’t think they would get the President out there talking about this and raising the temperature if they didn’t think there was a good reason to move on it. I think this is the right time to raise the temperature on critical infrastructure providers so they’re ready.

Remember that the civilian part of the U.S. federal Department of Energy’s nuclear program was among the targets that got popped in the SolarWinds hack. What did they [the hackers] learn? How much did they learn? Did they build target lists for this? Chris Krebs, the former director of the U.S. Cybersecurity and Infrastructure Security Agency, had some good points about this: Cyber operations generally take months if not years to lay the groundwork, so if Russia is just starting to explore its options and just probing, we may not see short-term massive impacts. Depending on how long this conflict and the sanctions go on, what we may see is them starting to lay the groundwork for some payback and for causing economic pain to the Americans — but you’re not necessarily going to see it in the next week, month or six months. What’s interesting is the advice [from the White House] goes back to doing the basics in cybersecurity: multifactor authentication, patching your stuff, monitoring your network.

I hope to God all the critical infrastructure folks in Canada and the United States are listening to this: Hey gang, if you ever needed executive buy-in for MFA, you can show your leadership the message from the President of the United States, if that’s what it takes to open the wallet and get past the internal political objections. Because keep in mind, we’re not over the 50 per cent mark in organizations that have fully deployed MFA.

Howard: Well, it was interesting that the Canadian Centre for Cyber Security, which advises federal departments and private sector companies on cybersecurity, issued a carefully worded statement to me saying that it isn’t aware of any current specific threats to Canadian organizations in relation to events in and around Ukraine. Now, to be fair, the U.S. isn’t saying it has seen evidence of specific threats. But our government would have seen the U.S. statement and chose not to say, ‘We have also seen evolving intelligence.’ I don’t know what you make of that. Does the U.S. have better intelligence? Hopefully it does, considering the billions the NSA gets. Or is it just a matter of any Russian-based threat group going after the U.S. first?

David: I think they would go after the Americans as the big target. Also, the American energy grid is dramatically different from ours: you’re talking tens of thousands of small and mid-size grid providers. It’s a much easier attack surface than the smaller number of energy companies we have here in Canada … If we’re not number one on their list, we’re not number 10. We just lack the [intelligence] capacity of the Americans, and that shouldn’t really catch us by surprise. We’re a much smaller country. But it’s also an example of how much more immature we are than the Americans in federal government leadership on building relationships with the private sector, building intelligence sharing networks and building trust. Right now in Canada they’re not going to give you information back. They’re not allowed to in many cases — and nor are you compelled, as a critical infrastructure operator, to tell them when something naughty is happening on your network. So no wonder they have no credible intelligence.

Howard: Well, every time I talk to the Canadian Centre for Cyber Security they say they’re trying to increase their ability to send out notifications and indicators of compromise to the private sector. So they are trying to work more closely with industry.

David: But the problem is that without mandatory disclosure legislation in Canada [requiring reports to the government], what happens is a company gets breached and the CSO or the chief legal counsel tells the executive to shut up and not send any information out: ‘We’re gonna limit our risk.’ So no, you’re not calling the feds. No, you’re not calling the police. And if it’s not the chief legal officer, it’s probably the insurer, who may have to pay out the claims around this, who also says to shut up. This is why mandatory breach reporting legislation is required, so victim firms can safely report. And if you don’t tell us, we’re going to fine you. It’s borderline criminal negligence of our federal government and political leadership that they aren’t taking this seriously.

Howard: This week the White House listed eight things that firms in the critical infrastructure sector who haven’t got the message on cybersecurity should act on: things like implementing multifactor authentication and having good backups. On the other hand, some of them, like encrypting your data, I thought cannot be done fast. So is it a useful list?

David: I think we should focus on the fast five that can be done quickly. Even MFA requires a degree of nuance in how you roll it out. The biggest blockers to MFA typically aren’t technological; they’re [office] political and involve change management. But get it going for certain roles or certain applications. Patch management: you can stop a lot of the vulnerability stuff with patch management. You can stop a lot of phishing with training and educating your users. Tell them the story of your organization: how your security works, how your tools work, what they can expect in regular emails. Genuinely educate your people. Make sure you have an incident response plan, and practice that plan. And check your backups regularly.

Howard: I want to spend a few minutes on the confusion around the cyberattack involving Okta, a major supplier of identity management and access control to big companies. On Monday the Lapsus$ extortion gang posted screenshots of what it said came from a customer of Okta. Okta initially downplayed the report, but it turned out it didn’t have the full information. The computer of a contract customer support employee had been accessed back on January 20th, and Okta knew about that because it was alerted and was able to shut it down. What it didn’t immediately know was that the hacker had had access to this computer for five days and had taken some screenshots. Those screenshots were posted at the beginning of this week by Lapsus$. Last week Okta got a summary of the investigation from that third-party support company. It was only when the gang made its claim this week that Okta demanded to see the full report, and it appears from the company’s statement that that was when it realized something deeper than screenshots was going on. This podcast was recorded on Wednesday, and only an hour ago the chief security officer issued a video statement with a longer explanation of what happened. He admitted that perhaps Okta should have acted a little faster when it got the summary of the breach report and insisted on the full report instead. Okta says the customer support staffer’s access to the Okta system wouldn’t have allowed an attacker to download customer data. But this whole incident raises a lot of questions.

David: It does, and I want to start off a little bit on a soapbox, particularly for folks listening, because there’s something really important: Please don’t do this when and if, God forbid, your organization has to announce a breach. Number one, please don’t play the percentage game [when talking about how many customers were hit, as Okta did]. I don’t care that it was 2.5 per cent [as Okta said] of 15,000. Those 15,000 customers include some of the biggest companies on the planet. So 366 [victims, which in Okta’s case is 2.5 per cent of its customer base] becomes a lot more meaningful when the question is which 366. That’s really bad PR advice. It’s almost as bad as including the sentence, ‘We care about the security and privacy of customer data’ in your mea culpa on a breach.
Secondly, the thing that really bothers me about this is the timeline: January 20th to almost March 20th before the truth finally comes out. We know speed matters in these investigations, so it’s really important that if your business relies on third-party supply chain components for service delivery, you’ve got to treat their incident response times as if you controlled that company yourself. You outsourced something to save money. Well, you saved money, but now you’ve got a nice two-month-plus timeline on your incident response. There are some really interesting lessons in this. And the third thing is this feels like SolarWinds 2.0. You’ve got a company that’s critical, used by some of the most important governments and enterprises on the planet. A subset of customers is hit, and we don’t know which 366. Maybe we’ll find out. But if the 366 included Microsoft and the U.S. federal government and others, well, back to our earlier conversation about the Russian threat.

Howard: And remember the timeline here. On January 20th there’s an attempt to monkey around with the multifactor authentication of an employee: someone tries to add a new multifactor authentication factor, presumably so the attackers can have their own way of getting access. That way they don’t have to use the employee’s multifactor authentication, which would tip the employee off. I believe that was one of the tactics in the SolarWinds attack: the attackers added a new multifactor authentication phone number. Usually in multifactor authentication an extra code is sent to the user as part of the login, typically texted to the user’s cell phone. If a second cell phone number is added – unknown to the user – the attacker can use the code sent to it. On January 20th Okta detects that something is going on and shuts down that user’s access. One of the inferences is that Okta wasn’t squeezing the third-party contractor to find out exactly what happened. It would appear they just let the contractor take its time investigating. So it’s only on March 17th that Okta gets the summary of a report, and apparently initially they’re satisfied with the summary. And it isn’t until the Lapsus$ gang starts publishing the screenshots of a customer’s computer that Okta says, ‘Oh, we’d better see more.’
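
[Note: The pattern described above, an attacker enrolling an extra MFA factor on a compromised account, can be caught by auditing factor-enrollment events against the factors each user is known to own. Below is a minimal sketch in Python; the event fields and names are hypothetical, not taken from any real identity provider’s API.]

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical audit-log record. The field names are illustrative only,
# not any real identity provider's schema.
@dataclass
class MfaEvent:
    user: str
    action: str      # e.g. "factor_enrolled", "factor_removed"
    factor: str      # e.g. "sms:+15550000000", "totp"
    timestamp: datetime

def suspicious_enrollments(events, known_factors):
    """Flag MFA factors enrolled on an account that the user never registered.

    known_factors maps each user to the set of factors they are known to own.
    A newly enrolled factor outside that set is the pattern described above:
    an attacker adding their own phone number to receive login codes.
    """
    flagged = []
    for e in events:
        if e.action == "factor_enrolled" and e.factor not in known_factors.get(e.user, set()):
            flagged.append(e)
    return flagged
```

A review of flagged events, rather than automatic blocking, keeps legitimate phone changes from locking users out while still surfacing the attack quickly.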

David: Was the laptop or the endpoint used by the third-party contractor compromised to the point where remote desktop access was installed on it? Or was the contractor’s own corporate remote access compromised? How deep was Lapsus$ into Sitel [the contractor]? That’s the onion that’s going to be really interesting to peel. Was this only one guy? Sitel is a huge service provider to a lot of companies around the world. They’re not an insignificant player. So I think there’s a lot more to this story. Lack of transparency and timely communication about this is probably the most important thing I have to say to information security people. When you’re dealing with an incident, the commodity you’re spending isn’t just time on the incident; it’s the declining balance on your trust account with your customers and with others. So if you want to stop the bleeding on the trust front, you get in front of it as soon as possible and you say, ‘We are aware there might be an incident. We are investigating it right now. We are contacting our affected customers. We do not believe this is an ongoing incident, but as we get more information …’. You get in front of this, particularly if you’re an identity and access management company. This is PR 101. We in cyber security tend to think of these things as the incident, the response, the cleanup, the containment, etc. But we’ve got to mature the communication strategy component as part of this.

Howard: Finally, there was an interesting report on an open-source developer who decided to take on the Russians himself. He added code to his project, which is widely used and is hosted in the npm open-source registry. The code detected whether his project was downloaded by a computer in Russia or Belarus, and if it was, it executed data-wiping malware. Now, that’s great news if you’re a sympathizer of Ukraine, but doesn’t this raise ethical and trust questions about open-source libraries? If one developer can do this, so can anyone. You can decide who you like and who you don’t like — and it doesn’t necessarily have to be somebody in a war zone.

David: I think the new Cold War is about to present a new set of problems for the global open-source movement. Ukraine is not going to be the only aspect of this; we’re seeing geopolitics and how it impacts us technologically. The current buzzword in security is zero trust, and it’s getting thrown around all over the place. It’s hilarious, because zero trust is what we need to do — except for open-source software. We’ve incorporated open source into all of our products and trust it implicitly, yet these are projects run by understaffed groups of one to three people, one of whom may get disgruntled about some cause or another and could seriously ruin our day. This incident’s the most dramatic … I do think we’ve got a huge open-source liability issue that comes down to: how much do you trust the people who control the keys to push updates to that software?
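
[Note: One concrete defence against a maintainer, or a hijacked maintainer account, pushing a malicious update is to pin dependencies by cryptographic digest rather than trusting whoever holds the publishing keys. The sketch below uses an invented artifact name, and the pinned digest is simply the SHA-256 of an empty payload so the example is easy to check against; real builds would pin the digests of audited release files.]

```python
import hashlib

# Pinned SHA-256 digests recorded at review time. The artifact name is
# invented; the digest here is the well-known SHA-256 of an empty payload,
# chosen only so the example below is verifiable.
PINNED = {
    "example-lib-1.3.0.tar.gz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(name, payload):
    """Accept a dependency only if its digest matches the pinned value.

    This shifts trust from 'whoever holds the publishing keys today' to
    'the exact bytes we audited', so a maintainer pushing new malicious
    code cannot silently replace a pinned version.
    """
    expected = PINNED.get(name)
    if expected is None:
        return False  # unknown artifact: refuse by default
    return hashlib.sha256(payload).hexdigest() == expected
```

Package managers offer the same idea natively, for example npm lockfiles with integrity hashes and pip’s hash-checking mode.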

Howard: There was some very quick outrage after a security vendor published a report on this, and then it was carried by one of the IT news services. The open-source developer who had added this to his project quickly deleted it; he was sort of shamed into that. But it does show the risks of open-source software, and how much people trust that if something is posted it’s fairly trustworthy — at least until other people download it, test it and find holes in it.

David: The open-source movement is that last beautiful vestige of the internet that came out of academia, built on trust by default: ‘Of course we can build these amazing things if we only work together.’ It’s not going to work in the distrustful world we’re in now. What if, God forbid, a genuinely altruistic individual is compelled by their nation to insert code into a repository because it’s their patriotic duty? It’s going to be a really interesting challenge for us. And the dirty secret of a lot of security tools is that they rely on a lot of free and open-source software.
One of the things I think is important — and this came out of log4j — is making sure we advance the state of what’s called the software bill of materials: being able to know, almost like the nutrition label you check when you pick up your cereal, what code is in here, where it came from and what the processes are. So when you’re buying applications you know what you’re bringing into your environment.
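
[Note: The ‘nutrition label’ idea has concrete formats: CycloneDX and SPDX are the two widely used software-bill-of-materials standards. The sketch below queries a minimal CycloneDX-style JSON document; the format is real, but the component list is invented for illustration.]

```python
import json

# A minimal SBOM fragment in the CycloneDX JSON style. The two components
# listed are illustrative examples, not a real product's contents.
SBOM = json.loads("""
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.13.2"}
  ]
}
""")

def components_matching(sbom, name):
    """The 'nutrition label' lookup: is a given library in this product,
    and at what version? This is exactly the question every organization
    scrambled to answer the week log4j broke."""
    return [(c["name"], c["version"])
            for c in sbom.get("components", [])
            if c["name"] == name]
```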
