Welcome to Cyber Security Today. This is the Week in Review edition for the week ending Friday, May 26th, 2023. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.
In a few minutes Terry Cutler of Montreal’s Cyology Labs will be here to comment on recent news. But first a look at some of the headlines from the past seven days:
Four American states settled claims against a vision insurance benefits firm over a 2020 data breach. In that breach the information of 2.1 million people was stolen. Terry and I will have a look at that incident.
We’ll also examine the spread of a fake image posted on Twitter of an explosion supposedly near the Pentagon. We’ll discuss why Cisco Systems isn’t patching new vulnerabilities found in older small business switches, why companies hold onto unneeded data for so long and a Canadian data breach victim fighting with the tax man.
In ransomware news the Cuba ransomware gang took credit for the attack on the Philadelphia Inquirer newspaper.
The Snatch ransomware gang is taking credit for an attack on the Canadian Nurses Association.
German automotive and arms manufacturer Rheinmetall said it was hit last month by the Black Basta ransomware group.
The city of Dallas, Texas, which is still dealing with the impact of a ransomware attack that began more than two weeks ago, had to close its municipal court building on Monday. It’s not expected to re-open until Tuesday, May 30th.
The BlackCat ransomware gang has added a new tool. According to researchers at Trend Micro, it’s a digitally signed Windows kernel driver. The driver is used with a separate user client executable in an attempt to control and kill defensive software on computers and servers.
Android users of an app called iRecorder-Screen Recorder were warned to delete it. This comes after researchers at ESET discovered it was compromised last August to install spyware. The app has been around since September 2021 and has been downloaded 50,000 times.
And Samsung smartphone owners are being urged to install the latest patches after the discovery of four critical vulnerabilities.
(The following is an edited transcript of one of the topics discussed. To hear the full conversation play the podcast)
Howard: Let’s start with a US$2.5 million data breach settlement between four U.S. states and a vision insurance benefits company called EyeMed Vision Care. In 2020 a hacker used a username and password to access a company email account shared by staff. That account held messages and attachments with personal information on 2.1 million subscribers. The data included names, dates of birth, full or partial Social Security numbers, medical diagnoses and other information. In addition to copying data, the attacker used the account to send 2,000 phishing messages to clients, messages that would look like they came from the company, trying to get their credentials. Several things stood out to me about this attack: First, nine employees violated company rules and shared the same username and password. Second, the company was in the process of implementing multifactor authentication but hadn’t put it on the email system before the attack. And third, the company had hired consultants to do risk assessments, but the email system wasn’t assessed.
Terry, what do you make of this?
Terry Cutler: There’s a lot to unpack here. The first part is about the nine employees who were ignoring the company rules. I still can’t believe they’re actually sharing usernames and passwords. That should be in the employee handbook. This is a no-no because if you share your username and password somebody else can log in as you. Now the responsibility will be on you to prove that it wasn’t you. The other thing is they [the hacker] sent out over 2,000 [phishing] emails. This is like living off the land. If they’re sending out emails from a legitimate company [account] no one’s going to question it. It’s not being spoofed. Everything comes back legit. It’s going to hit the inbox, and when people see it they’re going to click on links and then they’re going to reveal their password, or even get the company infected depending on what they clicked on. As for the part about doing external risk assessments but not including the email server, we’re seeing this more and more in our own penetration testing. The client company says, ‘Oh, we’re using Office365, we don’t need to have that assessed. Microsoft has it covered.’ But they don’t realize that Office365 out of the box is not secure. When we do the audits we’re going to find a ton of things, like multifactor authentication isn’t turned on or is inconsistently applied, or the password policy is set to never expire. Is it capable of receiving and sending out malicious email attachments? Because we’re going to try and send an email to the inbox and see if it lands in there or not. We’re also going to check for temporary exception rules. We’ll also look for third-party apps, for example LinkedIn contact synchronization. We’ll see if it [Office365] is able to share contacts out and maybe leak some personal information. We’ll check things like OneDrive for Business: Is it being used on unmanaged devices? Has external email auto-forwarding been enabled? These are all functions that are available in Office that might be totally misconfigured.
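(Editor’s note: One of the checks Terry mentions, external email auto-forwarding, can be spot-checked programmatically. The sketch below is illustrative only and is not part of the conversation: it queries the Microsoft Graph API for inbox rules that forward or redirect mail. It assumes an access token with the appropriate mailbox-read permissions has already been obtained through the tenant’s app registration; the token placeholder is hypothetical.)

```python
# Minimal sketch: flag mailboxes whose inbox rules forward or redirect mail,
# one of the Office365 misconfigurations discussed above.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-obtained-elsewhere>"  # placeholder only; acquire via your app registration
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def users():
    """Yield every user in the tenant, following Graph paging links."""
    url = f"{GRAPH}/users?$select=id,userPrincipalName"
    while url:
        page = requests.get(url, headers=HEADERS).json()
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")

def forwarding_rules(user_id):
    """Return inbox rules that forward or redirect a user's mail."""
    url = f"{GRAPH}/users/{user_id}/mailFolders/inbox/messageRules"
    rules = requests.get(url, headers=HEADERS).json().get("value", [])
    flagged = []
    for rule in rules:
        actions = rule.get("actions") or {}
        if actions.get("forwardTo") or actions.get("redirectTo"):
            flagged.append(rule)
    return flagged

if __name__ == "__main__":
    for user in users():
        hits = forwarding_rules(user["id"])
        if hits:
            print(f'{user["userPrincipalName"]}: {len(hits)} forwarding/redirect rule(s)')
```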
Howard: Let me take these one at a time. First of all, there’s obviously a failure in security awareness training if nine people ignore the password rule.
Terry: I think it’s because most employees probably are not tech-savvy.
Howard: Another thing — and you talked a bit about this: How do you do a risk assessment and not include email?
Terry: A lot of times when we do an assessment the client wants to exclude it because they think it’s covered by somebody else. That’s not so. We have to constantly educate them on why it’s not covered and it should be included in the risk assessment.
Howard: One other thing that I note: As part of the settlement the company had to agree to implement a written information security program, to regularly log and inspect network traffic and login attempts, to create an incident response plan and several other things. So one lesson to me is you don’t want to be publicly told by a regulator to do these things after an attack.
Terry: We do a lot of assessments. Some of our audits are called pathfinder assessments, where we see where you are now and where you need to be. And a lot of times they don’t have [incident response] playbooks set up. They have no idea about their incident response plan: Who do they contact, when and where? All that stuff is non-existent. And a lot of times they lack proper documentation. So if [the regulator] is saying, ‘You need to start implementing logging, create all these plans,’ et cetera, this requires expertise. And if you don’t have the budget, especially if you’re a not-for-profit, you’re not going to be able to hire expertise. That’s where outsourcing is going to come into play. But that costs a lot of money. And a lot of times companies will feel they don’t need to have this until it’s too late. For example, I’m dealing with a company right now that doesn’t even have a firewall. They just have the regular ISP modem and that’s it. They don’t feel that they need endpoint protection on their machines, or realize how much important information they have. They believe a cyber attack will never happen to them because they only have seven employees.