Artificial intelligence (AI) is expected to increase the global ransomware threat over the next two years, U.K. cyber chiefs have warned in a new report published today by the National Cyber Security Centre (NCSC).
The report, entitled The near-term impact of AI on the cyber threat, concludes that AI is already being used in malicious cyber activity and will almost certainly increase the volume and impact of cyber attacks – including ransomware – in the near term.
The NCSC is part of the Government Communications Headquarters (GCHQ), an intelligence and security organization that focuses on identifying, analyzing and disrupting cyber threats in the U.K.
Among other conclusions, the report suggests that by “lowering the barrier of entry to novice cyber criminals, hackers-for-hire and hacktivists, AI enables relatively unskilled threat actors to carry out more effective access and information-gathering operations. This enhanced access, combined with the improved targeting of victims afforded by AI, will contribute to the global ransomware threat in the next two years.
“Ransomware continues to be the most acute cyber threat facing U.K. organizations and businesses, with cyber criminals adapting their business models to gain efficiencies and maximize profits.”
Commenting on the findings, NCSC chief executive officer (CEO) Lindy Cameron said, “We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications for the cyber threat. The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.
“As the NCSC does all it can to ensure AI systems are secure-by-design, we urge organizations and individuals to follow our ransomware and cyber security hygiene advice to strengthen their defences and boost their resilience to cyber attacks.”
A release from the organization notes that The Bletchley Declaration, which was agreed on at the U.K.-hosted AI Safety Summit at Bletchley Park in November, also announced a first-of-its-kind global effort to manage the risks of frontier AI and ensure its safe and responsible development.
It goes on to say that analysis from the U.K.’s National Crime Agency (NCA) suggests that “cyber criminals have already started to develop criminal Generative AI (GenAI) and to offer ‘GenAI-as-a-service’, making improved capability available to anyone willing to pay. Yet, as the NCSC’s new report makes clear, the effectiveness of GenAI models will be constrained by both the quantity and quality of data on which they are trained.”
According to the NCA, “it is unlikely that in 2024 another method of cybercrime will replace ransomware due to the financial rewards and its established business model.”
James Babbage, director general for threats at the agency, said, “Ransomware continues to be a national security threat. As this report shows, the threat is likely to increase in the coming years due to advancements in AI and the exploitation of this technology by cyber criminals.
“AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed and effectiveness of existing attack methods. Fraud and child sexual abuse are also particularly likely to be affected.”
Meanwhile, authors of the NCSC report note that “while it is essential to focus on the risks posed by AI, we must also seize the substantial opportunities it presents to cyber defenders. For example, AI can improve the detection and triage of cyber attacks and identify malicious emails and phishing campaigns, ultimately making them easier to counteract.”