In light of the recent Blaster, Slammer and Sobig worm attacks, Microsoft Corp. last month announced it will add automatic update capabilities to its products – including Office, Exchange and SQL Server – as part of plans to overhaul its patch management architecture.
For example, those companies that successfully installed Microsoft's patch for the flaw exploited by Blaster, ahead of the worm's rampage, said they felt no ill effects from it.
The situation typifies the conundrum facing IT managers. Despite the time it takes to install and test individual patches (patch management has become a full-time job at many companies), not all users feel comfortable handing over the reins to an automated system, since an untested patch can leave systems unstable.
Tristan Goguen, president of Toronto-based ISP Internet Light and Power (ILAP), maintains that automated patch application is an “extremely dangerous practice.”
For example, some patches reset network settings to their defaults when they are installed, said Goguen. He added: “Your printer drivers might be overlaid with the latest drivers, which could interfere with custom printing applications. Or your servers may just stop communicating.”
The issue then becomes one of timing: resolving the resulting problem could take five minutes or more than an hour, depending on what kind of damage the automated patching has caused.
“From our perspective, the issue [around automated patch management] is not the labour savings; the issue is the damage that we cause to our clients when patching systems fail,” he said.
For its mission-critical equipment, ILAP’s solution is to review and apply each patch on its backup or redundant servers and test the system for a day or so before installing the patch on its primary production servers. “We recognize that this is labour-intensive, but in reality we cannot afford to cause downtime for our clients,” Goguen said.
Update services are not entirely a bad thing, Goguen added: automated patching could be used in environments with large inventories of equipment. “To distribute these (patches) to a large number of PCs, we would first test the patch on one or two machines, and once we were satisfied (that they’re safe), we’d apply the patches to the PCs automatically, but still apply them on a one-by-one basis on our servers.”
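Goguen's approach amounts to a canary-style rollout: patch one or two test machines, let them soak for a day or so, and only then push the patch to the wider fleet while servers are still handled by hand. The sketch below illustrates that sequence in Python; it is a minimal illustration, not ILAP's actual tooling, and the host names, the deploy-patch command and the MS03-026 example are assumptions standing in for whatever inventory and distribution mechanism a given shop actually uses.

```python
# Minimal sketch of a canary-then-fleet patch rollout, assuming a hypothetical
# "deploy-patch" command line tool and made-up host names; substitute the real
# deployment mechanism used in your environment.
import subprocess
import sys
import time

CANARY_HOSTS = ["pc-test-01", "pc-test-02"]            # one or two test machines
FLEET_HOSTS = [f"pc-{i:03d}" for i in range(3, 200)]   # remaining workstations


def apply_patch(host: str, patch_id: str) -> bool:
    """Push a single patch to a single host; returns True on success.

    Placeholder only: swap in the actual distribution command or API
    your organization uses.
    """
    result = subprocess.run(
        ["deploy-patch", "--host", host, "--patch", patch_id],
        capture_output=True,
    )
    return result.returncode == 0


def staged_rollout(patch_id: str, soak_hours: float = 24.0) -> None:
    # Step 1: patch the canary machines first and stop if anything fails.
    for host in CANARY_HOSTS:
        if not apply_patch(host, patch_id):
            sys.exit(f"Canary {host} failed; aborting rollout of {patch_id}")

    # Step 2: let the canaries soak for "a day or so" before going wider.
    time.sleep(soak_hours * 3600)

    # Step 3: only then push the patch to the rest of the PCs automatically.
    # Servers, per Goguen, would still be patched one by one, by hand.
    for host in FLEET_HOSTS:
        if not apply_patch(host, patch_id):
            print(f"Warning: {host} did not take the patch", file=sys.stderr)


if __name__ == "__main__":
    staged_rollout("MS03-026")  # the bulletin for the flaw Blaster exploited
```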
One thing is for certain: patching duties can’t be ignored. “If you do, you’re really playing with fire,” Goguen said.
Another problem with patching is the urgency with which it must be done.
“The thing about patching is that it is so darn reactive. And that can kill you,” said Dave Jahne, a senior security analyst at Phoenix-based Banner Health System, which runs 22 hospitals.
“You need to literally drop everything else to go take care of (patching). And the reality is, we only have a finite amount of resources” to do that, Jahne said, adding that Banner had to patch more than 500 servers and 8,000 workstations to protect itself against the vulnerability that Blaster exploited.
Eric Kwon, CEO of antivirus software publisher Global Hauri in San Jose, said another problem with patches is that a shutdown and reboot is required after installation. “When you get into a corporate environment, downtime is critical and you can end up losing money, but installing patches means you have to shut down mission-critical applications,” Kwon explained.
Microsoft said it wants to help alleviate that pain. “We are looking at a range of options to get critical updates on more systems, from finding ways to encourage more people to keep their systems up to date themselves to where it is done automatically by default for certain users,” said Matt Pilla, senior product manager for Windows at Microsoft in Redmond, Wash.