Over the summer some ambitious hackers initiated a distributed denial of service attack that brought down the Web site of a California-based security company.
Their own Web site.
As demonstrated Wednesday at the annual SecTor security conference in Toronto, the hackers were staff of White Hat Security Ltd. who wanted to make a point: An attacker can spend as little as $50 placing an ad that includes malicious JavaScript on an online ad network that generates millions of hits.
That ad, Matt Johnson, manager of White Hat’s threat research, told the conference, could trigger anything — password cracking, brute force attacks or hacking the browser of anyone who clicked on the ad.
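To make the danger concrete, here is a minimal sketch, in TypeScript, of the kind of request-generating script an ad slot could smuggle into visitors' browsers. The target URL and request cap are hypothetical placeholders, and as in the White Hat demonstration, such a script should only ever be pointed at infrastructure you own.

```typescript
// Minimal sketch of a browser-delivered load generator, as an ad slot might carry.
// HYPOTHETICAL target: only ever point this at a host you own, as White Hat did.
const TARGET = "https://test-target.example.com/health"; // placeholder URL
const MAX_REQUESTS = 20;                                 // keep the demo tiny

let sent = 0;
const timer = setInterval(() => {
  if (sent >= MAX_REQUESTS) {
    clearInterval(timer);
    return;
  }
  // Each viewer's browser fires its own requests; multiplied across an ad
  // network's millions of impressions, even a trickle per visitor adds up.
  void fetch(TARGET, { mode: "no-cors", cache: "no-store" }).catch(() => {
    // Errors don't matter to the attacker; the load has already been generated.
  });
  sent++;
}, 500);
```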
It’s been said that you have to know your enemy to defeat him (or her), which is why IT security staff and software coders explore the limits of software to learn where the holes are.
The practice is sometimes called ethical hacking, and a conference panel earlier in the day explored whether legal, ethical and moral lines have to be crossed to be a skilled IT security professional.
Panelist James Arlen, a senior security advisor based in Hamilton, Ont., with Leviathan Security Group, pointed out in an interview that skills development in the industry can include training you can’t get in school or on the job.
Some feel “to be an excellent penetration tester or things like that you have to break the law to learn that you actually understand what you’re doing.”
The problem is, he said, the lines governing legal, ethical and moral behavior “are rubbery and have changed over time.”
If you discover a flaw in a company’s IT product, how soon do you have to notify it? Do you have to notify the company first, or should you take it to the media? Should you go to the media (or your blog or Twitter) if the company seemingly doesn’t want to do anything about the flaw? How long should you wait before giving up on the company?
If you can get inside an organization’s network and wander around but not copy or alter any data or security settings, is that illegal, unethical, immoral?
These are the types of questions an ethical hacker faces, Arlen said.
Compounding these questions is the fact that it's hard to use a computer without violating some law, Arlen said.
For example, he said, in the U.S. Andrew Auernheimer, a member of a group of computer experts called Goatse Security, legally played around with a URL on an AT&T Web site, as anyone can in any browser. But Auernheimer, who used the online name Weev, discovered he could access the email addresses of over 100,000 AT&T subscribers who were iPad users.
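The underlying flaw belongs to a well-known class, often called an insecure direct object reference: a guessable identifier in a URL maps straight to another user's record with no ownership check. Below is a minimal sketch of the pattern and its fix; the names (accounts, lookupEmail, ICCID keys) are hypothetical stand-ins, not AT&T's actual code.

```typescript
// Hypothetical in-memory store standing in for a real account database.
const accounts = new Map<string, { ownerId: string; email: string }>([
  ["ICCID-0001", { ownerId: "u1", email: "alice@example.com" }],
  ["ICCID-0002", { ownerId: "u2", email: "bob@example.com" }],
]);

// Vulnerable pattern: any caller who can guess or enumerate an ID gets the record.
function lookupEmailInsecure(iccid: string): string | undefined {
  return accounts.get(iccid)?.email;
}

// Safer pattern: the record is only returned to its authenticated owner.
function lookupEmail(iccid: string, authenticatedUserId: string): string | undefined {
  const record = accounts.get(iccid);
  if (!record || record.ownerId !== authenticatedUserId) return undefined;
  return record.email;
}
```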
Goatse revealed the flaw to Gawker Media before AT&T was notified, and the subscribers' data was also made public.
Last November he was convicted of identity fraud and conspiracy to access a computer without authorization, and fined and sentenced to 41 months in prison.
That’s an example of crossing the legal line, Arlen said.
Ethical standards are set by testing organizations like the ISC2 (International Information Systems Security Certification Consortium), which gives credentials to IT professionals who pass courses. But, Arlen said, some standards leave a lot to be desired.
One forbids professionals from consorting with hackers, meaning criminal hackers. But, Arlen said, potentially anyone who attends a conference like Defcon could be violating ethical standards.
The spirit of the standard is right, he said, but sometimes people bend the rule.
As for moral questions, they fall under "is this the right thing to do?" Sometimes hackers do illegal (or close to illegal) things in discovering a vulnerability, Arlen said. Then what do you do with the information? Call the company, or broadcast the flaw?
Unfortunately, he said, some companies immediately sue whistle-blowers, even if the vulnerability isn’t made public.
But morally, if the flaw affects public health or safety, how long can the hacker remain silent?
In any situation a hacker or researcher might face the prospect of being ethically obliged to notify a company, morally obliged to go public if the flaw is critical but the company won't fix it (or is slow to do so), and legally exposed to being sued or criminally charged for doing so.
The problem is, Arlen said, to find vulnerabilities do you “hang out with the bad guys?”
“To do a good job as a defender you have to understand how they think,” he said.
But how? Computer schools teach a lot, he said, but, for understandable reasons, not how to do certain things. There are security education conferences like SecTor. And, he added, there's talking to people in the hallways at those conferences.
Panelist Gord Taylor pointed out there’s another alternative: Set up a server and hack it yourself, or, with consent, hack a friend’s server.
“To be a good security practitioner you don’t need to do evil things to learn,” he said in an interview. “You can learn in a classroom, you can learn from your peers.”
“We see ethical dilemmas almost daily outside IT,” he said. But, he added, often there’s a little voice in your head asking “is this ethical?”
“If you’re asking yourself if it’s ethical, it probably isn’t.”
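Taylor's do-it-yourself target can be as modest as a throwaway service on your own machine. A minimal sketch, assuming Node.js and a deliberately naive endpoint to practice against (the route, port and query parameter are arbitrary choices for illustration):

```typescript
import { createServer } from "node:http";

// A throwaway practice target: deliberately naive, bound to localhost only,
// so any probing stays on a machine you own.
const server = createServer((req, res) => {
  // Naive "authorization" that trusts a query parameter -- something to find and fix.
  const url = new URL(req.url ?? "/", "http://127.0.0.1");
  if (url.pathname === "/admin" && url.searchParams.get("role") === "admin") {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("admin panel\n");
    return;
  }
  res.writeHead(403, { "Content-Type": "text/plain" });
  res.end("forbidden\n");
});

server.listen(8080, "127.0.0.1", () => {
  console.log("practice target on http://127.0.0.1:8080");
});
```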
As for Matt Johnson and his ad network test, he has no doubt what he did was ethical, because the only thing attacked was his own company's Web site.
The lesson for ad networks, he said, is that they shouldn't allow JavaScript in ads. Most don't, but some still do.
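Publishers that must run third-party creatives can also contain them on their own side. One option, sketched below under the assumption that the creative arrives as a URL, is to drop it into an iframe whose sandbox attribute omits allow-scripts, so any script the creative carries simply will not execute; the element ID and ad URL are hypothetical placeholders.

```typescript
// Embed an ad creative so that any script it contains cannot run.
// "ad-slot" and the creative URL are hypothetical placeholders.
const container = document.getElementById("ad-slot");
if (container) {
  const frame = document.createElement("iframe");
  frame.src = "https://ads.example.com/creative.html"; // placeholder creative URL
  // An empty sandbox list applies all restrictions (no scripts, forms, plugins,
  // or same-origin access); add back only what the creative genuinely needs,
  // and never allow-scripts for untrusted ad markup.
  frame.setAttribute("sandbox", "");
  frame.width = "300";
  frame.height = "250";
  container.appendChild(frame);
}
```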