
The bot traffic dilemma

Bots are usually bad news, which is why some CSOs ensure their organizations’ Web sites block them.

That approach carries a risk, a study by a security vendor has found, although not in the way most security pros might think.

Blocking bots also means blocking Googlebots, which crawl the Web so that Google can index content accurately. So IT professionals set firewall rules that allow Googlebots onto their sites. Hackers know that, however, and have crafted malicious bots that masquerade as Googlebots to find and exploit browser, server and other vulnerabilities.
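The study doesn’t spell out the mechanics, but masquerading as Googlebot usually starts with nothing more than copying its User-Agent string, which any HTTP client can send. A minimal sketch, with a placeholder target URL (example.com) that is not from the study:

```python
# Minimal illustration (not from the study): the User-Agent header is
# self-reported, so any client can claim to be Googlebot.
import urllib.request

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

req = urllib.request.Request(
    "http://example.com/",  # placeholder target
    headers={"User-Agent": GOOGLEBOT_UA},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, len(resp.read()))

# A firewall rule that admits traffic solely because of this header will
# treat the request above as a legitimate Googlebot visit.
```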

The dilemma, as outlined by Igal Zeifman, product evangelist at Incapsula, which offers cloud-based Web site security, is that if you block bots to protect against fake bots, you also block legitimate Googlebots.

“It’s a no-good-options scenario,” he said in an interview.

Incapsula released a study of Googlebot activity Thursday that covered 400 million searches and visits to 2.9 billion pages. It showed that, on average, Googlebots drop by a Web site 187 times a day, and that for every 24 of those visits a site will also be hit by a fake Googlebot.

Over 23 per cent of those impostors are used for distributed denial of service (DDoS) attacks. Almost 11 per cent are malicious screen scrapers, spammers and scanners.

Bad bots come mainly from the U.S. (25 per cent), followed by China (15.6 per cent), Turkey (14.7 per cent), Brazil (13.5 per cent) and India (8.4 per cent).

One problem, Zeifman said, is that legitimate Googlebots can come from the U.S., Britain, China, Denmark, Belgium and France, so blocking bots from those countries doesn’t help.
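One way around both traps, country blocking and user-agent filtering, is the verification that Google itself documents: reverse-resolve the visiting IP address, check that the hostname belongs to googlebot.com or google.com, then forward-resolve that hostname and confirm it maps back to the same IP. A rough sketch, where the sample IP is illustrative only and not taken from the study:

```python
# A sketch of the reverse-then-forward DNS check Google documents for
# verifying Googlebot. The sample IP below is illustrative only.
import socket

def is_real_googlebot(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)       # reverse DNS lookup
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False                                # hostname isn't Google's
    try:
        resolved = socket.gethostbyname(host)       # forward DNS lookup
    except OSError:
        return False
    return resolved == ip                           # must round-trip to the same IP

print(is_real_googlebot("66.249.66.1"))  # an address in a published Googlebot range; check your own logs
```

Because DNS lookups add latency, a check like this is normally run once per IP and cached rather than performed inline on every request.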

“I don’t think that many realize that not all Googlebots are who they say they are,” he said. “There are few services that show inside bot traffic.”
