We often deal in a world of win or lose: Business deals, sports, wars.
The same is true for the world of infosec pros: Intruders are blocked, or they get in. That's why Red Team/Blue Team cyber security exercises are like wars: One side (the Red Team of penetration attackers) tries to make the other side (the Blue Team of incident response defenders) look bad.
Small wonder the two don’t like each other.
But there’s another approach: Making them work together in so-called Purple Teams.
In planning sessions before test exercises, the Red Team shares how attackers think and the techniques they use, while the Blue Team shares how defenders think and the techniques they rely on. The two can also work collaboratively in tabletop exercises ('If we were to do this, what would you do?' or 'What process do we have now for employees to get a temporary ID if they lose their tag?'). The approach can also work in a real exercise, where the Red Team reveals what it will be doing (for example, testing Web site weaknesses) and the Blue Team tries to see if its detection processes are up to the task.
In essence the Red Team creates training exercises for the Blue Team for continuous improvement.
“It doesn’t click with some hard-core Red Teamers,” admits Haydn Johnson, a Toronto-based consultant and security researcher who encourages CISOs to adopt the approach.
But, he argues, collaborating benefits the enterprise because the shared knowledge helps strengthen security defences. After an exercise, both teams sit down together to analyze what went right, what went wrong and what lessons were learned.
After all, he told a recent meeting of the Toronto Area Security Klatch (TASK), the real Red Teamers are the criminals attacking the organization. So the Purple Team can be considered the real Blue Team protecting it.
Among other benefits, the CISO can save the money that would otherwise be spent hiring an outside team of pen testers to practice against. Another is that recommendations from a unified team are more likely to get management buy-in.
Regardless, the goal is to emphasize collaboration within the security team.
Some organizations may call the combined squad a Green or White (or any other colour) team, Johnson notes, but the objective is the same.
Johnson was part of an exercise at a Canadian financial institution where the after-attack analysis with the Red and Blue teams exposed how the defenders failed to take an indicator of compromise seriously, and how, when they finally did, team members didn't communicate with each other at all. That led to changes in the institution's cyber security processes.
An Internet search can find a number of supporters of the concept. Chris Gates, who used to work at a major social media firm, describes it as “Putting more Offence in your Defence” and “More Defence in your Offence.”
In a blog post he noted that separate Red and Blue teams can lead to stagnation if the two concentrate on catching or defeating each other rather than innovating together to better defend their company.
“A key point in the understanding of Purple Teams is that it should be thought of as a function, or a concept, more than as a separate entity,” writes Daniel Miessler, a San Francisco information security practitioner. “This can come in the form of an actual, named team that performs this function, or it could be part of the Red/Blue teams’ management organization that ensures that the feedback loop between them is continuous and healthy.”
Making the Purple Team part of security management may be ideal, he added, so that it does not appear as if the Purple Team is a peer with the other two, or that the Purple Team is the only way the Red and Blue teams will communicate with each other.
“Don’t fall into the trap of allowing the blue team to do something they normally wouldn’t do during the test,” advises one pen testing company. “You want this to be as realistic as possible. Make note of what is happening and where your deficiencies are so you can remediate them properly.”
(Editor’s note: The next monthly meeting of TASK is Wed. April 26)