Around the end of November 2019, Florian Roth wrote a much-discussed post about problems he saw with today’s red teaming. I considered writing a blog post to share some of my own ideas and “respond” to his concerns. However, as is often the case with these types of things, I didn’t get to it at the time.
Just before the end of 2019, Dominic Chell shared his own thoughts, backed by a decade of red team experience and thus a valuable contribution to the discussion. In the meantime, I have read both blog posts multiple times and hope that this post of my own can provide some useful insights as well.
Even though Florian already added an update to his original statements, I’d still like to briefly share my view on the first problem, single point(s) of failure. In this section, Florian concludes that when the red team reaches the final goal, they only prove that they were able to find their own custom path through the various stages of the kill chain. The main criticism here is that this provides the blue team with limited value: just one way to reach the objective, which often does not even include the most probable techniques for each kill chain phase. As a result, applying countermeasures to those techniques only reduces the attack surface minimally.
While that is true, another aspect had been overlooked until Dominic mentioned it: an engagement where the red team obtains the majority of flags can bring a useful shock effect that helps prioritize security efforts. It can get the blue team the necessary buy-in to further reduce the attack surface, even if the red team only showed one way in. This also goes for organizations without a (proper) blue team. Showing that the attack succeeded while the organization remained completely blind to the entire kill chain can deliver an even bigger shock. In the end, management does not care about technical details when the business was compromised.
Problems 2 (focus on kill chain) and 3 (incomplete simulation) are tightly linked and thoroughly explained by Dominic. Personally, I’m more acquainted with TIBER than CBEST. This framework supports what Florian suggests:
- Study and use the toolset of well-known threat groups — TIBER aims to perform adversary emulation by mimicking the TTPs of relevant threat groups, based on the threat intel phase results.
- Apply different levels of clumsiness and, to achieve that, study APT reports provided by the threat intelligence community — The actual red team part is based on threat intelligence about the most probable APTs. While different levels of “clumsiness” are not explicitly required, multiple attack scenarios are defined, each emulating different TTPs and thus potentially exhibiting different levels of clumsiness during the kill chain phases.
- Set realistic goals, because a complete takeover of the top-level domain has never been the actual goal of any threat group out there — TIBER makes use of “CEFs”, the Critical Economic Functions, to define objectives: the systems and data underpinning the client’s business processes.
However, besides performing a pure adversary emulation and thus only mimicking a certain threat group’s TTPs, TIBER-NL and TIBER-BE also cover the type of testing that was criticized by Florian. The TIBER implementation guidance calls for the execution of:
- Scenarios based on threat intelligence, mimicking TTPs seen in the past and combining techniques of various relevant threat actors.
- A scenario X in which the RT provider is stretched to its absolute limits. This scenario enables a forward-looking perspective on the attacks. It could be beneficial to start scenario X once the RT provider has already infiltrated the network, since this would provide interesting leads.
To me, this opens the door to a more “classic” approach, where you don’t follow the rules set out by another threat group but instead simulate an attack where you can:
- Use your own (= the red team’s) favourite TTPs
- Use TTPs that you noticed often work against similar environments
- Use custom-made obfuscation, unique forms of evasion, etc.
- Try to be a better adversary than every other threat group out there and try to be everything Florian told you not to be
Continuing on Dominic’s post, I couldn’t have said it better:
“The output of the operation will typically demonstrate one or more attack paths to achieving the agreed objectives and highlighting what failures occurred along the way. Secondly, it provides the organisation with the opportunity to exercise their detection, prevention and response capabilities.”
A bit further on, he also (correctly) mentions that the red team exercise is not meant to provide breadth, as this can be more adequately addressed using a purple team approach.
Here is where I would write a conclusion, if I already had one. However, there are still some things I want to touch upon, such as the nomenclature issue, but this post has already grown larger than expected. I’ve used emulation, simulation, red team, purple team, and perhaps some other similar terms. During my talk at the SANS Pentest Hackfest in Berlin, I tried to make a distinction based on two properties. The slides are available here in SANS’ archives, but I’ll turn the content into another blog post for easier discussion, which will be posted soon.
About the author
Jonas is NVISO’s red team lead and thus involved in all red team exercises, either from a project management perspective (non-technical), for the execution of fieldwork (technical), or a combination of both.
Next to offensive assessments, he also likes to perform defensive work, allowing him to combine the red and blue points of view, to better understand how both sides operate and obtain expertise in the fields of adversary emulation and purple teaming.
You can find Jonas on LinkedIn