Definition

Red teaming refers to the practice of conducting realistic “blind” tests against a system. Such tests are blind in the sense that the operators of the system do not know that they are being tested, and realistic in the sense that the testers are free to do most or all of the things that actual terrorists might or could do in challenging the system.[1]

Overview

In a red team exercise, skilled outside experts plan and carry out surprise adversarial cyber attacks on an enterprise’s systems to find and exploit vulnerabilities and to reveal flaws in security planning, policies, and defenses. Unlike role playing or tabletop exercises, the “hostile adversaries” in red team exercises make every effort to outthink defenders and “win” by overcoming real cyber defenses and gaining access to actual systems, networks, and information. The attack phase of the exercise is followed by a thorough analysis of what transpired. Red teaming can be combined with, or inform, other types of assessment, such as assessments of risk, vulnerability, threat, consequence, system management, system security, accreditation, and certification.
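The attack phase described above typically begins with reconnaissance of the target. As a purely illustrative sketch (the host, port list, and TCP connect-scan approach are assumptions for the example, not a documented red-team methodology), the fragment below shows one early step a team might automate: checking which TCP services a host they are authorized to test exposes.

```python
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`.

    A simple "connect" scan: for each port, attempt a full TCP handshake
    and record the port if the connection succeeds.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex() returns 0 on success instead of raising
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Illustrative target: scan localhost for a few common service ports.
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

In practice, findings from such automated reconnaissance feed the post-exercise analysis, where open services are matched against what defenders believed was exposed.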

An effective red team exercise should challenge security assumptions and strategies, expose operational and technical weaknesses, and stimulate fresh thinking about an enterprise’s security posture. Red teaming has been applied for varied purposes, including: testing cyber defenses and response plans; improving the design and implementation of a system and its security throughout its life cycle; system calibration; generating likely adversary actions to obtain signatures and test detection capabilities; technical analysis of adversarial scenarios; observing the effects of various decisions and prioritizations on an adversary’s response; demonstrating a scenario involving real systems and operational constraints; and training.

Red teaming can be an effective tool for IT system engineering or for evaluating the security of complex systems through an increased understanding of component and system function and behavior. Red teaming can encompass globally distributed systems, numerous distributed organizations, a range of technologies, and the effects of interdependencies among systems.

Red teaming is useful for identifying technical system vulnerabilities and managerial oversights. In industry it may be used to assess the security of high-consequence targets such as those in a banking or financial infrastructure. However, much information about red-teaming methods has not yet been documented. Dedicated red teams often do not share their knowledge with other teams, and temporary red teams rarely have the resources to capture their own knowledge for re-use. There is no easy way to measure a red team’s capability and performance to determine its effectiveness.

References

1. Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Program Assessment, at 49 n.5.
