Alert • Alert Observables • Alert Triage • Attack Map • Attack Scenario • Attack Scenario Elements • Case File • Cyber Incident • Cyber Loss Occurrence • Cyber Risk Elements • Detection Control • Enriched Alert • Event • Event Detection System • Impact Curve • Incident • Incident Management Process • Incident Recognition • Intruder Hunting • Investigation • KPI or Key Performance Indicator and KRI or Key Risk Indicator • Playbook • Preventive Control • Response Control • (Actual) Response Time • Response Window • Runbook • Scenario Kit • SCA or Service Capacity Agreements • Security Event • SLA or Service Level Agreements • Threat Capability • Threat Hunting • Threat Potential • Threat Technique • Use Case • Use Case Factory • Vulnerability
Alert
Definition: One or more events that correlate to a programmed alarm rule within a SIEM or other security management platform. Alerts are typically created through programmatic correlation logic within a SIEM. In the logical flow, events are correlated to create Alerts. Alerts are then Investigated to render either a False Positive or an Incident, and Incidents are then resolved through the Incident Response Process. Note that prior to being transitioned to an Investigation (or in some cases, after transition to Investigation), Alerts can be Enriched by queries to additional Event sources, non-event sources (such as system logs, Threat Intelligence Services, AI Systems, Simulated Attack and Breach systems, Attack Surface Analysis systems, and Vulnerability Scanning systems), and historical data in a Data Lake.
Synonyms: Alarm; sometimes referred to as Investigations (although alerts are technically the trigger for the investigation, not the investigation itself); often improperly equated with Incidents.
Alert Observables
Definition: These are observed contextual elements of an Alert that can be used for formulating queries to other information sources in order to enrich the Alert.
The following are a few examples:
- The event’s IP address: The IP Address of an event reporting system is observed in the Alert and can be used to search other information systems that have logged related data for the system with that assigned IP address. This additional data serves to enrich the alert with supplemental context and aids in evaluating the Alert.
- The time of the activity that generated the Alert: This is an observable that can be used to search for other potentially related actions that occurred within a bounded time period both before and after the time of the Alert Event.
Synonyms: Event elements, event contextual data
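As a concrete sketch, the observables above can be turned into enrichment queries programmatically. The function name, field names, and source names below are hypothetical; a real SOC platform would supply its own query API.

```python
from datetime import datetime, timedelta

def build_enrichment_queries(alert, window_minutes=30):
    """Derive enrichment queries from an alert's observables.

    Each query asks a supplemental source for records tied to the
    same IP address within a bounded window around the alert time.
    """
    start = alert["time"] - timedelta(minutes=window_minutes)
    end = alert["time"] + timedelta(minutes=window_minutes)
    # Hypothetical source names; real deployments would map these to
    # system logs, Threat Intelligence services, or a Data Lake.
    sources = ["system_logs", "threat_intel", "data_lake"]
    return [
        {"source": s, "ip": alert["src_ip"], "from": start, "to": end}
        for s in sources
    ]

alert = {"src_ip": "10.0.0.5", "time": datetime(2024, 1, 1, 12, 0)}
queries = build_enrichment_queries(alert)
```

Each resulting query dictionary pairs one enrichment source with the IP observable and the bounded time window described above.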
Alert Triage
Definition: This is the process of receiving a raw alert from a SIEM and conducting any required Alert Enrichment and investigation to determine if the alert should be escalated to an Investigation for further review by Level 2 SOC staff or the customer, or closed as a False Positive.
Attack Map
Definition: The probability map of the potential steps in an attack scenario for an attacker to achieve the anticipated objective. This concept is depicted below.
Attack Scenario
Definition: This represents the outcome of an attack, or the attacker’s desired outcome state for a specific Asset or set of Assets. While this outcome maps to the MITRE ATT&CK Matrix column titled Impact, it must be noted that an Attack Scenario describes a specific attack against a specific Asset or set of Assets and can therefore be mapped directly to a loss valuation. Further, as Attack Scenarios are human-driven, they may have more than one outcome state, and may shift in desired outcome at any point in the attack. Attack Scenarios are part of the Use Case Development Cycle, as depicted below, in a traditional Risk Management process. This process includes the following:
- Start with the input of all assets.
- Filter that to Critical and High Value Assets.
- Develop a list of scenarios that result in significant loss (data loss, integrity compromise, regulatory failure, etc.), and prioritize based on damage to the organization.
- Screen or validate against threat intelligence information, threat potential (do the tools exist to conduct the attack), and preventive controls in place, and refine to a list of likely attacks for use in the playbook development cycle.
Attack Scenario Elements
Definition: These map directly to the MITRE ATT&CK Matrix Column titles of Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, and Impact. While not all Elements will be present in every Attack Scenario, some combination of these must be present for the attack to be successful. For example, there must be some form of Initial Access, Execution, and Impact, at a minimum.
Synonyms: Cyber Kill Chain
Case File
Definition: One or more Alerts, Investigations, or Incidents that are in some way related.
Cyber Incident
Definition: In the context of Cyber Security, an Incident represents a confirmed malicious action by a Threat Actor. Logically, an event or set of correlated events can trigger an Alert, indicating that there is suspicious activity that could represent the malicious activities of a threat actor. When an Alert is Investigated, it can be de-escalated to a False Positive, or escalated to an Incident. Once escalated, the Incident can be resolved through the Incident Management Process.
Cyber Loss Occurrence
Definition: The loss resulting from an Attack Scenario.
Synonyms: Impact in the MITRE ATT&CK Matrix
Cyber Risk Elements
Definition: The factors that influence Cyber Risk – Threat Capability, Actor Intent, Preventive Control Strength (inversely, Vulnerability), Detection Control Strength, and Response Control Strength. For each attack scenario, these five factors can be used to calculate the probability of attack scenario success, or risk.
Synonyms: Risk Factors
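The way the five elements combine is not standardized; the toy model below is one illustrative way to express them as probabilities purely to make the relationships concrete, and its weighting is an assumption rather than an established formula.

```python
def scenario_risk(threat_capability, actor_intent,
                  preventive, detection, response):
    """Illustrative combination of the five Cyber Risk Elements.

    All inputs are probabilities in [0, 1]. Control strengths reduce
    the chance of success; this toy model treats the three controls
    as independent barriers, each of which must fail for the attack
    scenario to succeed. The model is an assumption for illustration.
    """
    attack_likelihood = threat_capability * actor_intent
    barrier = 1.0
    for control in (preventive, detection, response):
        barrier *= (1.0 - control)  # probability this barrier fails
    return attack_likelihood * barrier

# Strong controls drive residual risk down even for capable, motivated actors.
risk = scenario_risk(0.9, 0.8, preventive=0.7, detection=0.6, response=0.5)
```

With the sample values, a highly capable and motivated actor still faces a low probability of scenario success because each successive control layer must be defeated.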
Detection Control
Definition: A control (access logging, network monitoring, AAA logging, anomaly detection, malware detection, config monitoring, etc.) designed to detect a potentially malicious action within a given environment. Typically, these detection capabilities map to a specific Threat Technique. However, in the case of anomaly detection, they reflect only the fact that something out of the normal occurred. For each Attack Type, and each step (Threat Technique) within the Kill Chain of the attack, there may be Detective Controls that would alert the organization that a Threat Agent is actively engaged in an Attack Process. It should be noted that Detection is functionally an input to Response. Obviously, no person or system can respond to that which was not first detected.
Enriched Alert
Definition: These are Alerts that have been Enriched by the addition of context information typically derived through queries to supplemental Event sources, non-event sources (such as system logs, Threat Intelligence Services, AI Systems, Simulated Attack and Breach systems, Attack Surface Analysis systems, and Vulnerability Scanning systems), and historical data in a Data Lake.
Event
Definition: A human, end-system, or network security-related activity that is identified and recorded in some way.
Event Detection System
Definition: Any system, appliance, device, or software that can generate a security record based on some user, external system, or network activity that can be forwarded to a SIEM. Operating systems, applications, and dedicated security devices can all serve as event detection systems. The method of detection can be content, condition, activity (or change), or anomaly based.
Impact Curve
Definition: The Impact Curve represents the amount of Loss or Damage over time from the point in time where an Attack Scenario was potentially detectable to the point where the attack would have caused its maximum potential Loss or Damage had it gone completely unchecked. See the following graphic for an explanation of the relationships between Impact Curve (IC), Response Window (RW), and Actual Response Time (ART).
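A minimal sketch of the relationship between the Impact Curve and the Actual Response Time, assuming a linear curve and illustrative dollar figures purely for demonstration:

```python
def loss_at_response(impact_curve, art, max_loss):
    """Loss incurred when an attack is halted at the Actual Response Time.

    `impact_curve` maps elapsed hours since the attack became
    detectable to cumulative loss; `art` is the Actual Response Time
    in hours; `max_loss` is the maximum potential loss if the attack
    goes completely unchecked.
    """
    return min(impact_curve(art), max_loss)

# Assumed linear impact of $10k of loss per hour, capped at $100k.
curve = lambda hours: 10_000 * hours
loss = loss_at_response(curve, art=4, max_loss=100_000)
# Responding before the curve reaches maximum loss means the ART
# fell inside the Response Window.
within_window = loss < 100_000
```

Halting the attack four hours in caps the loss at a fraction of the maximum; letting the ART exceed the Response Window would allow the curve to reach its cap.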
Incident
Definition: In the context of Cyber Security, an Incident represents a confirmed malicious action by a Threat Actor. Logically, an event or set of correlated events can trigger an Alert, indicating that there is suspicious activity that could represent the malicious activities of a threat actor. When an Alert is Investigated, it can be de-escalated to a False Positive, or escalated to an Incident. Once escalated, the Incident can be resolved through the Incident Management Process.
Incident Management Process
Definition: Incident Management defines the workflow for handling detected Events. This process is depicted in the diagram below and includes Event Correlation and Processing, Alert Creation, Alert Enrichment, Alert Triage, Investigation Analyst Review, and Incident Response.
Synonyms: Incident Handling
Incident Recognition
Definition: The act of identifying the cause of an Investigation to be a confirmed malicious activity by a Threat Actor. Incident Detection is therefore the escalation of an Investigation to Incident status. Note that all Incidents begin as one or more Events that are correlated into an Alert, which is then escalated to an Investigation, and then further escalated to an Incident.
Synonyms: Incident Detection
Intruder Hunting
Definition: Intruder Hunting is a process for aggressive intruder detection and eviction, focused on specific targeted assets. Threat Hunting is often confused with Intruder Hunting. For example, performing detective work to identify anyone considering committing a burglary in your neighborhood would be Threat Hunting, while setting up detection sensors and traps within your home is Intruder Hunting. Threat Hunting therefore is generally external (looking for threats against the organization), while Intruder Hunting is internal (looking for threats that have potentially breached the organization’s defenses).
Investigation
Definition: In the context of Cyber Security, an Investigation represents a probable malicious attack by a Threat Actor. Logically, an event or set of correlated events can trigger an Alert, indicating that there is suspicious activity that could represent the malicious activities of a threat actor. When an Alert is Triaged, a preliminary determination is made that the Alert is either a false positive, or a probable malicious action that warrants further investigation. If so, the Alert is transitioned to Investigation status and is escalated to a Level 2 SOC analyst for further review. During the SOC Level 2 review, the Investigation can be de-escalated to a False Positive, or escalated to an Incident. Once escalated, the Incident can be resolved through the Incident Management Process.
Synonyms: Escalated Alert, Probable Attack
KPI or Key Performance Indicator and KRI or Key Risk Indicator
Definition (In the context of a Security Service): A KPI is intended to quantify the Effectiveness or Efficiency of a service. KPIs tend to measure Accuracy, Capacity, Time to Complete (an activity), and Improvement of Performance. While it is appropriate to pre-define performance baselines that will set the expectations for performance, KPIs are most useful and applicable when used in the context of trending over time. In this regard, KPIs should have two thresholds – the first being the expected level of performance, and the second being a lower bound that serves as an indicator of a potential issue, or KRI (Key Risk Indicator). By leveraging this multi-threshold approach to performance measurement, separate sets of measurement do not need to be maintained and tracked for KPIs and KRIs, since they are simply different performance boundaries within the same metric.
Synonyms: Performance Metrics
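The two-threshold approach described above can be sketched as a simple classifier. The metric name and threshold values below are illustrative assumptions, not prescribed baselines.

```python
def evaluate_kpi(value, expected, kri_floor):
    """Classify a KPI sample against its two thresholds.

    `expected` is the agreed level of performance; `kri_floor` is the
    lower bound below which the same metric becomes a Key Risk
    Indicator. One metric serves both KPI and KRI purposes, so no
    separate measurement set is needed.
    """
    if value >= expected:
        return "meets expectation"
    if value >= kri_floor:
        return "below expectation"
    return "KRI breach"

# Example: alert-triage accuracy, expected at 95%, KRI floor at 85%.
status = evaluate_kpi(0.88, expected=0.95, kri_floor=0.85)
```

A sample between the two thresholds signals degraded performance worth trending, while a sample below the floor flags a potential risk condition.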
Playbook
Definition: A process flow documented within the CyberProof Defense Center that defines the steps involved in the response actions for a specific Alert or Alert Type. Some or all of the process may be automated. This is similar to a Runbook in that it is also a process flow; but it is not the same in that a runbook is a written document that tells the customer how to respond to a scenario and interact with the SOC.
Synonyms: Response Script, Automation Script
Preventive Control
Definition: A control (software patch, network filter, authentication process, access control, etc.) designed to prevent a specific Threat Technique. For each Attack Type, and each step within the Kill Chain (Threat Technique) of the attack, there may be Preventative Controls that would Prevent the Threat Agent from successfully completing that defined step in the Attack Process.
Response Control
Definition: This control represents the loss mitigation measures that can be put in place to halt or disrupt an attack and terminate the line of the Impact Curve. Typically, this is done by altering the course of the attack, engaging secondary Preventative Controls to prevent the next Threat Technique or step in the attack, or limiting the impact and extent of loss through some form of loss recovery (backups, for example).
Synonyms: Incident Response Action
(Actual) Response Time
Definition: This is the time delay from initial Event Detection, through correlation, Alerting, Investigation, Incident Creation, until an Incident is resolved, and loss mitigation measures are put in place to halt the attack and terminate the line of the Impact Curve.
Synonyms: None, but the term is frequently misused.
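A minimal sketch of how the Actual Response Time accumulates across the stages named in the definition, with illustrative stage durations:

```python
# Actual Response Time as the sum of delays across the handling
# stages: detection, correlation/alerting, triage, investigation,
# and incident resolution. All durations (in minutes) are invented
# for illustration.
stages = {
    "detection_to_alert": 5,
    "alert_enrichment": 10,
    "triage": 15,
    "investigation": 45,
    "incident_resolution": 120,
}
actual_response_time = sum(stages.values())
```

Because the ART is a sum over every stage, shortening any single stage (often triage or investigation, via automation) reduces the point on the Impact Curve at which the attack is halted.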
Response Window
Definition: This is the time period starting at attack inception or initiation until an Attack Scenario results in unacceptable loss or damage to the organization.
Runbook
Definition: A document that details the customer interaction processes with the CyberProof SOC for a specific scenario or set of scenarios. The document focuses on the steps each organization will take to collaborate and defines what actions are expected from all participants. It differs from a Playbook, which defines incident response processes that may be partially or completely automated.
Scenario Kit
Definition: The Scenario Kit is the product of the Scenario Development Cycle. Each approved Attack Scenario will result in a Scenario Kit. This cycle includes the development of an Attack Scenario (including the Risk Analysis Report associated with the scenario), the Scenario Element Enumeration Process, the Technique Probability Analysis, Evidence Prediction, Prevention and Detection Evaluation, Detection Source Enumeration, Alert and Alert Enrichment Source Identification, Controls Testing, Playbook Creation, Playbook Testing, and the Operational Implementation Plan. Therefore, the Kit includes all documentation, code, algorithms, and automation resulting from the execution of each of the process steps identified above. The process flow below depicts Scenario Kit development after the creation of the Attack Scenario.
SCA or Service Capacity Agreements
Definition: An SCA defines the capacity limits of a service to be provided. These capacity bounds are required for meaningful SLAs (see SLA below), in that they indicate the quantities of service to which the SLAs are applicable.
Security Event
Definition: A human, end-system, or network security-related activity that may be identified and recorded via a logging mechanism.
SLA or Service Level Agreements
Definition (In the context of a Security Service): An SLA is used to measure the conduct of a service provider in the execution of a service contract.
This includes what was done in the execution of duties as well as the time that is expected for the completion of specific tasks within the context of an agreed capacity.
Therefore, SLAs generally define “What” service is to be provided and “When” or “How much Time” it will take to provide a service, whereas the Service Capacity Agreement defines “How Much” of the service is provided.
For example, if the time to process an alert or investigation is quantified, this time requirement must be linked to a capacity of the service such that it is applicable only up to a specific number of alerts in a given period of time.
In other words, this time requirement must be limited by an agreed capacity of the service. These capacities or at least SLA application boundaries are typically defined in the Service Capacity Agreement.
For example, if the agreed service capacity is 20 alerts per hour, then there can be an SLA in place that states each alert is processed within 30 minutes for up to 20 Alerts per hour.
If an unexpected and wide-ranging attack generates 500 alerts within an hour, the SLA can only apply to 20 of those alerts given that 20 alerts per hour is the agreed service capacity.
It is critical that all time-bound SLAs are mapped to service capacity as SLAs have financial penalties for the provider, and must therefore represent actions within the provider’s control.
As an additional example in the application of context around an SLA, consider an SLA that provides a 2-hour response time to implement countermeasures to a detected attack.
While this may be a reasonable expectation for a known attack or malware variant, it may not be reasonable for a new attack type or never-before-seen malware, where extensive effort may be required to reverse engineer the attack to determine an effective countermeasure.
Note that while SLAs are a good tool for ensuring standards of practice, KPIs are far better for managing performance and performance improvement.
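The capacity-bounded SLA logic above can be expressed in a few lines; the numbers mirror the 20-alerts-per-hour example in the text.

```python
def sla_applicable_alerts(alerts_received, capacity_per_hour):
    """Number of alerts in one hour to which a time-bound SLA applies.

    With an agreed capacity of 20 alerts per hour, an attack that
    generates 500 alerts in an hour still leaves the SLA applicable
    to only 20 of them; the rest fall outside the Service Capacity
    Agreement's bounds.
    """
    return min(alerts_received, capacity_per_hour)

covered = sla_applicable_alerts(500, capacity_per_hour=20)
```

Tying the SLA to the SCA in this way keeps the provider's financial exposure limited to actions within its agreed capacity, as the definition requires.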
Threat Capability
Definition: Threat Capability is a measure of the probability that an adversary has the ability and interest to launch a damaging attack against an organization.
Synonyms: Adversary Capability
Threat Hunting
Definition: This is the use of Threat Intelligence gathering techniques to identify potential Threat Actors, Threat Techniques, Exploits, Attack Campaigns, and geopolitical situations that may pose a threat to the target organization.
Threat Potential
Definition: Threat Potential can be represented as Threat Capability (TC) minus Control Strength (CS) for a given Threat Technique (TTn) or TP(TTn) = TC(TTn) – CS(TTn). A complete mapping of Preventative Controls to known Threat Techniques identifies which attacks have the potential for success, and links specific Preventative Control Gaps to defined Risks.
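The formula can be sketched directly in code. The 0-10 scoring scale, the flooring at zero, and the technique names and scores below are assumptions added for illustration.

```python
def threat_potential(tc, cs):
    """TP(TTn) = TC(TTn) - CS(TTn), floored at zero.

    Scores are on an assumed 0-10 scale; a non-positive result means
    the Preventive Control fully covers the Threat Technique, so no
    residual potential remains.
    """
    return max(tc - cs, 0)

# Map controls to techniques and surface the gaps.
# (technique: (threat capability, control strength), scores invented)
techniques = {"spearphishing": (8, 5), "pass_the_hash": (6, 7)}
gaps = {t: threat_potential(tc, cs)
        for t, (tc, cs) in techniques.items()
        if threat_potential(tc, cs) > 0}
```

Running the mapping over every known Threat Technique yields exactly the Preventative Control Gap list the definition describes: only techniques with positive residual potential survive the filter.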
Threat Technique
Definition: A specific technique employed during a cyber-attack. This specific technique may be a component within one or more Attack Scenario Elements (also known as a Cyber Event Chain or Kill Chain). Each technique is preventable or detectable through some method, but since several Attack Scenarios may have combinations of overlapping Threat Techniques, identification of the correct associated Attack Scenario may not be possible with only one or two detected Threat Techniques. In the MITRE ATT&CK Matrix, each of the items under the categories of Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Command and Control, and Exfiltration represents a Threat Technique.
Synonyms: Threat Action, Threat Event, Attack Technique. Note that Attack Step and Kill Chain Step are frequently but incorrectly used to describe this action, in that a kill chain step is a step in the kill chain model, while a Threat Technique is a specific technical action that fulfills that step in the model sequence. There may be several Techniques that accomplish the same outcome.
Use Case
Definition: Use Cases represent specific techniques that produce undesirable outcomes. Once this undesirable outcome is identified, a prediction is made as to the evidence that would indicate the undesirable activity has occurred, is occurring, or is about to occur. Typically, this evidence is expressed as an Alert Rule within the SIEM that maps the evidentiary events to the named outcome and creates an Alert to allow the organization to take the appropriate action. A few examples below illustrate these relationships:
- Use Case – Persistent Malware Infection: Evidence – A specific Malware has been detected multiple times over a long period of time (more than 4 days)
- Events – EDR detection of malware on a compute platform
- Alert Rule – The same variant of malware is detected on one or more systems over a 4-day period
- Alert – an alert is triggered bearing the same name as the Use Case.
- Use Case – Discovered Vulnerability: Evidence – A high or critical vulnerability on a critical server requires immediate remediation.
- Events – A Vulnerability Scanning system detected a high-rated vulnerability on a critical server and created an event.
- Alert Rule – This single event triggers an alert.
- Alert – an alert is triggered bearing the same name as the Use Case and invokes a Playbook to create a remediation ticket in the ticketing system.
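The first example's Alert Rule can be sketched as a small correlation function. The event fields (`variant`, `time`) and the function name are hypothetical; a real SIEM would express this as its own correlation rule syntax, but the 4-day window matches the text.

```python
from datetime import datetime, timedelta

def persistent_malware_alert(events, window_days=4):
    """Alert Rule sketch for the 'Persistent Malware Infection' Use Case.

    Fires when the same malware variant appears in EDR events spanning
    more than `window_days`, mapping the evidentiary events to the
    named outcome as the definition describes.
    """
    first_seen = {}
    alerts = []
    for e in sorted(events, key=lambda e: e["time"]):
        variant = e["variant"]
        if variant not in first_seen:
            first_seen[variant] = e["time"]
        elif e["time"] - first_seen[variant] > timedelta(days=window_days):
            # Alert bears the same name as the Use Case.
            alerts.append({"use_case": "Persistent Malware Infection",
                           "variant": variant})
            del first_seen[variant]  # avoid re-alerting on the same run
    return alerts

events = [
    {"variant": "Emotet", "time": datetime(2024, 1, 1)},
    {"variant": "Emotet", "time": datetime(2024, 1, 6)},  # 5 days later
]
alerts = persistent_malware_alert(events)
```

Two detections of the same variant five days apart exceed the 4-day window, so the rule raises a single alert named after the Use Case.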
Use Case Factory
Definition: The Use Case Factory defines a process that includes the following general steps:
- Convert Negative Outcome Scenarios into Attack or Action Scenarios (i.e., how would the outcome happen). This process typically incorporates input from threat intelligence sources, penetration testing, and vulnerability assessments.
- Extrapolate from the Attack Scenario the specific Attack Technique or Techniques required to perpetrate the attack or unwanted scenario.
- Identify the evidence that would be generated by the technique or techniques.
- Identify the data sources and mechanisms required to detect such evidence.
- Define Preventive and/or Response Controls to be engaged upon detection of evidence.
- Implement necessary data source logging and create detection correlation rules, if needed, to create an Alert upon detection.
- Implement Prevention or Response controls.
- Test, Implement, and operationalize the Detection control and the associated Prevention or Response controls.
Vulnerability
Definition: A weakness in a system or application. Vulnerabilities are typically rated via a CVSS score (Common Vulnerability Scoring System), which is an open industry standard for assessing the severity of computer system security vulnerabilities. The score assigns a severity or risk rating to vulnerabilities, allowing responders to prioritize responses and resources according to threat. However, more advanced Vulnerability Management Systems also take into account other risk factors, such as the availability of an exploit against the vulnerability, the frequency of exploit usage, the criticality of the vulnerable system, and Attack Surface Analysis that can determine the accessibility of the vulnerable system to potential attackers. There are also multiple classifications of vulnerabilities, such as the following:
- Unknown or Zero Day Vulnerability. These are vulnerabilities that are unknown to the manufacturer of the system or software, and to the makers of prevention systems, yet may be known by a small number of researchers or hackers. There is generally little to no defense against exploits of these vulnerabilities other than AI based predictive analytics or anomaly detection systems.
- Known but Unpatched Vulnerabilities. These are vulnerabilities that are widely known and for which the manufacturer has created a working update or “patch”. However, that patch has not yet been applied to the systems that have this weakness. These systems are therefore still vulnerable in spite of the availability of the corrective updates. This is not uncommon in that sometimes the patch causes malfunctions in other software on the vulnerable platform, and this other software is critical to business operations. In these cases, it is essential to create filtering or protective mechanisms to block the exploit on its path to the vulnerable system.
- Patched Vulnerabilities. Although seemingly oxymoronic, given that a patched system is no longer vulnerable, this is a legitimate term within the Vulnerability Management Process, indicating that patching remediation has been performed on a given system or set of systems.
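The risk-factor-aware prioritization described above can be sketched as a scoring function. The multiplier values are invented for illustration and are not part of CVSS or any other standard.

```python
def priority_score(cvss, exploit_available, asset_critical, reachable):
    """Illustrative prioritization beyond the raw CVSS base score.

    Adjusts the CVSS score for exploit availability, asset
    criticality, and attack-surface reachability. All multipliers
    are assumptions chosen for demonstration.
    """
    score = cvss
    if exploit_available:
        score *= 1.5   # a working exploit raises practical risk
    if asset_critical:
        score *= 1.3   # loss on a critical asset matters more
    if not reachable:
        score *= 0.5   # hard to reach from an attacker's position
    return round(score, 1)

# A 7.5 CVSS with a public exploit on a critical, reachable server
# can outrank a 9.0 on an isolated, non-critical host.
a = priority_score(7.5, exploit_available=True,
                   asset_critical=True, reachable=True)
b = priority_score(9.0, exploit_available=False,
                   asset_critical=False, reachable=False)
```

This is why advanced Vulnerability Management Systems can rank a lower-CVSS finding above a higher one: context multiplies or discounts the base severity.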