I found the talk "Red and Blue: a tale of stealth and detection" by Elias Issa very interesting.
Here are my notes containing interesting insights that I want to keep track of.

Core concepts #

Stealth #

Stealth is the ability to avoid detection. There aren't many resources on stealth for either blue or red teams. Perfect stealth is impossible in cybersecurity, but any action can be made harder to detect.

Blue Team objectives #

The blue team is the team tasked with an organisation's defences.
The blue team wants to detect attacks.

Key questions #

Tools and techniques #

Sysmon, Yara, EDR, NDR, Proxy, SIEM, firewall, Reverse Engineering...

Red Team objectives #

The red team is the team tasked with simulating a real-world attack to test an organisation's defences. The red team wants to stay stealthy.

Key questions #

Tools and techniques #

C2 (command and control center), evasion, social engineering, phishing, AD exploitation...

Footprints #

For the red team, stealth is essential. This means being careful about the footprints they leave behind.

Types of footprints #

Network-related

Such as source and destination IP addresses, ports, payloads...

System events

Such as logs, event IDs, file interactions (creation, access, modification), process launches, registry modifications...

User behaviour

Such as access to unauthorised files, or unusual connection patterns (out-of-office hours, unexpected geographic locations)
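The user-behaviour footprints above lend themselves to simple heuristics. A minimal sketch of my own (not from the talk; the office hours and per-user country baseline are assumed values a real system would learn from telemetry):

```python
from datetime import datetime

# Assumed office hours; a real baseline would be learned per user.
OFFICE_START, OFFICE_END = 8, 19  # 08:00-19:00 local time

def is_unusual_login(timestamp: datetime, usual_countries: set[str], country: str) -> bool:
    """Flag a login as suspicious if it happens out of office hours
    or originates from a country the user has never logged in from."""
    out_of_hours = not (OFFICE_START <= timestamp.hour < OFFICE_END)
    unusual_location = country not in usual_countries
    return out_of_hours or unusual_location
```

A login at 03:00, or from a country absent from the user's history, would trip this rule; both together are a stronger signal.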

Red team & Footprints #

Red team needs to:

A few principles #

For red team stealth #

For blue team detection #

Detecting stealthy actions is difficult but not impossible.

A few tools to help: EDR, NDR, Sysmon, SIEM

Common blue team errors #

Real-world examples #

These examples are events that took place at the end of a mission, when the red team had remained undetected and decided to become increasingly noisy to see what reaction it would elicit from the blue team.

Mimikatz #

The red team dropped mimikatz on a server, and this triggered no further investigation. An alert was raised and the file was deleted by the antivirus, so the analyst decided the work was done. The analyst never investigated why and how mimikatz got on the server in the first place. -> Always investigate after an alert and look for root causes.

-> mimikatz is a credential-dumping tool

Creation of a domain admin account #

A domain admin account was created from another domain admin account, but the alert was ignored because it came from an account that usually generated a lot of noise. The analysts considered the action legitimate without checking that the account's owner had actually performed it. -> Bad evaluation of the alert's criticality level and poor internal communication

Unnoticed phishing campaign #

This campaign went undetected at the beginning of the mission, so it was relaunched more noisily at the end. This time it was detected, an alert was raised, and the IoCs were identified. The phishing mail was blocked by the different security products. BUT no historical research was done, so the previous attacks were never detected, only the current and future ones.

Blind spots #

Blind spots are areas where monitoring and threat detection capabilities are insufficient or non-existent. This can be due to a lack of monitoring coverage, ineffective (or misconfigured) detection tools, insufficient log collection and analysis, excessive data volume and complexity, or evasion techniques.

Blue Team needs to identify those blind spots to know where they are and check on them regularly. Red Team wants to identify those blind spots to hide there and remain undetected.

Example: the blue team wants to implement a rule to detect cookie-dumping attacks. An alert is raised whenever a process accesses a browser's cookie files. After a few days, the blue team realises the rule generates a lot of false positives, because browsers naturally access these files. The blue team decides to exclude browsers from the rule.

That's a blind spot: an attacker can now migrate into a browser process before accessing the cookies and remain undetected.

To mitigate this blind spot, the blue team needs better process-migration detection, or other ways to analyse abnormal behaviour inside legitimate processes.
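The rule and its exclusion can be sketched in a few lines (my own illustration, not from the talk; the process names and the path match are assumptions, and a real rule would live in an EDR or SIEM, not in Python):

```python
# Browser process names excluded from the tuned rule (illustrative list).
BROWSERS = {"chrome.exe", "firefox.exe", "msedge.exe"}

def naive_rule(process_name: str, file_path: str) -> bool:
    """Alert whenever any process touches a cookie file -- very noisy,
    since browsers legitimately do this all the time."""
    return "Cookies" in file_path

def tuned_rule(process_name: str, file_path: str) -> bool:
    """Same rule with browsers excluded: far quieter, but an attacker who
    migrates into chrome.exe now reads cookies without raising an alert."""
    return "Cookies" in file_path and process_name.lower() not in BROWSERS
```

The exclusion is exactly the blind spot: `tuned_rule("chrome.exe", ...)` is silent no matter who is really executing inside that process.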


Cookie dumping: when an attacker extracts cookies (such as session tokens) from a browser's cookie store, typically to hijack authenticated sessions.

Process migration: when code or malware injects itself into another process to evade detection. [[Process migration detection techniques]]


Handling of an incident depending on maturity level #

Level 0 - No detection #

No detection, no blue team. Ransomware is deployed without any opposition. 0 beacons found.

Level 1 - Isolate and format #

The antivirus or EDR raises an alert. The compromised devices are immediately isolated and formatted, but there is no further investigation, so the root cause and the full extent of the compromise remain unknown.

Level 2 - Identify file and signature #

The blue team is able to identify the malicious files and checks their signatures across the network to find other compromised workstations.
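Checking a file signature across the network boils down to hashing candidate files and comparing against the known-bad hash. A minimal sketch of my own (assuming SHA-256 as the signature; a real sweep would run through the EDR, not a script):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 signature of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(paths, known_bad_hashes):
    """Return the files whose signature matches a known malicious hash."""
    return [p for p in paths if sha256_of(p) in known_bad_hashes]
```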

Level 3 - Partial incident response analysis #

The blue team goes further and interviews the victim to gather information, extracts other IoCs (emails, domains, hashes, IPs, ...), and performs historical research to find other malicious files.

Level 4 - Develop generic detection rules #

The blue team works to understand the techniques used by the attacker (did they use a persistence method?) and creates generic detection rules to catch similar threats (at the cost of potentially more false positives).
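As an illustration of what "generic" can mean here (my own, not from the talk): rather than matching a specific binary hash, match the syntax or behaviour the technique relies on, e.g. Mimikatz's `sekurlsa::` module syntax, which survives the tool being renamed. The patterns below are illustrative assumptions:

```python
import re

# Illustrative patterns for credential-dumping behaviour; a production rule
# would target the technique (e.g. LSASS memory access), not tool names.
GENERIC_PATTERNS = [
    re.compile(r"sekurlsa::", re.IGNORECASE),              # Mimikatz module syntax
    re.compile(r"lsass(\.exe)?\s+.*(dump|minidump)", re.IGNORECASE),
]

def matches_generic_rule(command_line: str) -> bool:
    """True if the command line matches any generic credential-dumping
    pattern. Broader patterns catch renamed or repackaged tools, at the
    cost of more false positives."""
    return any(p.search(command_line) for p in GENERIC_PATTERNS)
```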

Level 5 - Dynamic behavioural analysis #

The blue team detects C2 and beaconing behaviour, correlates it with previously observed techniques, and analyses files to find and extract more IoCs.
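Beaconing can be spotted because C2 implants tend to call home at near-constant intervals. A toy heuristic of my own (the jitter threshold and minimum event count are assumed values, not figures from the talk):

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1, min_events=5):
    """Heuristic: beacons produce connection times with a small standard
    deviation of the gaps relative to their mean (low jitter). Human
    traffic is far more irregular."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and pstdev(gaps) / avg < max_jitter_ratio
```

Real implants add deliberate jitter to defeat exactly this, which is why correlation with other techniques matters.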

Level 6 - Proactive threat hunting #

The blue team doesn't wait for an incident to react but proactively looks for well-known IoCs or TTPs, potentially identifying additional compromised systems.

Handling of an incident details #

We receive a suspicious email.

  1. Analysis and potential IoCs extraction
    • Subject line
    • Sender, domain sender
    • Links
    • File hash
    • Are those IoCs present elsewhere?
  2. Isolate and analyse compromised workstations (don't format them directly)
  3. Block IoCs
  4. Create detection rules
  5. Visit and analyse domain in sandbox to find more IoCs
  6. Then repeat this cycle
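Step 1 (IoC extraction from the suspicious email) can be sketched with Python's standard `email` module. This is my own simplified illustration; it skips MIME header decoding and other real-world edge cases:

```python
import hashlib
import re
from email import message_from_string

URL_RE = re.compile(r"https?://[^\s\"'>]+")

def extract_iocs(raw_email: str) -> dict:
    """Pull the basic IoCs from a raw email: subject, sender, sender
    domain, links in the body, and attachment hashes."""
    msg = message_from_string(raw_email)
    sender = msg.get("From", "")
    domain = sender.split("@")[-1].rstrip(">") if "@" in sender else ""
    links, hashes = [], []
    for part in msg.walk():
        payload = part.get_payload(decode=True)
        if payload is None:  # multipart container, nothing to hash
            continue
        if part.get_content_maintype() == "text":
            links += URL_RE.findall(payload.decode(errors="replace"))
        elif part.get_filename():  # attachment
            hashes.append(hashlib.sha256(payload).hexdigest())
    return {"subject": msg.get("Subject", ""), "sender": sender,
            "sender_domain": domain, "links": links,
            "attachment_hashes": hashes}
```

Each extracted value can then be searched for elsewhere (mail gateway, proxy logs, EDR) before blocking it.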

When the incident is over (and while it's ongoing), document everything.

Incident handling lifecycle #

  1. Extract IoCs
  2. Isolate workstations
  3. Block IoCs on the information system (IS)
  4. Implement detection rules
  5. Document lessons learned to improve processes

Detection is good but investigation is better.