I found the talk Red and Blue: a tale of stealth and detection by Elias Issa very interesting.
Here are my notes containing interesting insights that I want to keep track of.
Core concepts #
Stealth #
Stealth is the ability to avoid detection. There aren't many resources on stealth for blue and red teams. Perfect stealth is impossible in cybersecurity, but an action can be made harder to detect.
Blue Team objectives #
The blue team is the team tasked with an organisation's defences.
The blue team wants to detect attacks.
Key questions #
- How to react to a specific alert?
- Are we missing a critical intrusion?
- How to improve our defense capabilities?
Tools and techniques #
Sysmon, YARA, EDR, NDR, proxy, SIEM, firewall, reverse engineering...
Red Team objectives #
The red team is the team tasked with simulating a real-world attack to test an organisation's defences. The red team wants to be stealthy.
Key questions #
- How to avoid detection?
- Will this action endanger the red team operation?
Tools and techniques #
C2 (command and control), evasion, social engineering, phishing, AD exploitation...
Footprints #
For the red team, stealth is paramount. This means being careful about the footprints they leave behind.
Types of footprints #
Network-related
Such as source and destination IPs, ports, payloads...
System events
Such as logs, event IDs, file interactions (creation, access, modification), process launches, registry modifications...
User behaviour
Such as access to unauthorised files, unusual connection patterns (outside office hours, from unusual geographical locations)
Red team & Footprints #
Red team needs to:
- Leave footprints that cannot be correlated, so that the Blue Team cannot easily follow their steps or detect them
- Prioritise footprints for which there are no well-known detection rules
- Avoid known bad footprints
- Check the documentation of security products
- e.g. accessing the LSASS process
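To illustrate the LSASS example above: opening a handle to lsass.exe is one of the best-known bad footprints, because it is exactly what a credential dumper does and most detection rules look for it. Below is a minimal sketch of such a rule over Sysmon process-access events (Event ID 10). The log format (one JSON object per line) and the file name are assumptions for illustration; the field names mirror Sysmon's.

```python
import json

# Sketch of a "known bad footprint" rule: flag any process that opens a
# handle to lsass.exe (Sysmon Event ID 10 = ProcessAccess).
# The JSON-lines export and the file name are assumptions, not a real
# product's schema.

SUSPICIOUS_TARGET = "\\lsass.exe"

def lsass_access_alerts(path="sysmon_events.jsonl"):
    alerts = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            if event.get("EventID") != 10:
                continue
            if event.get("TargetImage", "").lower().endswith(SUSPICIOUS_TARGET):
                alerts.append({
                    "source": event.get("SourceImage"),
                    "granted_access": event.get("GrantedAccess"),
                    "time": event.get("UtcTime"),
                })
    return alerts

if __name__ == "__main__":
    for alert in lsass_access_alerts():
        print("LSASS access:", alert)
```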
A few principles #
For red team stealth #
- Keep actions uncorrelated so they are hard to link together
- Adapt depending on the blue team, vary the IoCs
- Write custom code, or adapt existing code, to get rid of well-known IoCs
- Reproduce the target environment in a lab (EDR, firewall)
- Blend into the usual network traffic (see the sketch after this list)
- For example: if you use an employee's compromised credentials, execute actions during that employee's working hours
- If you use a process to connect to the internet, it needs to be a process with legitimate internet access
- Be slow: spread actions out (open 10 files in 1 hour instead of 20 in 1 minute)
- Avoid useless actions
- Always consider: is this action worth doing?
- Use known blind spots
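To make the "blend in" and "be slow" points concrete, here is a small sketch of how a red team tool might pace its actions: only act during the compromised user's working hours, and spread actions out with random delays instead of firing them in a burst. The working hours and delay bounds are arbitrary assumptions for illustration.

```python
import random
import time
from datetime import datetime

# Sketch: pace actions so they blend into normal user activity.
# Working hours and delay bounds are arbitrary assumptions.
WORK_START, WORK_END = 9, 18        # user's usual hours, 09:00-18:00
MIN_DELAY, MAX_DELAY = 120, 600     # 2-10 minutes between actions

def within_working_hours(now=None):
    now = now or datetime.now()
    return now.weekday() < 5 and WORK_START <= now.hour < WORK_END

def paced_run(actions):
    """Run each action slowly, and only during working hours."""
    for action in actions:
        while not within_working_hours():
            time.sleep(300)         # wait and re-check outside office hours
        action()
        time.sleep(random.uniform(MIN_DELAY, MAX_DELAY))

if __name__ == "__main__":
    # Ten harmless placeholder actions spread over roughly an hour,
    # instead of 20 in one minute.
    paced_run([lambda i=i: print(f"action {i}") for i in range(10)])
```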
For blue team detection #
Detecting stealthy actions is difficult but not impossible.
A few tools can help: EDR, NDR, Sysmon, SIEM.
- Investigate after alerts and look for causes.
- Use IoCs to create alerts when there's unusual behaviour
- Continuous monitoring & automated alerts
- Use honeypots/honeytokens to catch attackers or analyse threats (see the sketch after this list)
- Threat hunting by targeting known attacks
- Develop and maintain an Incident Response plan
- Run regular red team exercises to improve the organisation's defences
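As one example of the honeypot/honeytoken idea: a decoy account that no legitimate user or service ever touches can be monitored so that any authentication attempt against it raises an alert. A minimal sketch, assuming authentication logs exported as JSON lines; the account name, file name and field names are assumptions.

```python
import json

# Sketch of a honey-account alert: the decoy account "svc_backup_old" is
# never used legitimately, so ANY logon attempt referencing it is suspicious.
# The log format (JSON lines with "user", "src_ip", "time") is an assumption.

HONEY_ACCOUNTS = {"svc_backup_old"}

def honey_account_hits(path="auth_events.jsonl"):
    hits = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            if event.get("user", "").lower() in HONEY_ACCOUNTS:
                hits.append(event)
    return hits

if __name__ == "__main__":
    for hit in honey_account_hits():
        print("Honey account touched from", hit.get("src_ip"), "at", hit.get("time"))
```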
Common blue team errors #
- Relying (only) on signature-based detection (which is useless against custom tooling)
- Insufficient network visibility (you miss a lot of events)
- Poor communication between the SOC and the CERT, causing loss of information
- Wrong interpretation or evaluation of alerts
- Ignoring alerts that too often turn out to be false positives
- Insufficient log retention duration
Real-world examples #
These examples are events that took place at the end of a mission during which the red team had remained undetected, so they decided to become increasingly noisy to see what reaction it would elicit from the blue team.
Mimikatz #
The red team dropped Mimikatz on a server and this triggered no further investigation. An alert was raised and the file was deleted by the antivirus, so the analyst decided that the work was done. The analyst didn't investigate why and how Mimikatz got onto the server in the first place. -> Always investigate after an alert and look for root causes.
-> Mimikatz is a credential-dumping tool
Creation of a domain admin account #
A domain admin account was created from another domain admin account, but the alert was ignored because it came from an account that usually generated a lot of noise. The action was considered legitimate without checking that it really was that admin. -> Bad evaluation of the alert's criticality level and poor internal communication
Unnoticed phishing campaign #
This campaign went undetected at the beginning of the mission, so it was relaunched more noisily at the end of the mission. It was detected this time: an alert was raised and the IoCs were identified. The phishing mail was blocked by the different security products. BUT no historical research was done, so the previous attacks were not detected, only the current and future ones.
Blind spots #
Blind spots are areas where monitoring and threat detection capabilities are insufficient or non-existent. This can be due to a lack of monitoring coverage, ineffective (or misconfigured) detection tools, insufficient log collection and analysis, excessive data complexity and volume, or evasion techniques.
Blue Team needs to identify those blind spots to know where they are and check on them regularly. Red Team wants to identify those blind spots to hide there and remain undetected.
Example: Cookie dumping detection rule #
The Blue Team wants to implement a rule to detect cookie dumping attacks. An alert is raised whenever a process accesses a cookies file. After a few days, the Blue Team realises there are a lot of false positives because browsers naturally access these files, so there is a lot of noise. The Blue Team decides to exclude browsers from the rule.
That's a blind spot: an attacker can now migrate into a browser process before accessing the cookies and remain undetected.
To mitigate this blind spot, the Blue Team needs to get better at detecting process migration, or find other ways to analyse abnormal behaviour in legitimate processes.
Cookie dumping: extracting the cookies stored by a browser (typically session cookies) from its cookie store in order to hijack authenticated sessions
Process migration: when code or malware injects itself into another process to evade detection. [[Process migration detection techniques]]
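A sketch of what the flawed rule described above could look like, and where the blind spot comes from. The event format and the cookie file path are assumptions; the point is the browser allow-list in the last check.

```python
# Sketch of the cookie-access rule described above. Event format and the
# cookie file path are assumptions; the blind spot is the browser allow-list.

BROWSERS = {"chrome.exe", "firefox.exe", "msedge.exe"}   # excluded to cut noise
COOKIE_HINTS = ("\\cookies", "cookies.sqlite")           # typical cookie store names

def is_cookie_dumping(event):
    """Return True if a non-browser process touches a browser cookie store."""
    process = event.get("process", "").lower()
    path = event.get("file_path", "").lower()
    if not any(hint in path for hint in COOKIE_HINTS):
        return False
    # The exclusion that creates the blind spot: an attacker who migrates
    # into chrome.exe can read the cookie store without raising this alert.
    if process in BROWSERS:
        return False
    return True

if __name__ == "__main__":
    path = r"C:\Users\bob\AppData\Local\Google\Chrome\User Data\Default\Network\Cookies"
    print(is_cookie_dumping({"process": "rundll32.exe", "file_path": path}))  # True: alert
    print(is_cookie_dumping({"process": "chrome.exe", "file_path": path}))    # False: blind spot
```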
Handling of an incident depending on maturity level #
Level 0 - No detection #
No detection, no blue team. Ransomware is deployed without any opposition. 0 beacons found.
Level 1 - Isolate and format #
The antivirus or EDR raises an alert. The compromised devices are immediately isolated and formatted. But there is no further investigation, so:
- the potential forensic evidence is lost
- it's not possible to understand the root cause of the attack
Level 2 - Identify file and signature #
The Blue Team is able to identify the malicious files and check for the file signature across the network to find other compromised workstations.
Level 3 - Partial incident response analysis #
The Blue Team goes further and interviews the victim to gather information. Other IoCs are extracted (emails, domains, hashes, IPs, ...). Historical research is performed to find other malicious files.
Level 4 - Develop generic detection rules #
Understand the technique used by the attacker. Did they use a persistence method? Create generic detection rules to catch similar threats (at the cost of potentially more false positives).
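One classic generic rule of this kind, given as a hedged sketch: flag any registry value written under the Run/RunOnce autostart keys, a very common persistence method. The event format mirrors Sysmon's registry events (Event ID 13) but the file name and log format are assumptions, and as noted above such a broad rule will also catch legitimate installers.

```python
import json

# Sketch of a generic persistence rule: alert on registry values set under
# the classic Run/RunOnce autostart keys (Sysmon Event ID 13, RegistryEvent).
# JSON-lines export and file name are assumptions; legitimate installers will
# also trigger this, which is the false-positive trade-off mentioned above.

RUN_KEYS = ("\\currentversion\\run", "\\currentversion\\runonce")

def persistence_alerts(path="sysmon_events.jsonl"):
    alerts = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            if event.get("EventID") != 13:
                continue
            target = event.get("TargetObject", "").lower()
            if any(key in target for key in RUN_KEYS):
                alerts.append((event.get("Image"), event.get("TargetObject")))
    return alerts
```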
Level 5 - Dynamic behavioural analysis #
The Blue Team detects C2 and beaconing behaviour and correlates it with previously observed techniques. Files are analysed to find and extract more IoCs.
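A small sketch of what "detecting beaconing behaviour" can mean in practice: for each destination, look at the time intervals between outbound connections; near-constant intervals (low jitter) suggest an automated beacon rather than a human. The input format and thresholds are assumptions for illustration.

```python
from statistics import mean, pstdev

# Sketch: flag destinations whose outbound connections are suspiciously
# regular (low jitter), which is typical of C2 beaconing. Input format
# (list of (timestamp_seconds, destination) tuples) and thresholds are
# assumptions.

MIN_CONNECTIONS = 10        # need enough samples to judge regularity
MAX_JITTER_RATIO = 0.1      # stddev of intervals below 10% of the mean

def beacon_candidates(connections):
    by_dest = {}
    for ts, dest in connections:
        by_dest.setdefault(dest, []).append(ts)

    candidates = []
    for dest, times in by_dest.items():
        times.sort()
        if len(times) < MIN_CONNECTIONS:
            continue
        intervals = [b - a for a, b in zip(times, times[1:])]
        avg = mean(intervals)
        if avg > 0 and pstdev(intervals) / avg < MAX_JITTER_RATIO:
            candidates.append((dest, avg))
    return candidates

if __name__ == "__main__":
    # A host "calling home" every ~60 seconds stands out against human traffic.
    fake = [(i * 60.0, "203.0.113.7") for i in range(20)]
    print(beacon_candidates(fake))   # [('203.0.113.7', 60.0)]
```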
Level 6 - Proactive threat hunting #
The blue team doesn't wait for an incident to react but proactively looks for well-known IoCs or TTPs, potentially identifying additional compromised systems.
Handling of an incident details #
We receive a suspicious email.
- Analysis and extraction of potential IoCs (see the sketch at the end of this section)
- Subject
- Sender, sender domain
- Links
- File hash
- Are those IoCs present elsewhere?
- Isolate and analyse workstations (don't format them straight away)
- Block IoCs
- Create detection rules
- Visit and analyse domain in sandbox to find more IoCs
- Then repeat this cycle
When the incident is over (and while it's ongoing), document everything.
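A minimal sketch of the first step (IoC extraction from a suspicious email), using simple regexes over the raw message. Real pipelines use dedicated parsers and handle defanged indicators; the patterns and field names below are deliberately naive assumptions.

```python
import re

# Naive IoC extraction from a raw email (headers + body): URLs, IPv4
# addresses, e-mail addresses and file hashes. Deliberately simplified;
# real tooling also handles defanged indicators (hxxp://, [.]), attachments, etc.

PATTERNS = {
    "urls":   re.compile(r"https?://[^\s\"'>]+"),
    "ips":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "emails": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "md5":    re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def extract_iocs(raw_email: str) -> dict:
    return {name: sorted(set(rx.findall(raw_email))) for name, rx in PATTERNS.items()}

if __name__ == "__main__":
    sample = ("From: billing@example.com\n"
              "Subject: Invoice overdue\n"
              "Please review http://malicious.example.net/invoice.exe "
              "hosted at 198.51.100.23")
    print(extract_iocs(sample))
```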
Incident handling lifecycle #
- Extract IoCs
- Isolate workstations
- Block IoCs across the information system (IS)
- Implement detection rules
- Document lessons learned to improve processes
Detection is good but investigation is better.