Validate before you wake anyone up
Half of the critical alerts I have seen triaged turned out to be false positives — a new scanning tool,
a misconfigured agent, a researcher running a curl command that happened to hit
a canary. Before paging leadership, verify the signal: correlate it across at least two
telemetry sources, confirm the indicator maps to a real production asset, and get a second
opinion from another analyst.
Three minutes of validation saves three hours of confusion. The SOC that skips validation in the name of speed pages its CISO at 3 a.m. twice a week, and within a quarter nobody answers the pager anymore.
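The validation rule above — two independent telemetry sources plus a real production asset — can be sketched as a simple gate. Everything here is illustrative: `Sighting`, `should_escalate`, and the source names are hypothetical stand-ins for whatever your SIEM and EDR queries actually return.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    """One hypothetical observation of the indicator from a telemetry source."""
    source: str  # e.g. "edr", "netflow", "dns" — placeholder source names
    asset: str   # host or identity the indicator was seen on

def should_escalate(sightings: list[Sighting], production_assets: set[str]) -> bool:
    """Page only if two or more independent sources saw the indicator
    AND at least one sighting lands on a known production asset."""
    independent_sources = {s.source for s in sightings}
    hits_production = any(s.asset in production_assets for s in sightings)
    return len(independent_sources) >= 2 and hits_production

# A single EDR hit on a lab box wakes nobody up:
print(should_escalate([Sighting("edr", "lab-7")], {"web-01", "db-02"}))   # False
# Two independent sources on a production host does:
print(should_escalate(
    [Sighting("edr", "web-01"), Sighting("netflow", "web-01")],
    {"web-01", "db-02"},
))  # True
```

The point is not the code but the shape of the decision: the gate is cheap, explicit, and written down before the 3 a.m. call, not argued about during it.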
Define the blast radius — fast, even if wrong
You do not need perfect scope in the first hour; you need a working hypothesis of "where does this stop?" Pick the tightest plausible boundary — a single host, a single identity, a single application — and commit to it. You can expand later.
An imprecise boundary is infinitely better than no boundary, because it gives the response team something concrete to contain. Teams that refuse to commit to any boundary until they have "full visibility" tend to still be gathering visibility when the attacker is already two hops deep.
Contain first, investigate second
I see too many teams hesitate on containment because they fear losing forensic data. Modern EDR tools preserve execution telemetry after isolation; you are not deleting evidence when you isolate a host, you are freezing it in place. Isolate the host, disable the compromised identity, revoke the token.
You can do forensics on a frozen target. You cannot do forensics on an attacker who is still moving through your environment while you debate whether to pull the plug.
Pick one incident commander — even if it is you
The biggest time-waster in a P1 is unclear decision ownership. One person decides. Everyone else executes. Decisions do not need to be right; they need to be made, recorded, and revisited. The commander's job is to unblock everyone else, not to solve the technical puzzle themselves.
If you do not have a designated on-call commander, take the role yourself and announce it in the channel: "I am commanding this incident until relieved." Clarity beats protocol.
Start the timeline immediately
Open a document. Log every observation with a timestamp. Not a Slack thread — a structured doc that your DFIR team, legal, and leadership will read tomorrow. Include timestamps in UTC, who made the observation, what they saw, and what they did next.
Future you will thank present you. And if this incident ever ends up in front of lawyers, regulators, or an insurance claim, that timeline is the artifact that either saves you or sinks you.
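The timeline discipline above — UTC timestamp, who, what they saw, what they did — is simple enough to enforce with a few lines. This is a minimal sketch, not a tool recommendation; the filename and field names are assumptions, and in practice you would point this at whatever shared document or log store your DFIR team reads.

```python
import json
from datetime import datetime, timezone

TIMELINE = "incident-timeline.jsonl"  # hypothetical path; use your team's shared store

def log_entry(who: str, observed: str, action: str, path: str = TIMELINE) -> dict:
    """Append one structured, UTC-timestamped observation to the timeline file."""
    entry = {
        "utc": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "who": who,
        "observed": observed,
        "action": action,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_entry(
    who="analyst-1",
    observed="Canary token fired from host web-01",
    action="Isolated web-01 via EDR console",
)
```

One append-only line per observation means the record survives exactly as it was made — no retroactive editing, which is precisely what lawyers and regulators will ask about.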
Communicate in cadence, not reaction
Stakeholders will ask for updates. Instead of answering each ping, commit to a cadence: "I will post an update every 15 minutes, even if the update is no new information." Then do it. Silence breeds panic. Scheduled updates buy you focus time.
Executives do not need technical depth in the first hour; they need confidence that someone is in control. The cadence itself is the message.
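One way to make the cadence concrete is to publish the update times up front, so stakeholders know when the next post lands even before it does. A trivial sketch, with an assumed 15-minute interval:

```python
from datetime import datetime, timedelta, timezone

def update_schedule(start: datetime, interval_min: int = 15, count: int = 4) -> list[str]:
    """Return the next `count` scheduled update times — fixed in advance,
    so updates are driven by the clock, not by incoming pings."""
    return [
        (start + timedelta(minutes=interval_min * i)).strftime("%H:%M UTC")
        for i in range(1, count + 1)
    ]

start = datetime(2024, 1, 1, 3, 0, tzinfo=timezone.utc)
print(update_schedule(start))  # ['03:15 UTC', '03:30 UTC', '03:45 UTC', '04:00 UTC']
```

Posting that list in the incident channel at minute zero is itself the first update: it tells everyone the machine is running.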
What not to do in the first hour
- Do not try to find patient zero — that is day-two work.
- Do not try to attribute the attack — no CISO cares about the threat actor name yet.
- Do not reimage a host before you have captured disk and memory.
- Do not post in the company-wide channel before legal has reviewed wording.
- Do not make calls over SMS or personal phones if you suspect the attacker is in your Exchange or Teams tenant — use an out-of-band channel.
The real lesson
The first hour is not about solving the incident. It is about putting a clean perimeter around it so the next 23 hours can solve it. Every team that wastes the first hour trying to understand the attack instead of bounding it pays for that decision somewhere between hour six and hour forty-eight.
Get the perimeter. Get the commander. Get the timeline. Everything else follows.
Written by Sari Taher from field experience running enterprise incident response.