
There was a time when every incident was treated like a crime scene. A strange binary on a file server meant disk images, memory captures, and long nights stitching together logs until a story emerged. We didn’t just know what executed; we could tell you how it arrived, why it was run, and whether it achieved its goal. Incidents were infrequent enough that each one became a case study. That was security in the age of books: dense, narrative, the whole story.
Then the world sped up. SaaS everywhere, endpoints everywhere, attackers everywhere. We built Security Operations Centers to cope with the flood: teams wired into EDR/XDR consoles that see everything, act fast, and keep the lights on. The metrics followed - MTTD, MTTR, alert volumes, auto-containment counts. We moved from books to TikTok: an endless feed of short clips, each technically accurate, none with the full plot. It saved us from drowning. It also cost us something essential.
The SOC model shines where it was designed to shine. An employee downloads a dodgy PDF. The payload reaches for the wrong API, SentinelOne swats it, the file is quarantined, the user barely notices. That is a perfect outcome. No one needs to pull memory or draft an affidavit for a commodity lure that never got traction. Speed wins.
The trouble is that, somewhere along the way, we began treating every alert like that dodgy PDF. I’ve sat with teams during the other kind of day - the one where the console shows credential dumping on a domain controller, or a finance server that suddenly grows new scheduled tasks, or outbound traffic that looks like it’s been practicing. The EDR does exactly what it promises: kill the process, isolate the host, reset the obvious credentials. On paper, it reads as a win. And yet I can point to too many weeks that start this way and end with ransom notes on every screen. The malware was contained. The threat actor was not.
That last sentence is the gap in modern operations. We report “contained” without proving “understood.” By the time an alert fires, something has already gone wrong. Do you know what it was? Did the attacker succeed before your tool stepped in? Did they plant persistence in the quiet corners you didn’t check? Did they pull the files that actually matter to your business? If your process ends at “process blocked,” you don’t have those answers.
And if you’re paying for a SOC today, ask yourself honestly: do they? Will they know the difference between a noisy piece of malware and a human adversary logging in with valid credentials? Will they spot FileZilla quietly exfiltrating data, or Splashtop tunneled into a server at 3 a.m.? Will they catch persistence via a scheduled task that looks like it belongs? Or will they see a clean dashboard and call it a night, while you spend your weekend investigating what went wrong?
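If you want to pressure-test that question yourself, the check is not exotic. Here is a deliberately small sketch - illustrative only, assuming an English-locale Windows host, the built-in schtasks output, and Python - of the kind of look an analyst takes when deciding whether a scheduled task “belongs.” The path allow-list and tool watchlist are placeholders invented for this example, not a vetted ruleset, and a real hunt would pull this telemetry from the EDR rather than parse console output.

```python
"""
Toy hunting sketch: flag scheduled tasks whose action binary lives outside
the usual Windows directories, plus any task invoking common remote-access
or file-transfer tools. Illustrative only -- assumes an English-locale
Windows host, built-in schtasks output, and Python 3.
"""
import csv
import io
import subprocess

# Paths where legitimately scheduled binaries usually live (placeholder list).
EXPECTED_PREFIXES = (r"c:\windows\system32", r"c:\windows\syswow64",
                     r"c:\program files", r"c:\program files (x86)")

# Legitimate software that keeps showing up in hands-on-keyboard intrusions.
WATCHLIST = ("filezilla", "splashtop", "anydesk", "rclone", "psexec")

def suspicious_tasks():
    # /v adds the "Task To Run" column; /fo csv keeps the output parseable.
    raw = subprocess.run(
        ["schtasks", "/query", "/fo", "csv", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout
    findings = []
    for row in csv.DictReader(io.StringIO(raw)):
        name = row.get("TaskName", "")
        action = (row.get("Task To Run") or "").strip().lower()
        if not action or name == "TaskName":   # schtasks repeats header rows
            continue
        # Crude heuristics: binary outside expected paths, or a watched tool.
        # (COM-handler tasks will show up too; a real hunt handles those separately.)
        outside = not action.lstrip('"').startswith(EXPECTED_PREFIXES)
        watched = any(tool in action for tool in WATCHLIST)
        if outside or watched:
            findings.append((name, row.get("Run As User", "?"), action))
    return findings

if __name__ == "__main__":
    for name, user, action in suspicious_tasks():
        print(f"[?] {name} (runs as {user}): {action}")
```

None of this is sophisticated, which is exactly the point: the adversary is betting that nobody looks, and a green dashboard never will.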
This is how organizations keep getting hit by the same entry points. Today it’s SSL VPNs with a soft spot; last year it was RDP; next year it will be whatever we’ve neglected. The tactics don’t need to evolve quickly because our defenses learn slowly. And they learn slowly because we stopped writing the story down. We replaced investigations with acknowledgements. We became firefighters sprinting from blaze to blaze, and somewhere we stopped calling the police.
You can see the consequences in green dashboards that feel comforting and mean very little. I’ve read reports celebrating a thousand threats automatically contained in a week - while a single valid VPN login led to FileZilla quietly shuttling gigabytes of data out the door. To the SOC, the story ended with a blocked executable. To the adversary, the story ended with success. One side measured noise; the other measured impact.
Digital forensics exists to bridge that gap. Not to slow the SOC down, not to insist that every nuisance sample deserves a lab day, but to answer the questions tools can’t. How did they get in? What did they try to do? Did they succeed? Where else did they go? Which identities were touched, which systems were staged, which data crossed the line? When you can answer those, you move from “we think we’re fine” to “we can show you why we’re fine.” Executives hear the difference. Regulators and insurers hear the difference. Your own engineers hear the difference because now they can fix causes, not symptoms.
So why did we abandon forensics for everything but the catastrophes? Volume, first and foremost. You can’t image disks for every phish. Speed and incentives, next: SOCs are paid to keep businesses running, and the scoreboard favors fast containment over slow certainty. Skill scarcity plays a part; good investigators are hard to grow and harder to scale. Managed providers optimized for consistency and coverage will always default to the playbook that closes tickets quickly. None of this is malicious. It is the rational outcome of the environment we built.
But it leaves MSPs and SOCs exposed at the crossover - the moment a “routine” alert stops being a nuisance and starts being a campaign. The dodgy PDF and the domain controller with credential dumping are not the same kind of problem, yet they’re often met with the same choreography: kill it, close it, move on. That’s how you end up celebrating the week’s blocked payloads while missing the single human adversary quietly achieving their objective. We’ve been containing threats, not threat actors.
There’s a broader cost, and it lands on all of us. When fewer incidents are investigated deeply, the community learns less. We see isolated detections rather than coherent campaigns. We lose sight of how initial access becomes persistence, how persistence becomes privilege, how privilege becomes exfiltration or disruption. And then we act surprised when the same play, renamed and repackaged, beats us again.
What does it look like to bring forensics back without pretending it’s still 2010? It looks like reuniting firefighting with policing. Keep the speed. Keep the automation. But when the signal says “human on the other end,” the workflow must pivot from clip to chapter. Collect the evidence by default - volatile artifacts, identity trails, persistence breadcrumbs, the pieces that turn a kill notice into a timeline. Make “what was attempted, what they were after, and to what extent they succeeded” a standard outcome for serious alerts, not a luxury reserved for regulatory crises. If your plan today is “we’ll do a deep dive if it turns into worst case,” the uncomfortable truth is that you’ll discover worst case late, and often through someone else’s phone call.
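To make “collect the evidence by default” concrete, here is a minimal sketch of that reflex - assuming a Windows host, local admin rights, built-in tooling, and Python; the artifact list and file naming are illustrative, and a mature program would do this through its EDR or a proper forensic agent rather than an ad hoc script. What matters is the habit: snapshot the identity trail and the persistence breadcrumbs at alert time, before containment erases them.

```python
"""
Minimal sketch of "collect the evidence by default": when a serious alert
fires, snapshot a few identity and persistence artifacts before (not instead
of) containment. Assumes a Windows host, admin rights, built-in tools, and
Python 3; the scope and filenames here are illustrative, not a forensic standard.
"""
import json
import subprocess
import time
import winreg

def run(cmd):
    # Capture a command's output; keep going even if one collector fails.
    try:
        return subprocess.run(cmd, capture_output=True, text=True,
                              timeout=120).stdout
    except (OSError, subprocess.SubprocessError) as exc:
        return f"collection failed: {exc}"

def run_keys():
    # Classic autorun persistence locations - one of many "quiet corners".
    values = {}
    path = r"Software\Microsoft\Windows\CurrentVersion\Run"
    for hive, label in ((winreg.HKEY_LOCAL_MACHINE, "HKLM"),
                        (winreg.HKEY_CURRENT_USER, "HKCU")):
        try:
            with winreg.OpenKey(hive, path) as key:
                for i in range(winreg.QueryInfoKey(key)[1]):
                    name, data, _ = winreg.EnumValue(key, i)
                    values[f"{label}\\{path}\\{name}"] = str(data)
        except OSError:
            pass
    return values

if __name__ == "__main__":
    bundle = {
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        # Identity trail: who is on the box, and the most recent logon events.
        "sessions": run(["quser"]),
        "recent_logons": run(["wevtutil", "qe", "Security",
                              "/q:*[System[(EventID=4624)]]",
                              "/c:50", "/rd:true", "/f:text"]),
        # Persistence breadcrumbs: autorun values and the full task list.
        "autorun_values": run_keys(),
        "scheduled_tasks": run(["schtasks", "/query", "/fo", "list", "/v"]),
        # Network context: live connections with owning process IDs.
        "netstat": run(["netstat", "-ano"]),
    }
    out = f"triage_{int(time.time())}.json"
    with open(out, "w", encoding="utf-8") as fh:
        json.dump(bundle, fh, indent=2)
    print(f"wrote {out}")
```

A bundle like this takes seconds to produce, and it is the difference between “we blocked a process” and a timeline you can actually reason about.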
I’m not arguing for museum-piece IR where every laptop becomes a legal exhibit. I am arguing for stories. A program that treats alerts as the beginning of a narrative instead of its conclusion. A culture that measures understanding alongside containment. A habit of asking, each time the console lights up in a way that matters: where did this come from, who is behind it, what are they trying to achieve, and how sure are we that they didn’t?
We became firefighters because we had to. It kept businesses alive. But the fires that look “contained” are often the ones that burn the house down later. It’s time we started calling the police again - not after the embers cool, but while the smoke is still in the air.
A green dashboard doesn’t protect your company. Understanding does. And understanding comes from forensics - applied where it counts, built into the way we work, turning alerts back into stories that teach us how to win.