If you want to know what's actually broken in security engineering, don't read a Gartner report. Read r/Azure at 7am on a Tuesday.
Here's a post from last week:
"We use 4 different tools for CSPM, workload security, identity management, and data discovery. None of them share context and it's basically chaos.
4 tools, 4 consoles, 4 different risk scores for the same resource. Every morning starts with context switching between dashboards trying to piece together what's going on. Our CSPM flags a misconfigured S3 bucket. But it doesn't know what's inside it. Our data discovery tool found PII in that bucket but doesn't know it's publicly accessible. Our workload scanner sees vulnerabilities on the instance accessing it but has no idea about the permissions. Our identity tool flags the overpermissioned role but can't see any of the other three problems.
Each tool sees its own slice. Nobody sees the full attack path. We literally had a situation last month where one tool said low risk and another said critical for the same resource."
That's the job. Not "doing security." Stitching.
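To make the stitching concrete, here's a minimal sketch of the join the poster is doing in his head every morning. Every tool name, resource, and access edge below is invented; the point is only the shape of the work: four findings that each look medium on their own, joined along the path to the asset they can ultimately reach.

```python
from collections import defaultdict

# Illustrative findings from four disconnected tools. Each tool keys its
# finding to the resource it understands; none of these names are real.
findings = [
    {"tool": "cspm",     "resource": "s3://customer-data", "finding": "bucket is publicly readable"},
    {"tool": "data",     "resource": "s3://customer-data", "finding": "bucket contains PII"},
    {"tool": "workload", "resource": "i-0abc123",          "finding": "unpatched RCE on instance"},
    {"tool": "identity", "resource": "role/app-runner",    "finding": "role has s3:* on all buckets"},
]

# The relationships none of the four tools share with each other:
# the instance assumes the role, and the role can reach the bucket.
edges = {
    "i-0abc123": "role/app-runner",
    "role/app-runner": "s3://customer-data",
}

def attack_path_root(resource: str) -> str:
    """Follow access edges from a resource to the asset it can reach."""
    while resource in edges:
        resource = edges[resource]
    return resource

# Group every finding by the asset its attack path terminates at.
by_target = defaultdict(list)
for f in findings:
    by_target[attack_path_root(f["resource"])].append(f)

for target, fs in by_target.items():
    tools = {f["tool"] for f in fs}
    # Any single lens calls its own finding "medium". Several independent
    # lenses converging on one asset is what makes it critical.
    combined = "critical" if len(tools) >= 3 else "medium"
    print(target, combined, [f["finding"] for f in fs])
```

The code is trivial. That's the point: the join itself is easy once the data is in one place. The hard part, and the part the poster is burning his mornings on, is that it never is.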
The theme matches what we hear from our customers. This isn't a CSPM problem; it's the entire shape of security engineering, in every category, in every org, every day.
The same pattern, in every category
Walk through the categories and the same wound shows up under different bandages.
Compliance. The SOC 2 auditor asks for evidence that production access is reviewed quarterly. Your IAM tool has the access list. Your ticketing system has the review tickets. Your SSO logs show who actually logged in. Your HRIS has who left the company. Nothing reconciles. Someone spends three weeks before every audit cycle building a spreadsheet that bridges four systems, by hand, and the spreadsheet is stale the day after it's submitted. Then the next framework lands — ISO 27001, HIPAA, PCI, FedRAMP — and the same evidence has to be re-mapped to a different control taxonomy. The work is almost entirely translation between systems that were never designed to speak to each other.
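The three-week spreadsheet is, structurally, a four-way set difference. Here's a sketch of the reconciliation under invented data; in practice each of the four sets arrives as a differently-shaped export from a different system, and that translation is where the three weeks go.

```python
# Invented datasets standing in for four real systems.
iam_access = {"alice", "bob", "carol", "dave"}  # who has prod access (IAM)
reviewed   = {"alice", "bob"}                   # who appears in review tickets
logged_in  = {"alice", "carol"}                 # who actually used it (SSO logs)
departed   = {"dave"}                           # who left the company (HRIS)

exceptions = []
for user in sorted(iam_access):
    if user in departed:
        exceptions.append((user, "access not revoked after departure"))
    elif user not in reviewed:
        exceptions.append((user, "prod access never reviewed this quarter"))
    elif user not in logged_in:
        exceptions.append((user, "reviewed but unused; candidate for removal"))

for user, reason in exceptions:
    print(user, "-", reason)
```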
Incident response. An alert fires at 2am. The EDR shows a suspicious process. To understand if it matters, the on-call needs identity context (whose box is this, what do they have access to), asset context (is this in the PCI scope, is it production), network context (what did it talk to), vulnerability context (was there an unpatched CVE that explains the initial access), and change context (did someone deploy something forty minutes ago that explains the anomaly). Six tools. Six logins. Six query languages. The investigation takes four hours, and three of those hours are joining data that should already be joined.
Vulnerability management. Scanner produces 4,000 findings. Asset inventory is in another system. Business criticality lives in a CMDB that's 30% accurate. Compensating controls are in a wiki. Exploitability data is on a threat intel feed nobody pays for after Q1. The team builds a prioritization "process" that is in fact a person named Priya who holds the joins in her head and is the single point of failure for the entire program.
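What Priya holds in her head is roughly this join: scanner findings enriched with asset criticality, exploitability, and compensating controls. Every dataset, CVE, and weight below is invented, and the real weights are exactly the judgment calls she is the single point of failure for.

```python
# Invented scanner output, CMDB criticality, threat intel, and wiki notes.
findings = [
    {"cve": "CVE-2024-0001", "asset": "web-01",   "cvss": 9.8},
    {"cve": "CVE-2024-0002", "asset": "batch-07", "cvss": 9.1},
    {"cve": "CVE-2024-0003", "asset": "web-01",   "cvss": 6.5},
]
criticality = {"web-01": 1.0, "batch-07": 0.3}  # from the 30%-accurate CMDB
exploited   = {"CVE-2024-0001"}                 # from the threat intel feed
mitigated   = {"CVE-2024-0002"}                 # compensating controls, from the wiki

def priority(f):
    # Default criticality 0.5 for assets the CMDB doesn't know about.
    score = f["cvss"] * criticality.get(f["asset"], 0.5)
    if f["cve"] in exploited:
        score *= 2      # actively exploited: jumps the queue
    if f["cve"] in mitigated:
        score *= 0.25   # documented compensating control: deprioritized
    return score

for f in sorted(findings, key=priority, reverse=True):
    print(f["cve"], f["asset"], round(priority(f), 2))
```

Note how the ordering diverges from raw CVSS: the 9.1 drops below the 6.5 once context is joined in. That reordering is the entire value of the program, and today it lives in one person's head.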
Detection engineering. A new TTP drops. To know if you're covered, you need to know what you log, where it's parsed, what rules exist, what assets are in scope, and whether the rule has been firing. That state lives across the SIEM, the log pipeline, the detection-as-code repo, the asset inventory, and a Confluence page from 2022. Coverage is theoretical. Nobody can actually answer "are we covered for this?" in under a day.
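The question "are we covered for this?" decomposes into a handful of lookups, sketched below against invented state. Each dict stands in for a system that actually holds that state (the log pipeline, the detection-as-code repo, the SIEM); the day of delay is fetching and normalizing them, not the logic.

```python
# Invented coverage state; the technique ID follows ATT&CK-style naming.
ttp = {"id": "T1059.001", "needs_source": "powershell-logs"}

log_sources = {"powershell-logs": {"collected": True, "parsed": True}}
rules = [{"ttp": "T1059.001", "source": "powershell-logs", "fired_last_90d": False}]

def coverage(ttp):
    src = log_sources.get(ttp["needs_source"])
    if not src or not src["collected"]:
        return "no telemetry"
    if not src["parsed"]:
        return "collected but not parsed"
    matching = [r for r in rules if r["ttp"] == ttp["id"]]
    if not matching:
        return "telemetry exists, no detection rule"
    if not any(r["fired_last_90d"] for r in matching):
        return "rule exists but has never fired: untested"
    return "covered"

print(coverage(ttp))  # → rule exists but has never fired: untested
```

The answer is rarely a clean yes or no; it's one of these intermediate states, and the intermediate states are where coverage quietly rots.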
Pen testing. External assessor delivers a report. Half the findings are already in your vuln backlog. A quarter are in compensating-control territory you've documented elsewhere. A few are genuinely new. Figuring out which is which means manually cross-referencing against four internal systems, and the 30-day remediation clock is already running while you do it.
Different category, different vendors, same disease. Each tool sees one slice. Nobody sees the path.
Why the categories were always artificial
Detection and response, vulnerability management, compliance, IR, pen testing — these aren't different problems. They're different lenses on the same problem: how is this organization exposed, and what should we do about it?
A vulnerability is a future incident. An incident is a vulnerability that wasn't caught in time. A compliance gap is usually a vulnerability with a paperwork dimension. A pen test finding is a vulnerability somebody deliberately went looking for. A detection rule is an attempt to catch the moment a vulnerability becomes an incident. The categories are operational conveniences, not natural kinds.
The reason the org chart looks the way it does — and the reason the r/Azure poster has four consoles — is that the data for each lens lived in different tools, with different schemas, owned by different vendors, who had every commercial incentive to keep the data isolated. So the humans isolated too. A detection team. A vuln management team. A compliance team. An IR team. A pen test team. Each with its own muscle, its own tooling, its own ticket queue, its own relationships, its own quarterly review where nobody from the other teams shows up.
The category boundaries in security teams are largely a fossil of the SaaS pricing model.
The Security OS collapses the categories
Here's where the exponential thesis from earlier in the series comes back. If LLM inference is effectively free, and the Security OS is doing the data collection, the analysis, the correlation, the reasoning across every account and every log source and every control — then the boundary that used to separate "the CSPM lens" from "the data discovery lens" from "the identity lens" stops being load-bearing.
The same underlying environment — the same assets, identities, configurations, data, and behavior — can be looked at through any of the lenses on demand. The synthesis the r/Azure poster is doing manually at 7am, joining four consoles into one mental model, is exactly the work the Security OS is built to do continuously, in the background, across every customer and every scenario, without anyone asking.
The engineer's question shifts. It's no longer "what's in my queue today." It's "what's the most important thing happening across all the lenses right now?"
The compound insight
Here's the part that doesn't get said often enough: working across all the scenarios in parallel doesn't just make the engineer more efficient. It makes the engineer better at security, because the cross-scenario view surfaces things no single-scenario view could ever see.
Go back to the r/Azure post and read it again. The S3 bucket is a misconfig and a data exposure and a vulnerability and an identity problem. Any single lens calls it medium risk. All four lenses together call it the most important thing the company should fix this week. The poster knows this. He's doing the join in his head. That's the job he's tired of.
Now extend the same pattern:
The vuln management lens shows you that a particular asset has been on the "fix next quarter" list for three quarters. The detection lens shows you that the same asset has been generating low-severity anomalies for two of those quarters. The compliance lens shows you it's in scope for an upcoming audit. The pen test lens shows you that an external assessor flagged it last cycle.
Any one of those signals on its own is forgettable. All four together are an emergency. In the old org structure, the four signals lived in four different tools owned by four different teams, and the synthesis happened — if it happened at all — in a quarterly meeting six months too late.
In the new model, the synthesis is the default state. The engineer is no longer doing the synthesis themselves. The Security OS is. The engineer is the one who reads the synthesis and decides what it means.
This is the part of the work that's genuinely new. Not faster. Not cheaper. New. It wasn't possible before, at any price, because no human could hold the cross-scenario context at the necessary depth. Now the context holds itself, and the human gets to think about it.
What the engineer is actually for
After enough weeks of running this way, the role clarifies in a way that's hard to see from the inside of the old model.
The engineer is not the executor. The engineer is not the analyst. The engineer is not the person joining four consoles by hand at 7am. The engineer is the integrator and the decider. The Security OS runs the operational machine across all the scenarios, in parallel, at full depth. The engineer reads the cross-scenario picture and decides what the security program is going to do about it.
This is closer to how senior security thinking was always supposed to work, and further from how most senior security people have ever actually been allowed to spend their time.