I've been in cybersecurity for 25 years. Started my company in 2001, back when we called it "computer security" and customers would tell us "we have antivirus" and walk away. I've seen hype cycles come and go. Cloud security, zero trust, blockchain for everything.
This is not a hype cycle. AI is fundamentally reshaping every domain in cybersecurity, and the pace is unlike anything I've experienced. I'm not saying this as someone watching from the sidelines. We run a 500-person security services firm. We also built Transilience, an AI security startup. I see the transformation from both ends: the company deploying AI, and the company helping clients deal with what AI means for their security posture.
Here's what I'm actually seeing across every practice area. No hype, no hand-waving. Just what's happening on the ground.
GRC: The Domain That Changed Overnight
I'll say it plainly. GRC is becoming highly automated. AI agents are doing the work and humans have very little to do.
This isn't some future prediction. At Network Intelligence, our GRC business already operates at higher gross margins than other projects where we haven't deployed AI. That's because agents now handle:
- policy and SOP creation
- policy and SOP validation against a standard
- pulling cloud configuration and validating it against control frameworks
- creating unified compliance across all applicable frameworks and regulations
- answering vendor and customer questionnaires
- validating evidence (screenshots, documents, config files) against stated controls
- writing audit reports
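To make the "pulling cloud configuration and validating against control frameworks" step concrete, here is a minimal sketch. The control IDs, config fields, and expected values are all illustrative, not from any real framework mapping; in practice an agent fills the config dict from a cloud provider API and the control set from the applicable standard.

```python
# Hypothetical sketch: validate pulled cloud configuration against a
# control set. Control IDs and fields are invented for illustration.

def validate_config(config: dict, controls: dict) -> list:
    """Return a list of (control_id, status) findings."""
    findings = []
    for control_id, check in controls.items():
        actual = config.get(check["field"])
        status = "PASS" if actual == check["expected"] else f"FAIL (got {actual!r})"
        findings.append((control_id, status))
    return findings

# Example: an S3-style bucket config checked against two controls.
bucket = {"encryption": "AES256", "public_access_blocked": False}
controls = {
    "CTRL-ENC-01": {"field": "encryption", "expected": "AES256"},
    "CTRL-PUB-02": {"field": "public_access_blocked", "expected": True},
}

for control_id, status in validate_config(bucket, controls):
    print(control_id, status)
```

The same loop generalizes to evidence validation: swap the config dict for extracted facts from a screenshot or document, and the agent's job becomes producing that dict reliably.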
Someone pushed back on me recently and said "that's just scripting, not real AI." I'd encourage anyone who thinks that to go try it. Take the tasks you do every day. Ask Claude to help you automate them. See where it takes you. Then tell me AI doesn't have a role to play.
Now, here's the nuance. The most valuable GRC people on my team are not the ones doing checkbox audits. They're the ones who took this disruption and ran with it. They're now extending into pen testing, helping customers resolve issues in their cloud environments, even doing aspects of appsec and architecture work that they could never have touched before. AI made them more capable across a wider set of domains.
If you're a GRC professional with 3 years of consulting experience wondering if you should pivot, I'm sorry to be the bearer, or confirmer, of bad news. But don't pivot out of cyber. Start building agents. Automate as much of your own work as you can, and see what insights that gives you. Maybe the pivot is into AI governance and safety. Maybe it's deeper into cloud compliance engineering. You won't know until you build.
Penetration Testing: The Machines Are Already Competitive
Oh yes. Pen testing is getting automated. There are literally hundreds of GitHub repos that automate penetration testing. Ours is at github.com/transilienceai/communitytools. It has reached Elite Hacker level on HITB and is still climbing.
But here's what's really wild. We ran an experiment. Three different AI-powered penetration testing approaches against a known vulnerable server. The results were eye-opening. One approach using Claude Code as the main automation platform performed remarkably well against the target, finding valid vulnerabilities with decent accuracy.
And then Stanford and CMU published a paper where their AI agent ARTEMIS placed 2nd out of 11 testers in a live enterprise pen test, outperforming 9 out of 10 human professionals. The AI cost about $18/hour running on GPT-5. The average US pentester costs about $60/hour loaded.
The strengths are clear: systematic enumeration, parallel exploitation, cost-effective continuous testing. The weaknesses are also clear: higher false-positive rates, struggles with GUI-based assessments, and a tendency to settle for low-severity findings instead of chaining to critical ones.
What this tells me is that the hybrid model wins. AI handles the systematic, repetitive reconnaissance and exploitation. Humans bring the creativity, the lateral thinking, the ability to understand business context and chain findings into real attack narratives.
AI red teaming will still require human input: guiding the tool, interpreting results in business context, building the narrative that makes a boardroom sit up. But the days of a human pentester manually running nmap, nikto, and gobuster? Those are done.
Security Operations: Do We Even Need Logs?
This is where my thinking has gotten a bit radical.
I asked myself a question last year. If I had to build a SIEM replacement from first principles, knowing what we know about SIEM failures and EDR/XDR capabilities, would I even collect logs?
The current SOC model is broken. Analysts face alert fatigue, false positive rates exceed 90%, and the cybersecurity workforce shortage means we can't hire our way out of the problem. Every SIEM I've seen becomes a data lake that's expensive to maintain and painful to query.
So what if we killed the log lake and built a behavioral graph instead? Every entity (user, device, file, process, network connection) becomes a node. Every interaction becomes an edge with temporal properties. You're not searching logs anymore. You're querying a living graph of "what touched what, when, and how."
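The behavioral graph idea fits in a few lines. This is a deliberately minimal sketch, stdlib only; the entity names, actions, and timestamps are invented, and a production version would sit on a real graph store with retention and indexing.

```python
# Minimal behavioral-graph sketch: entities are nodes, interactions are
# timestamped edges. All entities and actions below are invented examples.
from collections import defaultdict

class BehaviorGraph:
    def __init__(self):
        # src entity -> list of (dst entity, action, timestamp)
        self.edges = defaultdict(list)

    def record(self, src, dst, action, ts):
        self.edges[src].append((dst, action, ts))

    def touched(self, src, since=0):
        """What did `src` touch, when, and how, after timestamp `since`?"""
        return [(d, a, t) for d, a, t in self.edges[src] if t >= since]

g = BehaviorGraph()
g.record("user:alice", "proc:python", "spawn", 900)
g.record("user:alice", "host:db01", "ssh_login", 1000)
g.record("host:db01", "file:/etc/shadow", "read", 1005)

print(g.touched("user:alice", since=950))  # only edges after t=950
```

The point of the model is that an investigation becomes a graph traversal ("follow everything db01 touched after alice logged in") rather than a full-text search over terabytes of raw logs.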
Or what about inverting the detection model entirely? Instead of "analyze everything, alert on suspicious," flip it. Assume everything is fine, prove otherwise. Start with known-good baselines. Only investigate deviations.
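Inverted detection can be sketched just as briefly. The baseline entries below are illustrative (process/parent pairs are one common behavioral signal); the real work is building and maintaining the known-good baseline, not the comparison itself.

```python
# Sketch of the inverted model: assume everything is fine, surface only
# deviations from a known-good baseline. Baseline entries are illustrative.

BASELINE = {
    ("chrome.exe", "explorer.exe"),   # (process, parent) pairs seen in normal ops
    ("winword.exe", "explorer.exe"),
}

def deviations(events):
    """Return only events whose (process, parent) pair is not baselined."""
    return [e for e in events if (e["process"], e["parent"]) not in BASELINE]

events = [
    {"process": "chrome.exe", "parent": "explorer.exe"},       # normal
    {"process": "powershell.exe", "parent": "winword.exe"},    # macro spawning a shell
]
print(deviations(events))
```

Everything that matches the baseline generates zero analyst work by construction, which is exactly the opposite of "alert on suspicious."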
And here's the nuclear option I keep coming back to. What if there is no SIEM? Just EDR/XDR with good APIs, a threat intelligence layer, automation tools, and notebooks for ad-hoc investigation. The "SIEM" becomes Python scripts and APIs.
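What "the SIEM becomes Python scripts and APIs" looks like in miniature: pull detections from an EDR API, enrich against threat intel, route. Everything here is a stand-in; `fetch_edr_alerts` and the TI feed are invented for the sketch and assume no real vendor's API.

```python
# Hedged sketch of the no-SIEM pipeline. fetch_edr_alerts() stands in for
# a real EDR REST call; the TI feed is a toy indicator lookup.

TI_FEED = {"198.51.100.7": "known C2 infrastructure"}  # example indicator

def fetch_edr_alerts():
    # Stand-in for a paginated EDR API call.
    return [{"host": "db01", "remote_ip": "198.51.100.7", "severity": "medium"}]

def triage(alerts, ti_feed):
    enriched = []
    for a in alerts:
        hit = ti_feed.get(a["remote_ip"])
        a["ti_match"] = hit
        # Escalate anything matching threat intel, regardless of raw severity.
        a["priority"] = "escalate" if hit else a["severity"]
        enriched.append(a)
    return enriched

for alert in triage(fetch_edr_alerts(), TI_FEED):
    print(alert["host"], alert["priority"], alert["ti_match"])
```

Swap the stubs for real API clients and a notebook for ad-hoc queries, and you have the skeleton of the architecture: detection stays in the EDR/XDR, enrichment and routing live in code you own.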
At Transilience, we've built 24/7 security monitoring and vulnerability prioritization that uses AI agents to move beyond traditional scoring. We use real-world threat intelligence and business context to help security teams prioritize what actually matters. Our clients have seen vulnerability backlogs reduced by 70% and triage time cut from weeks to minutes.
The SOC of the future isn't 50 analysts staring at Splunk. It's 5 people directing AI agents that do the detection, investigation, and initial response, with humans making the judgment calls on novel, unexpected threats.
Security Awareness: AI-Powered Social Engineering Changes Everything
Here's something that should keep every CISO up at night. Voice cloning with 3 seconds of audio. Polymorphic phishing at scale. AI-generated social engineering that's personalized to every target using their own LinkedIn profile.
The offense has gotten dramatically more sophisticated. An attacker with AI can generate targeted phishing campaigns that would have taken a dedicated team weeks to craft. They can create deepfake audio of your CEO requesting an urgent wire transfer. They can spin up convincing personas across multiple channels simultaneously.
This means traditional security awareness training, the kind where you show employees a PowerPoint about not clicking suspicious links once a year, is basically useless now. The attacks don't look suspicious anymore. That's the whole point.
What I think works is continuous, AI-powered simulation. Real-time phishing tests that use the same techniques the attackers use. Training that adapts to each person's failure patterns. And honestly? Building technical controls that don't depend on humans making the right call every single time, because increasingly, even trained humans can't tell the difference.
Deploy fraud awareness in every language your workforce speaks. Run simulations frequently, not annually. And accept that the human layer is your weakest link, not because people are dumb, but because the AI-powered attacks are just that good.
AI Security: Securing the Thing That's Changing Everything
This is the domain that didn't exist when I started my career, and it might be the most important one now.
Prompt injection is an unsolvable problem. I've said this publicly, and I'll say it again. No matter how much input validation you build, a determined attacker will get through. So where does the real security layer go? At the tool call boundary. At the database access point. That's where the enforcement has to happen.
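Here is a minimal sketch of what enforcement at the tool call boundary means: no matter what the model asks for, injected or not, a fixed policy runs before any tool does. The tool names and the policy itself are illustrative, not a complete design.

```python
# Sketch: policy enforcement at the tool-call boundary. The model's output
# is untrusted input; the guard applies policy before dispatch.
# Tool names and rules below are invented for illustration.

ALLOWED_TOOLS = {"read_file", "run_query"}

def guard_tool_call(tool: str, args: dict) -> bool:
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} not in allowlist")
    if tool == "run_query":
        stmt = args.get("sql", "").strip().lower()
        if not stmt.startswith("select"):
            raise PermissionError("only read-only SQL is permitted")
    if tool == "read_file" and ".." in args.get("path", ""):
        raise PermissionError("path traversal blocked")
    return True  # policy passed; dispatch to the real tool here

guard_tool_call("run_query", {"sql": "SELECT * FROM tickets"})  # allowed
try:
    guard_tool_call("run_query", {"sql": "DROP TABLE tickets"})
except PermissionError as e:
    print("blocked:", e)
```

The key property: the guard never looks at the prompt. Even a perfectly successful injection can only request actions the boundary already permits.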
We've been building and testing at this intersection: AI red teaming, AI governance, LLM security assessments, and AI supply chain security. The attack surface is enormous. MCP servers alone have introduced a whole new class of vulnerabilities. Command injection in 43% of implementations, path traversal in 22%, SSRF in 30%. CVEs scoring 9.4 on CVSS.
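The two biggest classes above, command injection and path traversal, have well-understood fixes that MCP server authors keep skipping. A sketch, with invented tool functions (not from any real MCP server):

```python
# Illustrative fixes for the two most common MCP-style bugs.
# safe_ping and safe_read are invented example handlers.
from pathlib import Path

SANDBOX = Path("/srv/mcp/workspace").resolve()  # hypothetical sandbox root

def safe_ping(host: str) -> list:
    # Build an argument list, never a shell string: a payload like
    # "8.8.8.8; rm -rf /" stays a single harmless argument. Pass the
    # list to subprocess.run(...) without shell=True.
    return ["ping", "-c", "1", host]

def safe_read(relpath: str) -> Path:
    # Resolve the requested path and confirm it is still inside the sandbox.
    target = (SANDBOX / relpath).resolve()
    if not target.is_relative_to(SANDBOX):
        raise PermissionError(f"path escapes sandbox: {relpath}")
    return target

print(safe_ping("8.8.8.8; rm -rf /"))  # one argv entry, no shell to abuse
```

Nothing here is novel; it's the same discipline web apps learned twenty years ago, now missing from a brand-new protocol surface.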
And then there are the attacks you can't even see coming. Google DeepMind recently published work on "Persona Hyperstition", the idea that the internet's narratives about AI models get scraped into training data, and the models start becoming the thing people say they are. It's not data poisoning in the traditional sense. It's cultural narrative manipulation. Indirect, deniable, and nearly impossible to filter for because the "attack" is just people talking about AI on the internet.
For security professionals, AI security is where I'd place my biggest bet. Understand AI from an architecture perspective. Build your own agents. Figure out MCP and multi-agent architectures. How do you do identity and access management, observability, input validation, output validation when it comes to agents? The confluence of AI/ML knowledge combined with cybersecurity is going to be a high-demand skill going forward.
The Uncomfortable Truth About All of This
I've committed to everyone at Network Intelligence that we won't fire a single person because AI is replacing their job. We're investing in training and upskilling. But I'm not sure every other cybersecurity and services firm will take that approach.
Entry-level jobs are disappearing. We used to hire hundreds of fresh graduates a year. Now it's 20 to 30. Our economy runs on the apprenticeship model. Junior people doing junior work, learning from seniors, eventually becoming the next generation of experts. AI is breaking that model. If AI handles the Level 1 and Level 2 work, how do we produce the next generation of Level 3 and Level 4 practitioners?
I don't have a clean answer. It does not bode well for the future, I fear.
But here's what I tell every cybersecurity professional who asks me what to do. Become one of the world's top 10 experts in one specific field. Azure forensics. Agentic AI security. OT security. Something narrow and deep. That kind of expertise makes you irreplaceable by AI. That is my current philosophy.
And if you're already feeling overwhelmed, if everything seems to be moving too fast, if you're running faster and faster just to stay in the same place, you're not alone. I've been through 2 or 3 burnout cycles myself. Have a hobby outside of cyber. Find a creative outlet. Get a coach or a therapist. I did a year with a business coach and it was quite life-changing.
The field is transforming. But the people who embrace the change, who start building and automating today, who go deep instead of staying broad, those people are going to be more valuable than they've ever been.