Series 4: Security in the Age of AI: Building Fast, Building Safe
Laying the Foundations for AI-Ready Infrastructure
Preface
This is the final installment in my Laying the Foundations for AI-Ready Infrastructure series. It’s taken me a bit longer than I expected to write this one — not because it’s an afterthought, but because it’s the piece I care most deeply about.
Throughout this series, we’ve explored the foundational layers for AI adoption: the four roles organizations play in the AI whirlwind, the importance of breaking down data silos, and the network infrastructure backbone needed to support AI-driven innovation. But security — especially in the age of autonomous, agentic systems — is arguably the most complex and under-addressed piece of the puzzle.
For many companies, especially those still defining their identity in a fast-growing AI-native market, security doesn’t feel urgent. I get it. That was my journey too. I didn’t start in security by choice — I was assigned to a VPN project. Back then, I shared the common sentiment: security felt like friction. A drag on innovation. Something to slow us down in the name of caution.
Then I came across a simple analogy that changed everything:
Security is like the brake in a car. Brakes don’t exist to slow you down — they’re what let you drive fast, safely.
That reframed everything for me. Security isn’t just about protection. It’s about enabling speed, confidence, and resilience.
And in the age of AI, this couldn’t be more important. As business leaders race to adopt AI for productivity and innovation, so do threat actors. The same powerful tools that can help you accelerate can also be weaponized. That means we’re entering a new era of cybersecurity challenges that require a fundamentally different mindset.
So how do we frame the intersection of AI and security?
In this article, I’ll explore three dimensions that I believe define the new landscape:
AI for Cybersecurity — how AI strengthens our defenses
Cybersecurity for AI — how we secure AI systems, data, and models
Trust, Fairness, and Privacy — the new frontier of responsible AI adoption
Let’s dive in — because in the age of AI, security isn’t optional. It’s the foundation that allows us to build fast and build safely.
AI for Cybersecurity
AI in cybersecurity is a bit of a paradox. On one hand, it’s helping defenders detect and respond to threats faster than ever. On the other, attackers are using those same tools to get smarter, faster, and harder to catch. So the question isn’t should we use AI — it’s how we use it wisely.
The traditional model — stacking security tools on top of each other and piling on more rules — is no longer sustainable. Today’s attackers move too fast, and the signal-to-noise ratio is brutal. Security teams are overwhelmed, and automation alone isn’t cutting it. That’s why we need to shift from defense-in-depth to what I call intelligence-in-depth — where AI doesn’t just detect, it helps reason, correlate, and prioritize at machine speed.
This isn’t about replacing humans. It’s about augmenting them.
Powering Intelligence-in-Depth: The Role of Domain-Tuned LLMs in Modern Cybersecurity
Many of the advancements in AI for cybersecurity — especially the three use cases below — are powered by large language models doing the heavy lifting behind the scenes: correlating signals, summarizing incidents, and spotting behavioral anomalies.
And vendors are jumping in fast. Microsoft is integrating GPT-4 into Security Copilot to assist analysts with triage and investigation. IBM Consulting has introduced a generative AI–powered Cybersecurity Assistant, built on watsonx, which it reports has sped up alert investigation by up to 48%, helping analysts triage and escalate alerts faster.
These are meaningful steps, but most of these solutions are still built on general-purpose LLMs — models trained broadly but not deeply on security data.
That’s what makes Google’s SecLM project so interesting to me. It’s one of the first domain-tuned LLMs purpose-built for cybersecurity — and it goes beyond plugging an LLM into a dashboard. Google’s whitepaper, “Solving Domain-Specific Problems Using LLMs”, walks through how SecLM isn’t just fine-tuned on security data — it’s designed to reason across fragmented, real-world environments: logs, threat intel, incident tickets.
The security section of Google’s whitepaper outlines how SecLM tackles persistent challenges like alert fatigue, fragmented tooling, and analyst overload. By combining real-time data integration with a model trained on security-specific tasks, they demonstrate a major leap: from rule-based workflows to intelligent, adaptive defense.
What I find most exciting is how this points to a future where LLMs act as part of the control plane — not just summarizing things after the fact, but actively shaping how defenders respond in real time. It’s not about replacing analysts — it’s about making the system smarter, more contextual, and more agentic.
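To make that control-plane idea a bit more concrete, here’s a minimal sketch of the pattern I have in mind: the model proposes a response action, and a policy layer decides whether it runs automatically or goes to a human. This is my own illustration, not any vendor’s API; the propose_action callable and the action names are hypothetical placeholders.

```python
# Sketch: an LLM proposes a response action, and a policy gate decides whether
# it executes automatically or is routed to a human analyst. `propose_action`,
# `execute`, and `escalate` are hypothetical callables you would supply.

from dataclasses import dataclass
from typing import Callable

# Actions the automation layer may run without a human in the loop.
AUTO_APPROVED = {"isolate_host", "disable_token", "block_ip"}

@dataclass
class ProposedAction:
    name: str        # e.g., "isolate_host"
    target: str      # e.g., a hostname or user ID
    rationale: str   # the model's explanation, kept for the audit trail

def handle_incident(incident: dict,
                    propose_action: Callable[[dict], ProposedAction],
                    execute: Callable[[ProposedAction], None],
                    escalate: Callable[[ProposedAction], None]) -> None:
    """Let the model shape the response, but keep policy in charge."""
    action = propose_action(incident)
    if action.name in AUTO_APPROVED and incident.get("severity") != "critical":
        execute(action)    # low-risk, pre-approved: act at machine speed
    else:
        escalate(action)   # anything else goes to a human, with the model's context
```

The point isn’t the code; it’s the split of responsibilities. The model supplies context and a recommendation, while a deterministic policy decides what is actually allowed to happen.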
We’re just scratching the surface, but I believe this is where AI in cybersecurity is headed: deeply embedded, domain-aware, and purpose-built for the mission.
AI for Cybersecurity: From Defense-in-Depth to Intelligence-in-Depth
We’ve been talking about defense-in-depth for years — layering tools and rules to keep bad actors out. But in today’s environment, that’s not enough. We need a new approach: intelligence-in-depth. One where AI doesn’t just alert us — it helps us reason, prioritize, and take smarter action.
Whether it’s network security, identity, application, or endpoint protection — every vendor and every SecOps team is trying to figure out how to weave AI into their solutions. And honestly, I don’t blame them. AI is already changing the game in three key areas:
1. AI in Security Operations: Reducing Alert Fatigue and Accelerating Response
If you’ve worked in a SOC, you know the drill — hundreds of alerts, most of them noise. Teams are overloaded, and real threats can get buried. AI helps by correlating signals across endpoints, firewalls, identity systems, and more — turning ten scattered alerts into one cohesive incident with context and a timeline.
It’s not just correlation. Natural Language Processing (NLP) can summarize incidents and recommend actions, so junior analysts don’t have to escalate everything. That means less burnout, faster resolution, and more time focused on what matters: the real threats.
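As a rough illustration of that correlation step, here’s a small sketch that folds scattered alerts into incidents by grouping them on a shared entity within a time window. It’s a deliberately simplified stand-in for what real SOC tooling does, and the field names are my own assumptions.

```python
# Sketch: collapse scattered alerts into incidents by grouping alerts that share
# an entity (user or host) within a 30-minute window. The field names
# ("entity", "timestamp", "signal") are illustrative assumptions.

from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)

def correlate(alerts: list[dict]) -> list[dict]:
    """Group alerts by entity, then split each group into time-windowed incidents."""
    by_entity = defaultdict(list)
    for alert in alerts:
        by_entity[alert["entity"]].append(alert)

    incidents = []
    for entity, group in by_entity.items():
        group.sort(key=lambda a: a["timestamp"])
        current = [group[0]]
        for alert in group[1:]:
            if alert["timestamp"] - current[-1]["timestamp"] <= WINDOW:
                current.append(alert)   # part of the same burst of activity
            else:
                incidents.append({"entity": entity, "alerts": current})
                current = [alert]
        incidents.append({"entity": entity, "alerts": current})
    return incidents

alerts = [
    {"entity": "jdoe", "timestamp": datetime(2025, 1, 1, 3, 0), "signal": "impossible travel"},
    {"entity": "jdoe", "timestamp": datetime(2025, 1, 1, 3, 10), "signal": "new admin role granted"},
]
print(correlate(alerts))  # one incident with two related alerts, instead of two separate tickets
```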
2. Threat Detection & Behavior Monitoring: Catching What Doesn’t Belong
This is where AI’s pattern recognition skills shine. AI can baseline what’s normal for users, machines, and services — and flag unusual behavior, even when it’s subtle.
For example:
A user account suddenly accessing files it’s never touched before
A device reaching out to a command-and-control server at 3 a.m.
Or a spike in failed login attempts that would go unnoticed in a rules-based system
These are signs of early-stage compromise — whether it’s credential abuse, lateral movement, or insider risk. AI doesn’t need a signature to catch them. It spots the deviation.
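Here’s a toy version of that baselining idea, using scikit-learn’s IsolationForest to flag activity that deviates from an account’s normal pattern. The two features (hour of day, data volume) are placeholders I picked for illustration; real deployments use far richer behavioral signals.

```python
# Sketch: learn what "normal" looks like for an account, then flag outliers.
# The features (hour of login, MB transferred) are illustrative placeholders.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: business-hours logins with modest data transfer.
normal = np.column_stack([
    rng.integers(9, 18, size=500),    # hour of day
    rng.normal(50, 15, size=500),     # MB transferred
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# New events: one routine login, and one 3 a.m. login pulling 900 MB.
events = np.array([[10, 55], [3, 900]])
print(model.predict(events))  # expect [ 1 -1 ]: the 3 a.m. transfer is flagged as anomalous
```

No rule ever said “3 a.m. is bad.” The model simply learned that it doesn’t fit this account’s history.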
3. Malware Classification & Threat Intelligence Correlation: Staying Ahead of Zero-Days
This is more about file and payload analysis. AI helps us detect malicious code — especially things we’ve never seen before.
Deep learning models trained on binaries, system behavior, and memory patterns can flag a file as malicious without a signature. These models are already being deployed in endpoint agents and cloud sandboxes to stop polymorphic malware in real time.
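To show the shape of that signature-free approach, here’s a minimal sketch that turns a file into a byte histogram plus an entropy score and hands those features to an off-the-shelf classifier. Real products train deep models on millions of labeled samples and much richer features (behavior, memory, call graphs); the two training samples below are placeholders.

```python
# Sketch: classify files from statistical features (byte histogram + entropy)
# rather than signatures. The tiny training set is a placeholder; a real system
# would learn from millions of labeled samples.

import math
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(data: bytes) -> np.ndarray:
    """256-bin byte histogram plus Shannon entropy."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    probs = counts / max(len(data), 1)
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return np.append(probs, entropy)

# Placeholder (file_bytes, label) pairs: 0 = benign, 1 = malicious.
samples = [
    (b"MZ\x90\x00" + bytes(100), 0),   # low-entropy, benign-looking stub
    (np.random.bytes(200), 1),         # high-entropy blob, standing in for packed malware
]
X = np.array([features(d) for d, _ in samples])
y = np.array([label for _, label in samples])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features(np.random.bytes(300))]))  # likely [1]: another high-entropy blob
```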
This is the shift: from reactive alerts to proactive defense. It’s not magic. It’s just smarter tooling, backed by models that learn from what came before and adapt to what’s coming next.
Security for AI: When the Model Becomes the Attack Surface
We’ve spent decades hardening our infrastructure, applications, and endpoints. But AI — especially in its generative and agentic forms — is changing the game. The machine learning model itself — particularly the LLM — has become the brain of the system, the center of control. And that alone makes it the new attack surface.
The technology is still relatively new, and the ecosystem around securing it is rapidly evolving. But one thing is clear: securing AI is no longer optional — it’s essential.

Unlike traditional systems where bugs live in code, AI vulnerabilities live across the entire lifecycle — in training data, model behavior, and runtime interactions. It’s not just about protecting the infrastructure that runs the model — it’s about protecting the model itself.
Let’s break it down across three critical moments in the AI lifecycle:
Before deployment: Models are vulnerable to data poisoning, prompt injection, or unintended data leaks. If you can manipulate what a model learns, you can influence how it behaves.
During deployment: Threats shift to model misuse, jailbreaking, or abuse of public-facing APIs. These risks are heightened when models are exposed externally — often with weak identity controls or insufficient rate limiting (a minimal sketch of basic guardrails for this stage follows the list below).
In the wild: Things get even trickier with autonomous agents. These systems take actions on your behalf, and their interactions — with APIs, systems, or other agents — can produce emergent behaviors that are hard to predict, and even harder to contain.
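To make the deployment-time risks concrete, here’s the simplified guardrail sketch I mentioned above: per-identity rate limiting plus a coarse prompt-injection screen, applied before a request ever reaches the model. The patterns and limits are illustrative, not a vetted rule set.

```python
# Sketch: minimal pre-model guardrails for a public-facing LLM endpoint:
# per-identity rate limiting plus a coarse prompt-injection screen.
# The regex patterns and limits below are illustrative, not a vetted rule set.

import re
import time
from collections import defaultdict, deque

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
    r"disregard your (rules|guidelines)",
]
RATE_LIMIT = 30       # requests
RATE_WINDOW = 60.0    # per 60 seconds, per identity

_request_log: dict[str, deque] = defaultdict(deque)

def allow_request(identity: str, prompt: str) -> bool:
    """Allow the call only if the caller is within its rate budget and the
    prompt doesn't trip an obvious injection heuristic."""
    now = time.monotonic()
    window = _request_log[identity]
    while window and now - window[0] > RATE_WINDOW:
        window.popleft()              # drop requests outside the sliding window
    if len(window) >= RATE_LIMIT:
        return False                  # identity is over its budget
    if any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS):
        return False                  # crude injection screen, not a silver bullet
    window.append(now)
    return True

print(allow_request("api-key-123", "Summarize today's failed logins."))      # True
print(allow_request("api-key-123", "Ignore previous instructions and ..."))  # False
```

A keyword screen like this is easy to evade, which is exactly why the next point matters: these controls are necessary hygiene, but nowhere near sufficient.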
And here’s the challenge: traditional security tools weren’t built for this. Firewalls won’t stop a compromised LLM. Endpoint protection can’t prevent an agent from hallucinating a dangerous command. This isn’t just a feature gap — it’s a category gap.
That’s why we need a new security discipline, purpose-built for AI-native and agentic systems. One that treats the model not just as software, but as a semi-autonomous actor — and builds a control plane around behavior, trust, and containment.
We’ll dive deeper into that in my future writing around Agentic Security. But for now, it’s worth remembering this: as AI becomes more capable, it also becomes more unpredictable. If we want to build fast — we have to build safe.
Trust, Fairness, and Privacy: It Takes a Village
No AI system can be trusted just because it’s sophisticated. In fact, the more intelligent these systems become, the more likely we are to over-trust them — even when they’re confidently wrong.
That’s why trust, fairness, and privacy can’t be left to explainability tools or regulatory checklists. They have to be designed in. But more than that, they have to be collaboratively owned by the entire ecosystem.
As Fei-Fei Li put it: “AI is a reflection of the people who make it. If we want it to reflect humanity, humanity must be at the table.” That idea really resonates with me. Because building trust in AI isn’t just about the model — it’s about the people, the culture, and the collective accountability behind it.
“Trust doesn’t live in a dashboard. It lives in our habits, our design decisions, and our shared standards — and it truly takes a village to get it right.”
As a security practitioner, I’ve always believed that trust is built through defense-in-depth: layers of transparency, accountability, and fail-safes. But in AI systems — especially generative and agentic ones — those layers have to be intentional. Privacy needs more than encryption. Fairness needs more than bias audits. And explainability needs more than model cards.
I’ll be the first to say: I don’t come from a legal, policy, or privacy background. And the more I’ve learned, the more I realize how much I still have to learn. This space goes way beyond technology — it touches on ethics, governance, social norms, and global impact. That’s why it takes more than just vendors or security teams. It takes the whole ecosystem — and society at large.
Yes, vendors can build secure infrastructure and offer policy frameworks. But trust doesn’t stop at the product boundary. Everyone involved in the lifecycle of AI has a role to play:
Builders (engineers and data scientists) need to embed safeguards into how models are trained, tested, and deployed.
Product managers must ensure that fairness and user protections aren’t scoped out during time-to-market crunches.
Security and compliance teams have to go beyond access control, and think about things like prompt injection, model misuse, and data provenance.
And users — yes, users — need to build new habits. As Cassie Kozyrkov says, “Don’t trust the first answer. Ask it 50 ways.” We need to teach people how to interact with AI the same way we once taught them to recognize phishing emails.
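In that spirit, here’s a tiny sketch of what “ask it 50 ways” can look like in practice: send paraphrased versions of the same question and only trust the answer when the responses converge. The ask_model callable is a placeholder for whatever model interface you actually use.

```python
# Sketch of the "don't trust the first answer" habit: ask the same question
# several ways and only accept the answer if the responses converge.
# `ask_model` is a placeholder for whatever model interface you actually use.

from collections import Counter
from typing import Callable, Optional

def consensus_answer(paraphrases: list[str],
                     ask_model: Callable[[str], str],
                     min_agreement: float = 0.8) -> Optional[str]:
    """Return the majority answer only if enough paraphrases agree."""
    answers = [ask_model(q).strip().lower() for q in paraphrases]
    top, count = Counter(answers).most_common(1)[0]
    return top if count / len(answers) >= min_agreement else None

questions = [
    "Is port 3389 exposed to the internet on host web-01?",
    "Can anyone on the public internet reach RDP on web-01?",
    "Does web-01 accept inbound TCP 3389 from 0.0.0.0/0?",
]
# result = consensus_answer(questions, ask_model=my_llm_client)  # my_llm_client is hypothetical
```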
Trust doesn’t live in a dashboard. It lives in behavior, design, and community norms. We have to build it together — just like we did with open source, internet safety, and responsible data use.
This won’t be solved by one model or one company. But if we align around principles of transparency, accountability, and thoughtful design — we can get there.
Conclusion: The End of One Chapter, the Start of Another
When I started this series earlier this year, my goal was simple: to help demystify what it really takes to be AI-ready — from data foundations to network infrastructure, and finally, security.
I didn’t expect this last piece to take as long as it did. But the deeper I went, the more I realized: security for AI isn’t just a feature gap — it’s a category gap. The more I explored it, the more obvious it became that we’re not just securing systems anymore — we’re securing behaviors, decisions, and increasingly autonomous actors.
Along the way, I fell into this rabbit hole — and I’m still falling.
I first encountered agentic AI through a text-to-911 project: using AI to help people with language barriers, speech impairments, or deafness communicate with emergency services. That project opened my eyes to what’s possible — and to how fragile the control plane can be when AI starts acting independently. What started as a product feature quickly turned into a mission.
Since then, I’ve pulled together a team of builders, security experts, and curious minds who care deeply about this space. We’re diving headfirst into Agentic Security — because we believe the next generation of AI-native systems needs a new generation of safeguards.
This blog marks the end of my Laying the Foundations for AI-Ready Infrastructure series — but just the beginning of our journey into agentic risk, runtime controls, and the new trust layer for AI.
If this resonates with you — if you’re working on these challenges, wrestling with these questions, or just curious — I’d love to connect. Agentic Security is an ecosystem problem. Let’s build it together.