<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Detection at Scale]]></title><description><![CDATA[How AI is reshaping security operations from intelligent agents and detection to the data infrastructure underneath. A weekly newsletter by Jack Naglieri, founder & CEO of Panther.]]></description><link>https://www.detectionatscale.com</link><image><url>https://substackcdn.com/image/fetch/$s_!XxsJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66b9c789-fe95-4606-a6a9-244c47745de8_1080x1080.png</url><title>Detection at Scale</title><link>https://www.detectionatscale.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 29 Apr 2026 13:52:15 GMT</lastBuildDate><atom:link href="https://www.detectionatscale.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jack Naglieri]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[jacknaglieri@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[jacknaglieri@substack.com]]></itunes:email><itunes:name><![CDATA[Jack Naglieri]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jack Naglieri]]></itunes:author><googleplay:owner><![CDATA[jacknaglieri@substack.com]]></googleplay:owner><googleplay:email><![CDATA[jacknaglieri@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jack Naglieri]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Agents That Don't Wait for Alerts to Fire]]></title><description><![CDATA[How continuous agent hunting collapses the line between detection and threat hunting, and how the infrastructure today makes it 
possible.]]></description><link>https://www.detectionatscale.com/p/continuous-agent-hunting-detection-engineering</link><guid isPermaLink="false">https://www.detectionatscale.com/p/continuous-agent-hunting-detection-engineering</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Wed, 08 Apr 2026 12:48:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ccee369a-d1ba-4807-86bf-7a1d86c4f313_7952x5304.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes modern SOCs work. I&#8217;m Jack, founder &amp; CEO at Panther. If you find this valuable, please share it with your team.</em></p><div><hr></div><p>Every detection program in production today runs on the same basic loop. A detection engineer studies a threat, translates it into rule logic, and deploys it to a SIEM or detection platform. That rule sits in the pipeline, evaluating events as they flow through, waiting for a match. When the conditions are met, it fires an alert. That alert lands in a queue, and someone picks it up, investigates, and makes a judgment call on whether to close it or escalate it. Then the next alert comes in, and the cycle repeats.</p><p>This model has become more sophisticated over the years. Detection-as-code brought version control, testing, and CI/CD to rule management. Behavioral signals replaced some of the noisiest atomic alerts with entity-level risk scoring. AI agents now handle the triage step faster and more consistently than most human analysts can at volume. 
But the fundamental structure hasn&#8217;t yet changed, and detection is still a human-authored, queue-driven process.</p><p><strong>Coverage in a rule-based model is bounded by two things: what your detection engineers know to look for, and how much time they have to write and maintain rules for it.</strong> Threat hunting exists precisely because teams understand their rules don&#8217;t cover everything. Hunters go looking for things the rules missed, patterns that haven&#8217;t been codified, behaviors that are too nuanced or too novel for static logic. But hunting is expensive. It requires dedicated time and access to broad datasets and intelligence. Most teams run hunts infrequently, and many don&#8217;t sustain a formal hunting program at all.</p><p>The industry has spent years optimizing each step in this chain individually: better rules, faster triage, richer enrichment, smarter correlation. What it <em>hasn&#8217;t</em> done is question whether the chain itself is the right model when agents are capable of something fundamentally different. In this post, we&#8217;ll look at what happens when agents stop waiting for rules to fire and start hunting continuously on their own, how that changes the relationship between detection engineering and threat hunting, and why we&#8217;re still early in a transition that will reshape how SOCs generate and act on security findings.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><h2>What Changes When Agents Continuously Hunt</h2><p>An agent with access to a security data lake, enrichment sources, and threat intelligence doesn&#8217;t need someone to anticipate a threat before it can look for one. 
It can form a hypothesis, query the data, evaluate what it finds, and surface a structured finding, all without a rule telling it what to look for.</p><p>These building blocks already exist. Agents can now query log data, pull enrichments from identity providers and asset inventories, and cross-reference threat intelligence feeds to contextualize activities. Data lakes built on formats like Iceberg make months or years of security telemetry queryable in minutes. The agent doesn&#8217;t need a detection engineer to write a rule for &#8220;unusual cross-account role assumption patterns in AWS&#8221;; it can investigate that class of behavior directly by querying CloudTrail, correlating with identity context, and assessing whether the pattern deviates from the environment&#8217;s baseline.</p><p>What makes this different from existing threat-hunting workflows is its duration and autonomy. A human-led hunt is a project: someone defines a hypothesis, allocates time, runs queries, documents findings, and moves on. It happens once, and then it&#8217;s over until the next hunt. An agent-led hunt can run perpetually. The same investigative logic that a senior analyst would apply during a focused hunt can execute on a rolling basis, scanning for patterns across log sources, adjusting based on what it finds, and surfacing results as they emerge. The concept of a &#8220;hunt&#8221; as a discrete event starts to dissolve when the hunter never stops looking. Continuous agent hunting is the practice of deploying AI agents to independently scan security telemetry on a rolling basis, forming and testing hypotheses without waiting for a pre-authored detection rule to trigger. It collapses the distinction between &#8220;detection&#8221; and &#8220;threat hunting&#8221; into a single, ongoing process.</p><p><strong>This changes the role of detections in the overall security architecture. 
Rules don&#8217;t go away, but they no longer serve as the sole source of signal in the SOC.</strong> The alert queue gets populated by two streams: deterministic alerts from rules that matched known-bad patterns, and probabilistic findings from agents that noticed something worth investigating. The implication is that coverage is no longer gated solely by what your team had time to write rules for. The spaces between your rules, the gaps that today only get examined during periodic hunts, become continuously monitored territory.</p><h2>From Alert Queues to Continuous Findings</h2><p>In an agent-led SOC model, the alert queue still exists, but what populates it changes. Alongside the familiar rule-based alerts, agents surface findings from their own continuous analysis. The output looks similar to an alert (entities, evidence, confidence, next steps), but its origin is fundamentally different: it came from reasoning over data rather than a pattern match against a rule. These findings should be routed through the same case management or ticketing workflow as rule-based alerts.</p><p>The question is: how do you scope these hunts so agents aren&#8217;t just running open-ended queries against your entire log corpus? The answer is to give them an opinionated starting point. This is where the signals pattern becomes foundational. If you&#8217;ve already been labeling security-relevant events (authentication changes, privilege escalations, infrastructure modifications) and storing them as a filtered layer on top of your raw logs, you&#8217;ve built exactly the dataset that continuous agent hunting should operate against. These signals represent the &lt;1% of your log volume that is security-relevant but not necessarily alert-worthy on its own. 
A new IAM role creation in AWS, a service account added to a privileged group, and a conditional access policy change in Entra ID: none of these individually justifies waking someone up, but all of them are the kinds of activities an attacker would generate.</p><p>The mindset shift is subtle. In the rule-based model, you ask: &#8220;What specific pattern is malicious enough to fire an alert?&#8221; In the continuous hunting model, you ask: &#8220;What category of security-relevant activity should an agent be regularly reviewing for anomalies, patterns, or context that my rules wouldn&#8217;t catch?&#8221; You&#8217;re defining scopes, not rigid signatures.</p><p>In practice, this is a set of prompts that tell an agent what to look for. The structure should mirror what you&#8217;d write for a triage agent: define the threat model, explain why this activity matters, give the agent clear criteria for distinguishing risky from benign, call out common false positives, and specify the enrichment steps and external context sources that will help it make a confident judgment. </p><p>Here&#8217;s an example hunt prompt for IAM analysis in AWS:</p><blockquote><p><strong>Scope:</strong> Review all IAM signals from the past 24 hours, including role creations, policy attachments, trust policy modifications, and access key generation.</p><p><strong>Threat model:</strong> An attacker who has gained initial access to an AWS environment will often escalate privileges by creating new IAM roles, attaching permissive policies, or modifying trust relationships to enable cross-account movement. These changes are how an adversary establishes persistence and expands access beyond their initial foothold. IAM modifications are among the highest-value signals in a cloud environment because they directly affect who can access what.</p><p><strong>Assessing benign vs. 
risky:</strong> Most IAM changes in a healthy environment are made by infrastructure-as-code pipelines (Terraform, CloudFormation, Pulumi) running through CI/CD systems with known service roles. The key question isn&#8217;t whether the change happened, but whether the identity, method, and context are consistent with the organization&#8217;s normal change patterns. A role creation by a Terraform execution role during a deployment window, correlated with a recent merge in the infrastructure repository, is almost certainly routine. A role creation by a human identity through the console, especially one that doesn&#8217;t typically make IAM changes, is worth investigating.</p><p><strong>Common false positives:</strong> Platform engineering teams making manual IAM changes during incident remediation or environment bootstrapping. Automated security tooling (like AWS Config remediation or SCPs applied by a governance pipeline) that generates IAM signals as a side effect. Sandbox or development accounts where IAM experimentation is expected and doesn&#8217;t carry production risk.</p><p><strong>Investigation steps:</strong> For each IAM signal, query the identity&#8217;s activity history over the past 30 days via the SIEM to establish whether IAM changes are part of their normal pattern. Check the identity provider via MCP to determine the user&#8217;s role, team, and whether their account has been flagged for any access reviews. Correlate with CI/CD activity (via GitHub or deployment platform MCP) to determine if the change aligns with a recent code merge or deployment. 
Look for surrounding signals within a two-hour window: new console logins, AssumeRole calls from unfamiliar source accounts, changes to CloudTrail logging configuration, or S3 bucket policy modifications that could indicate data staging.</p><p><strong>Output:</strong> Produce a structured finding for any activity where the combination of identity, method, timing, and surrounding context suggests behavior outside the expected operational pattern. Include confidence level, supporting evidence, the enrichment sources consulted, and whether the finding warrants analyst review or can be logged as a tracked observation.</p></blockquote><p>That prompt doesn&#8217;t specify a single pattern that&#8217;s definitely malicious. It defines a threat model, provides the agent with judgment criteria to distinguish routine operations from attacker behavior, and points it to the enrichment sources that enable a confident assessment. The agent might find nothing notable for three days straight and then surface a finding on day four because a contractor&#8217;s identity, one that the identity provider shows hasn&#8217;t completed its latest access review, created an IAM role with a cross-account trust policy an hour after authenticating via the console for the first time in weeks. No rule was written for that exact chain of events. The agent connected the signals because it was told what mattered, why it mattered, and how to evaluate what it found.</p><p>You can build these hunting scopes across any domain where you have structured security signals: configuration changes, identity-based access anomalies, or data movement in cloud storage. 
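</p><p>A minimal sketch of what a library of standing hunt scopes could look like in code. Everything here is illustrative: the scope names, signal types, and scheduling are assumptions, and a real deployment would hand each due scope&#8217;s prompt to whatever agent runtime you use:</p>

```python
from dataclasses import dataclass

@dataclass
class HuntScope:
    """A standing brief: what an agent hunts through, and how often."""
    name: str
    signal_types: list[str]  # which curated signals the hunt reads
    prompt: str              # threat model, judgment criteria, steps
    cadence_hours: float     # how often the hunt should run

# Illustrative scope library; prompts abbreviated.
SCOPES = [
    HuntScope("aws_iam_review", ["iam_role_created", "iam_policy_attached"],
              "Review all IAM signals from the past 24 hours...", 24.0),
    HuntScope("data_movement", ["s3_bulk_read", "s3_policy_changed"],
              "Review data movement signals from the past hour...", 1.0),
]

def due_hunts(scopes: list[HuntScope],
              hours_since_last_run: dict[str, float]) -> list[HuntScope]:
    """Return scopes whose cadence has elapsed (never-run scopes are due)."""
    return [s for s in scopes
            if hours_since_last_run.get(s.name, float("inf")) >= s.cadence_hours]
```

<p>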
Each scope is a standing brief to an agent, and over time these briefs become a library of continuous hunts, each running on a cadence that matches the risk profile of the activity it monitors: IAM changes reviewed daily, data movement signals reviewed hourly, authentication policy changes reviewed in near real-time.</p><h2>What Happens to Detection Engineering</h2><p>Detection engineering doesn&#8217;t disappear in this model, but its center of gravity shifts. Rules become the known-good baseline: high-confidence, high-severity patterns that you want firing deterministically every time. If someone disables CloudTrail logging on a production account, you don&#8217;t need an agent to hypothesize about whether that&#8217;s interesting.</p><p>But the coverage between those high-confidence rules, the territory that today only gets examined during periodic hunts if it gets examined at all, becomes the domain of continuous agent analysis. <strong>Detection engineers spend less time trying to write rules for every conceivable variation of a behavior and more time defining the hunting scopes that agents operate against.</strong> The work shifts from authoring pattern-match logic to articulating threat models, judgment criteria, and enrichment paths that agents can execute on a rolling basis. In a sense, the prompt we walked through in the previous section is what a detection artifact starts to look like: not a rule, but a standing brief that encodes how your team thinks about a class of threat.</p><p>This doesn&#8217;t reduce the rigor of detection engineering. If anything, it raises the bar. Writing a rule that matches a specific log pattern is hard enough. Writing a hunting scope that teaches an agent how to distinguish attacker behavior from routine operations across multiple data sources, with clear guidance on false positives and enrichment steps, requires deeper threat modeling and a more explicit articulation of what &#8220;suspicious&#8221; actually means in your environment.
The discipline gets harder, not easier. But the coverage it produces is dramatically broader.</p><h2>We&#8217;re Still in the Early Innings</h2><p>Most security teams today are still in the first phase of the agentic transition: agents triaging alerts written by humans. That&#8217;s a meaningful step, and teams doing it well are already seeing real gains in analyst productivity and response consistency. But it&#8217;s a long way from agents generating their own findings autonomously.</p><p>Continuous agent hunting only works if agents have something worth hunting through. You need a queryable data lake with enough retention to establish behavioral baselines, structured security signals that give agents an opinionated starting point rather than a raw log firehose, and MCP tooling that connects agents to enrichment sources outside the SIEM. Without those foundations, an agent hunting continuously is just burning tokens and compute against unstructured data, producing low-confidence findings that create more work than they save. The economics of this architecture are directly tied to how well you&#8217;ve curated the data underneath it.</p><p>The trust model is also immature. Most organizations aren&#8217;t ready to act on a finding that an agent surfaced independently without a human validating the reasoning. 
That&#8217;s a reasonable position today, and teams should build toward autonomy incrementally rather than flipping a switch.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5hmB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5hmB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png 424w, https://substackcdn.com/image/fetch/$s_!5hmB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png 848w, https://substackcdn.com/image/fetch/$s_!5hmB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png 1272w, https://substackcdn.com/image/fetch/$s_!5hmB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5hmB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png" width="1456" height="243" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:243,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:51829,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/193488701?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5hmB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png 424w, https://substackcdn.com/image/fetch/$s_!5hmB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png 848w, https://substackcdn.com/image/fetch/$s_!5hmB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png 1272w, https://substackcdn.com/image/fetch/$s_!5hmB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04413164-de30-4450-bea7-d81bbd2d2aef_1633x272.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Each step requires the agent to earn trust through transparent reasoning and consistent accuracy. Most teams are somewhere in the first or second stage, and that&#8217;s fine. 
The architecture described in this post is what makes the later stages possible.</p><p>But the trajectory is clear. If you&#8217;re investing in a security data lake, security-relevant signals, detections designed for agent reasoning, and MCP connections to enrichment sources, you&#8217;re building the architecture that makes continuous agent hunting possible. The flywheel that detection engineering always promised (detect, learn, improve, repeat) finally spins at a pace that matches the threat landscape. The constraint was never the data or the models. It was the human bottleneck in the loop.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/continuous-agent-hunting-detection-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Detection at Scale! This post is public, so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/continuous-agent-hunting-detection-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/p/continuous-agent-hunting-detection-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p style="text-align: center;"><em>Cover photo by <a href="https://unsplash.com/@oksdesign?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Oxana Melis</a> on <a href="https://unsplash.com/photos/a-group-of-people-climbing-a-tall-metal-tower-fYTQz5lpAdM?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p><div><hr></div><h3>Continued Reading</h3><div
class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;58e82020-3026-4494-9c6f-d416603909d9&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale, a weekly newsletter on AI-native security operations. I&#8217;m Jack, Founder &amp; CEO at Panther. If you find this valuable, please share it!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;What Happens to Detections When Agents Do the Work&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-09T10:04:01.582Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1f0116e9-98fb-43cf-860f-656376847f6b_2400x1600.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ai-agent-alert-triage-detection-engineering&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:190319737,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at 
Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!XxsJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66b9c789-fe95-4606-a6a9-244c47745de8_1080x1080.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;2d37745a-b31a-4ef9-9d61-b8696fe3aecf&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes modern SOCs work. I&#8217;m Jack, founder &amp; CTO at Panther. If you find this valuable, please share it with your team!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Threat Hunting with Claude Code and MCP&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and 
response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-20T14:03:26.553Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db1dce9a-e23a-4654-ba11-dc12c4c7239b_4608x2592.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ai-threat-hunting-mcp-workflows&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:185101296,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:11,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!XxsJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66b9c789-fe95-4606-a6a9-244c47745de8_1080x1080.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[What Happens to Detections When Agents Do the Work]]></title><description><![CDATA[The mindset shift from alerting for humans to alerting for AI agents.]]></description><link>https://www.detectionatscale.com/p/ai-agent-alert-triage-detection-engineering</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ai-agent-alert-triage-detection-engineering</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Mon, 09 Mar 2026 10:04:01 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1f0116e9-98fb-43cf-860f-656376847f6b_2400x1600.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Detection at Scale, a weekly newsletter on AI-native 
security operations. I&#8217;m Jack, Founder &amp; CEO at <a href="https://panther.com/">Panther</a>. If you find this valuable, please share it!</em></p><div><hr></div><p>Every detection you&#8217;ve ever written was, at its core, a message to a human. The alert title, the severity label, the description, and the runbook &#8212; all of it was carefully designed to transfer situational awareness to an analyst as fast as possible. The assumption baked into that design was that a person would be on the receiving end: someone who would read the alert, follow a set of deterministic steps (check the asset inventory, look up the user&#8217;s recent activity, consult the runbook), and decide what to do next. That workflow, whether executed manually or partially automated through SOAR playbooks, was fundamentally built around human cognition as the reasoning engine. That assumption is now changing.</p><p>AI agents are increasingly handling the first layer of alert triage and investigation in modern SOCs. When an agent is the first reader of your detection, almost everything about what makes that detection good changes. The fields that matter, how severity gets used, what a runbook is actually for, and what noise even means all shift. And critically, the detection itself evolves: rather than pure rule logic designed to fire a signal at a human, detections combine rule logic with natural language prompts that communicate the threat model, the criteria for risky versus benign, and the investigative intent behind the rule. You&#8217;re not just writing logic anymore; you&#8217;re providing guidance for a sophisticated reasoning system.</p><p>The human role in this model doesn&#8217;t disappear; it becomes more strategic. 
The questions that require human judgment shift from &#8220;is this alert legitimate?&#8221; to &#8220;how would we judge this particular type of alert as risky?&#8221; Coverage decisions, compliance requirements, and threat modeling for your specific environment are areas where human expertise sets the direction that agents then execute against.</p><p>In this post, we&#8217;ll review the evolution from human-led to AI-led alerting in the SOC: what the old world looked like, what concrete changes the new one brings, and what practical steps teams can take to build detections for an agentic SOC.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><h2>The Old World: Detections Designed for Human Throughput</h2><p>To understand what changes, let&#8217;s review how the current detection lifecycle works.</p><p>A detection engineer writes a rule: some combination of log conditions, thresholds, and field matching that fires when a specific pattern appears in the data. The rule generates an alert with a title, severity level, description, and, ideally, a runbook outlining next steps. 
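</p><p>Concretely, a rule in this old-world shape, written detection-as-code style, might look like the sketch below. The event schema and field names are illustrative, loosely following the common Python pattern of a boolean rule function plus human-facing metadata; they are not any specific platform&#8217;s API:</p>

```python
# Old-world detection: rule logic plus the fields designed for a
# human reader (title, severity, runbook). Schema is hypothetical.
SEVERITY = "HIGH"
RUNBOOK = (
    "1. Check whether the asset is managed in the inventory. "
    "2. Review the user's authentication history for the past 7 days. "
    "3. Cross-reference the source IP against threat intel feeds. "
    "4. Escalate if anything deviates from the user's normal pattern."
)

def rule(event: dict) -> bool:
    """Fire when a console login succeeds without MFA."""
    return (
        event.get("event_name") == "ConsoleLogin"
        and event.get("response") == "Success"
        and event.get("mfa_used") is False
    )

def title(event: dict) -> str:
    """Human-readable alert title, built from the event."""
    return f"Console login without MFA for {event.get('user', 'unknown user')}"
```

<p>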
That alert lands in a queue (a SIEM console, a ticketing system, a Slack channel) and an analyst picks it up.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!coPv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!coPv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png 424w, https://substackcdn.com/image/fetch/$s_!coPv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png 848w, https://substackcdn.com/image/fetch/$s_!coPv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png 1272w, https://substackcdn.com/image/fetch/$s_!coPv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!coPv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png" width="1456" height="772" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:772,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:284928,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/190319737?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!coPv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png 424w, https://substackcdn.com/image/fetch/$s_!coPv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png 848w, https://substackcdn.com/image/fetch/$s_!coPv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png 1272w, https://substackcdn.com/image/fetch/$s_!coPv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc2aa716-cf8d-4c5b-9dda-0e95a8f9e311_2308x1224.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>What happens next is a predominantly human process.</p><p>The analyst reads the alert title and forms an initial hypothesis. They read the runbook, which walks through a set of investigation steps: check whether the asset is managed, look up the user&#8217;s recent authentication history, cross-reference against known-bad IPs, and verify whether this behavior has been seen before for this user or peer group. Each step requires navigating to a different tab or running a different query. The analyst mentally assembles the context from those data points, reaches a judgment (risky or benign), and either escalates or closes.</p><p>SOAR platforms were built to accelerate this by automating the deterministic parts. If the IP is in a threat intel feed, automatically enrich it. If the asset is unmanaged, automatically change the severity. 
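</p><p><em>Those deterministic SOAR steps are straightforward to express in code; the judgment call at the end is not. A hedged sketch &#8212; the threat intel feed, asset inventory, and field names are hypothetical stand-ins:</em></p>

```python
# Sketch of SOAR-style deterministic enrichment: everything here is mechanical.
# The threat intel feed, asset inventory, and field names are hypothetical.
THREAT_INTEL_IPS = {"203.0.113.7"}           # known-bad IPs (TEST-NET example range)
MANAGED_ASSETS = {"laptop-042", "srv-db-1"}  # devices enrolled in management

def enrich_alert(alert: dict) -> dict:
    """Apply deterministic playbook steps; leave the verdict to a human (or agent)."""
    enriched = dict(alert)
    # Step 1: if the IP is in a threat intel feed, attach that context.
    enriched["ip_known_bad"] = alert.get("source_ip") in THREAT_INTEL_IPS
    # Step 2: if the asset is unmanaged, raise the severity.
    if alert.get("asset") not in MANAGED_ASSETS:
        enriched["severity"] = "HIGH"
    # The reasoning step ("what does this evidence mean?") is intentionally absent.
    return enriched
```

<p>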
But the reasoning step, the moment where someone had to look at the assembled evidence and decide what it meant, always stayed with the human.</p><p>The whole system is optimized around that constraint. Runbooks are written to guide human judgment. Severity labels are calibrated to manage human attention. Alert descriptions are written to help a person quickly orient themselves. Even tuning decisions are fundamentally about protecting analyst time, because every suppressed alert reclaims capacity in a finite human queue. The detection lifecycle, from rule authorship to triage to tuning, is a system designed to move signals through human cognition as efficiently as possible.</p><p>That&#8217;s not a criticism. Given the constraints, the design is reasonable. But it means almost every convention we have around what a &#8220;good detection&#8221; looks like was shaped by human cognitive limits rather than by what&#8217;s actually optimal for identifying threats.</p><h2>What Changes When the Reader Is an Agent</h2><p>When an AI agent handles tier 1 alert triage, the reader of your detection changes entirely, reshaping the requirements and design for a good detection.</p><p>The most important shift is that the detection&#8217;s metadata, which existed to orient a human investigator, now serves a fundamentally different purpose. </p><p>A human analyst reads a description and fills in context from experience. They read a runbook and apply judgment at every step. An agent doesn&#8217;t do either of those things gracefully. What it does well is reason over <em>explicit</em> context: a clear statement of what this behavior means, what makes an instance of it risky versus routine, and what investigative goals to pursue. 
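</p><p><em>One way to picture the difference is to put the two kinds of metadata side by side. A hypothetical sketch &#8212; the dictionary shape, key names, and wording are invented for illustration, not a product schema:</em></p>

```python
# Illustrative contrast between human-oriented and agent-oriented detection metadata.
# Key names and wording are hypothetical, not a product schema.
HUMAN_METADATA = {
    # The analyst fills in the rest from experience.
    "description": "CloudTrail logging was stopped.",
    "runbook": "Verify with the account owner.",
}

AGENT_CONTEXT = {
    # What the behavior means: a statement of the threat model.
    "threat_model": "Attackers call StopLogging to blind auditing after gaining access.",
    # What routine looks like, so the agent can tell risky from benign.
    "benign_patterns": [
        "Known IaC role with a matching deployment pipeline run at the same time",
        "Test trails modified during scheduled maintenance windows",
    ],
    # Goals to pursue, not steps to execute.
    "investigative_goals": [
        "Establish whether the actor's session shows other defense-evasion activity",
        "Determine whether the trail covers production accounts",
    ],
}

def context_is_explicit(metadata: dict) -> bool:
    """Crude check: does this metadata state a threat model and goals outright?"""
    return "threat_model" in metadata and "investigative_goals" in metadata
```

<p>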
The gap between the two models &#8212; implicit knowledge versus explicit context &#8212; is where most traditional detections will fall short for agents.</p><p><strong>The evolution is that detections stop being purely logical and become a combination of rule-based logic and an investigative prompt.</strong> The descriptions become a statement of the threat model: what adversary behavior this rule is designed to surface, why it matters, and what legitimate activity looks like in comparison. The runbook becomes agent instructions: the risk criteria the agent should reason over and the investigative goals it should work toward.</p><p>Here&#8217;s what that looks like in practice, using a real detection for <code>CloudTrail StopLogging</code>, a common technique for evading detection after gaining access to an AWS environment.</p><p>The traditional description and runbook are written for a human reader:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XMCE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XMCE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png 424w, https://substackcdn.com/image/fetch/$s_!XMCE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png 848w, 
https://substackcdn.com/image/fetch/$s_!XMCE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png 1272w, https://substackcdn.com/image/fetch/$s_!XMCE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XMCE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png" width="1456" height="512" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:512,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:257887,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/190319737?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XMCE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png 424w, 
https://substackcdn.com/image/fetch/$s_!XMCE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png 848w, https://substackcdn.com/image/fetch/$s_!XMCE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png 1272w, https://substackcdn.com/image/fetch/$s_!XMCE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91e2b988-a93b-4886-8328-4f4a36a1e771_3142x1104.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The AI-native version provides a threat model and decision framework rather than a procedure. While this description and runbook are longer, agents can write AI-optimized descriptions and runbooks tailored to your SOC&#8217;s operational practices, saving time across the entire lifecycle:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OmkZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OmkZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png 424w, https://substackcdn.com/image/fetch/$s_!OmkZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png 848w, https://substackcdn.com/image/fetch/$s_!OmkZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png 1272w, https://substackcdn.com/image/fetch/$s_!OmkZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OmkZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png" 
width="1456" height="1188" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1188,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:751032,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/190319737?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OmkZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png 424w, https://substackcdn.com/image/fetch/$s_!OmkZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png 848w, https://substackcdn.com/image/fetch/$s_!OmkZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png 1272w, https://substackcdn.com/image/fetch/$s_!OmkZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F997c9842-0600-4f3b-9a06-39980174eb83_2999x2448.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The detection logic is identical. What changed is how the agent is guided to work once the alert fires.</p><p><strong>We actually ran this exact experiment.</strong> These two versions of this detection were deployed simultaneously on a live environment, both firing on the same event: a <code>StopLogging</code> call on a test trail using Stratus Red Team, performed via an admin from the same IP. Both agents ran the same SQL queries against the same CloudTrail data. 
The only variables were the runbook and description.</p><p>The difference in analysis came down to a single benign indicator that only the AI-native agent surfaced: the detection rule itself had been modified 10 minutes before the <code>StopLogging</code> action, and a detection update occurred at almost the same timestamp, a strong signal that this was active testing rather than an attacker disabling important audit trails.</p><p>That CI/CD signal appeared in both agents&#8217; query results. The agent with the step-by-step runbook didn&#8217;t recognize it as relevant because its runbook said: &#8220;check for a change management ticket or deployment pipeline run.&#8221; The agent searched for a formal ticket but didn&#8217;t find one and moved on. The AI-native runbook&#8217;s explicit criterion &#8212; a known IaC role with a matching deployment pipeline run at the same time &#8212; primed the agent to scan for CI/CD activity, which is how it discovered the detection upload and weighted it as a meaningful, benign indicator.</p><p><strong>The runbook didn&#8217;t change the agent&#8217;s investigation. It changed what the agent recognized as meaningful and how it ultimately scored the alert&#8217;s risk level.</strong></p><h2><strong>Give Agents Goals, Not Scripts</strong></h2><p>The traditional runbook&#8217;s numbered steps are written for a human who reads &#8220;check for a deployment pipeline run&#8221; and intuitively knows to look at CI/CD history, recent commits, Slack messages, and deployment logs. An agent reading the same instruction performs a more literal interpretation: it looks for what the instruction most directly implies, finds nothing, and moves on. <strong>The AI-native runbook closes that gap by encoding the analyst&#8217;s implicit reasoning explicitly, as structured risk criteria rather than procedural steps.</strong></p><p>Over-specifying investigation steps creates a different problem: it turns a reasoning system into a script executor. 
An agent told exactly what to do in a fixed sequence will do exactly that, even when the situation it encounters doesn&#8217;t fit the template. If the actor ARN immediately matches a known provisioning role at 2 pm, a scripted agent wastes steps. If unusual IP geolocation activity appears at 2 am, a scripted agent may not place much weight on it because it wasn&#8217;t in the procedure. Agents perform better when the runbook specifies what a correct conclusion looks like and leaves room for them to find it.</p><p>That said, specificity is right in some cases. Compliance-driven response procedures are one: if your controls require a specific notification within a defined time window when certain events occur, explicitly tell the agent. Response actions with real-world consequences are another: disabling an account, revoking credentials, isolating a host. Those deserve prescribed guardrails because the cost of the agent improvising is too high. The principle isn&#8217;t &#8220;never be specific,&#8221; it&#8217;s that specificity should be reserved for the parts of the workflow where deviation genuinely creates risk. For investigation and reasoning, leave room.</p><p>Severity labels are worth addressing briefly here as well. For humans, severity was a prioritization tool (&#8220;high&#8221; meant &#8220;interrupt what you&#8217;re doing&#8221;). For agents, severity is an input to routing and escalation logic. That distinction changes how it should be assigned: less about managing analyst attention and more about accurately encoding the risk profile of the detected behavior so the agent routes correctly. A medium-severity detection that should almost always resolve as benign is different from a medium-severity detection that&#8217;s genuinely ambiguous. 
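</p><p><em>The routing role of severity can be made concrete. A minimal sketch, assuming a three-way routing policy &#8212; the route names and decision table are invented for illustration:</em></p>

```python
# Hypothetical routing logic: severity plus expected ambiguity decide where an
# alert goes. Route names and the decision table are illustrative, not a product feature.
def route_alert(severity: str, usually_benign: bool) -> str:
    """Map a detection's risk profile to a handling route."""
    if severity in ("CRITICAL", "HIGH"):
        # High-risk behaviors always get agent investigation with human review.
        return "agent_investigate_then_human_review"
    if severity == "MEDIUM" and usually_benign:
        # Medium severity that almost always resolves benign: the agent can
        # close autonomously, leaving an audit trail.
        return "agent_autonomous_with_audit"
    if severity == "MEDIUM":
        # Genuinely ambiguous medium severity: agent investigates, human decides.
        return "agent_investigate_then_human_review"
    return "agent_autonomous_with_audit"
```

<p>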
Encoding that distinction in severity, or in the description, gives the agent better inputs for its routing decision.</p><h2>The Tuning Problem Inverted</h2><p>One of the most counterintuitive aspects of the agentic SOC is what happens to the noise problem.</p><p>In the human-led model, alert fatigue was the central operational challenge. Teams spent enormous energy tuning detections to reduce noise rates because every alert was burning analyst time they couldn&#8217;t get back. The economics were simple: analyst hours are finite, and every wasted alert is a tax on a resource you can&#8217;t scale cheaply.</p><p>AI agents change those economics in one important way and leave them unchanged in another. Agents don&#8217;t get fatigued by volume. They can process alert queues at a scale no human team could match, so the raw-volume problem that drove so many tuning decisions largely goes away. You no longer need to suppress a class of alerts because your analysts can&#8217;t get to them.</p><p>But noise still matters.</p><p>Bad data (low-quality signals, alerts that fire on activity the agent has no meaningful way to distinguish from benign behavior, detections without sufficient context for the agent to reason about) creates two new problems. </p><p>First, it&#8217;s expensive. Every alert an agent processes consumes tokens, and every tool call it makes to investigate a low-signal alert, pulling enrichment, querying logs, assembling context, compounds that cost. At scale, a noisy detection program isn&#8217;t just a quality problem; it&#8217;s a budget problem. </p><p>Second, it pollutes reasoning. An agent that processes large volumes of low-signal alerts doesn&#8217;t burn out; it builds up a pattern of confident-but-wrong conclusions that is hard to audit and even harder to trust. Alert fatigue in the human model degraded analyst morale and response time. 
The agentic equivalent degrades decision quality in ways that are less visible and harder to catch.</p><p>The tuning imperative shifts from protecting analyst attention to protecting agent reasoning quality. The goal of tuning isn&#8217;t really to reduce alert volume in aggregate anymore; it&#8217;s to ensure that what fires is something the agent can make a meaningful decision about, either because the signal is clean enough to act on autonomously or because it&#8217;s genuinely ambiguous enough to warrant routing to a human.</p><p>In practice, this creates a new kind of tuning question: not just &#8220;is this alert noise?&#8221; but &#8220;does this alert give the agent what it needs to make a good call?&#8221; A detection that produces a technically accurate signal but lacks the enrichment and context for confident agent reasoning is effectively noisy in the agentic model, even if it would have been workable for a human analyst who could go gather that context manually. The threshold for what constitutes a well-formed detection gets higher.</p><h2>Where Humans Still Own the Work</h2><p>None of this means analysts are now obsolete. It means the nature of the work shifts, and understanding where that shift lands is important for teams thinking about how to structure their security operations going forward.</p><p>The work that remains deeply human is the strategic layer: deciding what to monitor in the first place. Threat modeling (mapping the specific assets, workflows, and data flows that matter for your organization and identifying the attack paths that could reach them) requires organizational context that agents don&#8217;t inherently have. Compliance requirements are similar: understanding which controls need to be verifiable, what your audit obligations are, and how to translate those into detection coverage requires judgment about business risk, not just pattern matching over log data. 
We&#8217;ve explored this in prior posts on threat modeling and detection coverage strategy, and the same principle applies here. Agents are very good at executing against a well-defined coverage map; they&#8217;re not yet good at defining what that map should look like in the first place.</p><p>Novel threat recognition is another area where humans hold an edge that matters. Agents reason well over patterns they&#8217;ve been given context for (known TTPs, behaviors encoded in detection logic, threat models that have been explicitly documented). What they struggle with is recognizing that something new and previously unmodeled is happening. Experienced analysts develop intuition for when a pattern doesn&#8217;t match anything they&#8217;ve seen before, which is a fundamentally different skill from applying existing knowledge. That intuition still needs to be in the loop.</p><p>Response decisions with real-world consequences belong here as well. When an investigation concludes that an account has been compromised, the question of whether to disable it, notify the user, involve legal, or escalate to executive leadership involves organizational, legal, and relationship considerations that agents aren&#8217;t equipped to navigate. Agents can surface findings and recommend actions, but the accountability for consequential decisions should stay with people.</p><p>And finally, humans need to maintain a critical eye on agent reasoning itself. Agents can be wrong in confident, systematic ways that are harder to catch than the more obvious mistakes a tired analyst might make. Someone needs to review the agents&#8217; decisions, identify patterns in cases they&#8217;re getting wrong, and update the detection logic and context when agent reasoning consistently goes off track. 
Oversight of the agents becomes a core analyst function.</p><h2>Building Toward This Transition</h2><p>Teams that want to take advantage of what agentic triage makes possible need to carefully consider what must be true in their environment for it to work. The shift from human-led to agent-led triage isn&#8217;t just an architectural change; it requires rethinking the context agents receive so that triage can be delegated at the quality expected of senior team members.</p><p>The most important investment is in the data layer. Agents reason over what&#8217;s available at alert time. If your detections fire against raw, un-enriched log data with inconsistent field naming and no behavioral context attached, agents will make poor decisions regardless of how capable the underlying model is. Normalized schemas, enriched asset context (is this device managed? is this account a service account?), and behavioral baselines (is this unusual for this user or peer group?) should be present in the data that feeds the agent. <strong>This is one of the core reasons the security data lake architecture matters: it provides the queryable, structured foundation that enables agentic reasoning at scale.</strong></p><p>The second investment is in detection quality as a prompt engineering problem. Start treating your detection descriptions and runbooks as context for a reasoning system, not instructions for a human reader. What does the agent need to know about why this behavior is suspicious? What context should it gather? What criteria distinguish risky from benign in this specific case? The detection engineering discipline doesn&#8217;t go away; it gets more rigorous because the output of that work now needs to be interpretable by an AI system, not just a person.</p><p>The third investment is organizational. 
The shift toward agentic triage creates room to reallocate analyst time toward detection coverage, threat modeling, and oversight, but that reallocation doesn&#8217;t happen automatically. Teams that get the most out of this transition will be deliberate about it, giving analysts explicit ownership of the strategic coverage questions while building the feedback loops that let agent performance improve over time.</p><p>The goal is a flywheel: better threat modeling yields better detections, which give agents better context, which in turn surfaces better signal, and that signal informs the next round of threat modeling. Getting that flywheel moving is the real work of building an agentic SOC and closing the loop.</p><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/ai-agent-alert-triage-detection-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Detection at Scale! If you enjoyed reading, please share this post.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/ai-agent-alert-triage-detection-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/p/ai-agent-alert-triage-detection-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div><hr></div><p><em>The experiment described above was run live on a real Panther deployment, using Panther both as the detection system and as the agentic triage engine evaluating the alerts. 
If you&#8217;re building toward the agentic SOC vision described here, <a href="https://panther.com/">that&#8217;s exactly what we&#8217;ve built with Panther.</a> Reach out or reply to this email, and I&#8217;ll give you a demo.</em></p><div class="directMessage button" data-attrs="{&quot;userId&quot;:85379436,&quot;userName&quot;:&quot;Jack Naglieri&quot;,&quot;canDm&quot;:null,&quot;dmUpgradeOptions&quot;:null,&quot;isEditorNode&quot;:true}" data-component-name="DirectMessageToDOM"></div><h3>Recent Posts</h3><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;70cb57f2-2c4c-4d95-a3a5-2274127df2d1&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes modern SOCs work. I&#8217;m Jack, founder &amp; CTO at Panther. If you find this valuable, please share it with your team!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Threat Hunting with Claude Code and MCP&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and 
response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-20T14:03:26.553Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db1dce9a-e23a-4654-ba11-dc12c4c7239b_4608x2592.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ai-threat-hunting-mcp-workflows&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:185101296,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:10,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!V41S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b5f543-9751-4f42-99a8-9354836383e6_1080x1080.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;0a1ffc8f-cbde-45d1-bf7d-708359e6b214&quot;,&quot;caption&quot;:&quot;In the latest episode of Detection at Scale, I sat down with Michael Sinno, Director of Detection and Response at Google. With 20 years at Google, starting as a Windows sysadmin in 2006, Michael&#8217;s security journey began during Operation Aurora and has evolved through Google&#8217;s transformation from 10,000 to 200,000 employees. 
His experience building and s&#8230;&quot;,&quot;cta&quot;:&quot;Listen now&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;D@S #76 - Google's Detection Director: 99% of Our Million Annual Tickets Never Reach a Human&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-02-26T19:03:28.378Z&quot;,&quot;cover_image&quot;:&quot;https://substack-video.s3.amazonaws.com/video_upload/post/189009016/672628bd-221a-47da-956d-342301c1419b/transcoded-1771935242.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ep-76-google-michael-sinno-autonomous-soc-gemini-agents&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:189009016,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:1,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!V41S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b5f543-9751-4f42-99a8-9354836383e6_1080x1080.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p><em>Cover photo by <a href="https://unsplash.com/@eddrobertson?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Ed Robertson</a> on <a 
href="https://unsplash.com/photos/assorted-title-book-lot-eeSdJfLfx1A?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p>]]></content:encoded></item><item><title><![CDATA[D@S #76 - Google's Detection Director: 99% of Our Million Annual Tickets Never Reach a Human]]></title><description><![CDATA[Fine-tuned agents on Gemini, achieving 95% precision in ticket deduplication, and why speed matters more than ever in the era of AI attackers.]]></description><link>https://www.detectionatscale.com/p/ep-76-google-michael-sinno-autonomous-soc-gemini-agents</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ep-76-google-michael-sinno-autonomous-soc-gemini-agents</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Thu, 26 Feb 2026 19:03:28 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/189009016/c1db7915c598a3386f7fcea39f147588.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In the latest episode of Detection at Scale, I sat down with Michael Sinno, Director of Detection and Response at Google. With 20 years at Google, starting as a Windows sysadmin in 2006, Michael&#8217;s security journey began during Operation Aurora and has evolved through Google&#8217;s transformation from 10,000 to 200,000 employees. His experience building and scaling detection systems that process 7 trillion log lines daily while automating 99%+ of a million annual tickets positions him to discuss the intersection of extreme-scale security operations, AI integration, and the future of autonomous detection and response.</p><p>Our conversation explores Google&#8217;s methodical approach to AI adoption, starting with incident summaries and progressing to fine-tuned agents for specific detection workflows. 
Michael discusses the critical distinction between when to use AI versus traditional automation, Google&#8217;s &#8220;infer and interrupt&#8221; model for faster containment, and why the team&#8217;s stretch goal is 70% automated operations. His emphasis on golden datasets for training, human-in-the-loop validation even at scale, and the shift from tool expertise to domain expertise provides concrete guidance for security leaders navigating the march to autonomous SOC operations while maintaining precision.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/ep-76-google-michael-sinno-autonomous-soc-gemini-agents?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/p/ep-76-google-michael-sinno-autonomous-soc-gemini-agents?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h4>Topics Covered</h4><ul><li><p><strong>Processing 7 Trillion Log Lines with 99%+ Automation</strong>: How Google&#8217;s detection and response team handles a million tickets annually with less than 1% requiring human intervention, built on 15 years of detection-as-code fundamentals and automation before AI.</p></li><li><p><strong>The AI Adoption Journey from Assisted to Autonomous</strong>: Google&#8217;s progression from AI-assisted incident summaries (reducing 30 minutes to 90 seconds) to AI-led deduplication agents to autonomous workflows, while maintaining conservative precision requirements given their no-fail mission.</p></li><li><p><strong>Fine-Tuned Agents on Gemini with Golden Datasets</strong>: Why Google uses fine-tuned models validated by humans for specific agents like exfiltration detection, rather than relying solely on prompting, with golden datasets ensuring high-quality training 
data.</p></li><li><p><strong>The Critical Distinction: AI vs Traditional Automation</strong>: How Google&#8217;s lead engineer established that not everything needs AI&#8212;things requiring judgment, nuance, and data analysis benefit from AI, while deterministic &#8220;if A then B&#8221; workflows should remain traditional automation.</p></li><li><p><strong>Deduplication Agent Achieving 95% Precision</strong>: Google&#8217;s ticket deduplication agent operates at 95% precision with 38% recall, with humans still in the loop but not on every ticket, demonstrating the precision-recall tradeoff in production AI systems.</p></li><li><p><strong>Vulnerability Workflow Automation</strong>: How AI collects daily vulnerability data from trusted sources, pulls metadata, evaluates infrastructure impact, writes reports, and recommends actions&#8212;reducing hours of work to minutes while asking &#8220;is Google infrastructure affected&#8221; with high confidence.</p></li><li><p><strong>Overseer Agents for Quality Control</strong>: Google deploys agents that evaluate other agents&#8217; outputs in aggregate and agents that assess ticket quality based on documentation criteria, kicking incomplete work back to analysts&#8212;AI evaluating both AI and human work.</p></li><li><p><strong>The Infer and Interrupt Model</strong>: Google&#8217;s security-wide shift toward detecting suspicious behavior early and automatically containing it (cutting email, locking accounts) rather than spinning up full investigations&#8212;necessary because AI attackers don&#8217;t sleep and move faster.</p></li><li><p><strong>TimeSketch Integration with SecGemini</strong>: How Google achieved 50x speed improvements in forensic timeline analysis by integrating Gemini with TimeSketch, inventing new chunking methods, and pulling out events no human would catch without knowing exactly what to look for.</p></li><li><p><strong>The Future: Broader Detections with Specific Intel Layers</strong>: Michael predicts 
70% of detections becoming broader &#8220;this looks odd&#8221; signals that trigger challenges or containment, layered with shorter-lived specific detections based on current threat intelligence for rapid pivoting.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p>The transformation Michael describes aligns with Panther's approach to AI integration, using agents to scale judgment and pattern recognition, while maintaining deterministic logic for routine workflows. Security teams can focus on the critical thinking and domain expertise that Michael emphasized as irreplaceable. <a href="https://panther.com/product/panther-ai">Learn more about Panther AI</a> and how our AI SOC platform helps scale human judgment rather than replace it!</p>]]></content:encoded></item><item><title><![CDATA[D@S #75 - The Bigger Risk Is Refusing to Adopt AI Agents At All]]></title><description><![CDATA[How Block built its AI-first security operations, democratized detection engineering across the company, and achieved breakthrough efficacy in automated triage.]]></description><link>https://www.detectionatscale.com/p/ep-74-block-james-nettesheim-block-goose-mcp-ai-detection-engineering</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ep-74-block-james-nettesheim-block-goose-mcp-ai-detection-engineering</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Wed, 11 Feb 2026 13:49:06 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/187399108/c2257f402987a3102f97f889ab1c26ca.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In the latest episode of Detection at Scale, I sat down with James Nettesheim, 
CISO at Block. James&#8217; career spans the U.S. government, including various overseas deployments; a master&#8217;s degree in computer security; computer forensics work at the United Nations; leading high-profile incident response at Mandiant; and running incident response worldwide at Google before joining Block. His background in detection, response, and forensics, combined with his experience securing large-scale technology organizations, positions him to discuss the intersection of agentic AI, security operations, and open source principles.</p><p>Our conversation explores Block&#8217;s journey building <a href="https://block.github.io/goose/">Goose</a>, a general-purpose AI agent used across the company, and co-designing the <a href="https://modelcontextprotocol.io/docs/getting-started/intro">Model Context Protocol</a> with Anthropic. James discusses Block&#8217;s &#8220;democratizing detections&#8221; principle, where 40% of all new detections in 2025 were created with AI, and how the company balances principled risk-taking with security rigor through data safety levels and AI security principles. 
His emphasis on human accountability for agent actions, the development of Binary Intelligent Triage, which achieves 99.9% efficacy, and Block&#8217;s commitment to open source provide concrete guidance for security leaders navigating AI adoption while maintaining high security standards.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/ep-74-block-james-nettesheim-block-goose-mcp-ai-detection-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/p/ep-74-block-james-nettesheim-block-goose-mcp-ai-detection-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h4>Topics Covered</h4><ul><li><p><strong>Building Goose as a General-Purpose Agent</strong>: How Block developed Goose as an open source agent for the entire company to perform analysis, deep research, and automate common tasks with recipes, eventually contributing it to the Agentic AI Foundation under the Linux Foundation.</p></li><li><p><strong>Co-Designing MCP with Anthropic</strong>: Block&#8217;s partnership with Anthropic to develop the Model Context Protocol alongside Goose, creating a reference implementation platform that unlocked automation across the company by connecting to numerous systems through MCP servers.</p></li><li><p><strong>Prompt Injection Defenses in Goose</strong>: Block&#8217;s research into hardening Goose against hidden prompt injection attacks, implementing both deterministic detection and adversarial AI concepts where one LLM reviews commands and context provided to another LLM as a judge.</p></li><li><p><strong>AI Security Principles and Data Safety Levels</strong>: How Block evolved its CDC-inspired data safety levels into AI security tiers, creating an 
accelerated review path that balances speed with security based on what data agents process and what actions they can take.</p></li><li><p><strong>Democratizing Detection Engineering</strong>: Block&#8217;s principle that anyone at the company can write detections using natural language with Goose and MCP-Panther, leading to 40% of new detections in 2025 being created with AI assistance, including contributions from teams outside security, like Bitcoin product engineers.</p></li><li><p><strong>Binary Intelligent Triage Achieving 99.9% Efficacy</strong>: How Block&#8217;s system stores historical detections, alerts, and investigations in a vector database to perform semantic analysis on new alerts, achieving near-perfect efficacy and enabling confidence in automated analysis actions.</p></li><li><p><strong>Human Accountability for Agent Actions</strong>: Why Block requires agents to be connected to internal identity so code appears as written by the human operator, maintaining responsibility and avoiding &#8220;the agent just wrote that&#8221; scenarios, with humans still required to review PRs.</p></li><li><p><strong>Headless Goose for Autonomous Workflows</strong>: Block&#8217;s CLI version of Goose that integrates with frameworks to create PRs, JIRA tickets, and automatically fix vulnerabilities from scanner output, while still requiring human approval before code is pushed.</p></li><li><p><strong>The Future SOC Without Tool Expertise</strong>: James predicts that security professionals won&#8217;t need expertise in specific tools or domain-specific languages, but will work in natural language, while still requiring a deep understanding of complex technical systems and domain expertise to stay ahead of attackers.</p></li><li><p><strong>Open Source as Economic Empowerment</strong>: How Block&#8217;s open-source commitment stems from CEO Jack Dorsey&#8217;s belief in economic empowerment and in providing financial tools for everyone, with its secret sauce being the 
ability to scale and empower people rather than closed-source software.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>The conversation with James mentions Panther's partnership with Block on <a href="https://github.com/panther-labs/mcp-panther">MCP-Panther</a>, which enables natural language detection, alert triage, and investigation while maintaining code-based rigor and a human-in-the-loop approach. By democratizing detection creation through AI agents while preserving accountability and review processes, security teams can scale detection coverage and empower broader organizations to contribute security expertise. <a href="https://panther.com/product/panther-ai">Learn more about Panther AI</a> and MCP integration and how we're building systems that combine accessibility with engineering discipline.</em></p><div><hr></div><h3>Additional Reading</h3><ul><li><p><strong><a href="https://www.theregister.com/2026/01/12/block_ai_agent_goose/">Block CISO: We red-teamed our own AI agent to run an infostealer on an employee laptop</a></strong></p></li><li><p><strong><a href="https://block.github.io/goose/blog/2025/06/02/goose-panther-mcp/">Democratizing Detection Engineering at Block: Taking Flight with Goose and Panther MCP</a></strong></p></li></ul>]]></content:encoded></item><item><title><![CDATA[D@S #74 - Compass' Ryan Glynn on Why LLMs Shouldn't Make Security Decisions — But Should Power Them]]></title><description><![CDATA[Language models for semantic understanding, custom ML models achieving 95% on-call reduction, and why deterministic detection rules still matter in the age of 
AI.]]></description><link>https://www.detectionatscale.com/p/ep-74-compass-ryan-glynn</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ep-74-compass-ryan-glynn</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Tue, 27 Jan 2026 14:06:27 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/185840957/45c3f2a77fd2085f8e8fb891c63104ac.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In the latest episode of Detection at Scale, I sat down with Ryan Glynn, Staff Security Engineer on the Detection Response Team at Compass. Ryan brings hands-on experience building custom machine learning models for security automation, having developed a phishing classification system that reduced on-call burden by 95% while processing 400+ emails daily. His background spans both traditional detection engineering and practical ML implementation, positioning him to discuss the intersection of deterministic security controls and AI-powered analysis.</p><p>Our conversation explores Ryan&#8217;s philosophy on where LLMs excel in security operations&#8212;particularly their strength in semantic understanding and intent classification&#8212;versus where traditional deterministic models remain superior. Ryan&#8217;s practical experience building custom ML models for phishing automation, combined with his evaluation of commercial AI SOC products, provides a grounded perspective on AI adoption. 
His emphasis on explainability, the importance of tuning at the detection layer rather than the analysis layer, and the need for human-in-the-loop validation provides concrete guidance for security teams navigating AI agents while building sustainable automation.</p><h3>Topics Covered</h3><ul><li><p><strong>LLMs for Semantic Understanding Over Decision-Making</strong>: Ryan argues the biggest strength of language models is natural language processing for documentation and intent classification, not making binary malicious/benign determinations, where deterministic models prove more reliable and explainable.</p></li><li><p><strong>Using LLMs as Feature Generators for Deterministic Models</strong>: Rather than having LLMs make security decisions directly, Ryan uses them to generate binary feature flags that analyze email context (tone, product selling, aggression) and feed them into more reliable traditional ML models.</p></li><li><p><strong>The 95% On-Call Reduction Through Custom ML</strong>: Ryan&#8217;s phishing automation model processes 400+ daily reported emails, handling classification (phishing/benign/spam), automated response (quarantine/release), and reducing analyst burden while maintaining high accuracy through company-specific training.</p></li><li><p><strong>Agent SOC Limitations and Hallucinations</strong>: Ryan&#8217;s evaluation of commercial AI SOC products revealed gaps where agents claimed to perform analysis steps they didn&#8217;t actually execute and made false statements such as &#8220;user never authenticated from this IP&#8221; when logs showed otherwise.</p></li><li><p><strong>Tuning at the Detection Layer, Not Analysis Layer</strong>: Why applying AI-powered allow-listing and tuning at the analysis layer across multiple detections is more dangerous than tuning individual detection rules, as blanket AI rules can inadvertently suppress legitimate alerts.</p></li><li><p><strong>SOAR Integration for Contextual Flexibility</strong>: How 
language models can make SOAR workflows less brittle by handling ambiguous cases like determining if a reported email is actually a &#8220;forward of a forward&#8221; versus a legitimate report, routing appropriately for manual or automated triage.</p></li><li><p><strong>The Challenge of Context Management</strong>: The difficulty of documenting business partner relationships, third-party integrations, and legitimate, unusual behaviors in ways that both humans and AI systems can reliably access during incident analysis.</p></li><li><p><strong>Useful vs. Noise Alert Tagging</strong>: Why binary alert classification (useful/noise) with subcategories provides better feedback loops for AI systems than ambiguous &#8220;true positive/false positive&#8221; labels, enabling pattern matching and detection tuning over time.</p></li><li><p><strong>The Importance of Analytical Skills for Detection Engineers</strong>: Ryan emphasizes that detection engineers need data science and analytical tool experience beyond security knowledge, recommending that everyone build at least one decision tree model to understand ML effectiveness and limitations.</p></li><li><p><strong>Prompt Injection Risks and Documentation Poisoning</strong>: How malicious actors can manipulate LLM responses through confluence page ranking or documentation injection, similar to 1990s Google SEO spam, creating attack vectors for autonomous security systems.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>The transformation Ryan describes aligns with Panther's approach to AI&#8212;leveraging language models' semantic understanding strengths for alert analysis and 
investigation, while maintaining deterministic detection logic and human validation. By automating the pattern matching and initial triage that LLMs excel at, security teams can focus on the custom model building, detection tuning, and strategic security decisions that Ryan emphasized as critical for reducing analyst burnout. <a href="https://panther.com/product/panther-ai">Learn more about Panther AI</a> and how we're building systems that combine AI efficiency with detection engineering rigor.</em></p>]]></content:encoded></item><item><title><![CDATA[Threat Hunting with Claude Code and MCP]]></title><description><![CDATA[Validate threats are real before building alerts. AI-assisted hunting finds detection gaps and prioritizes what actually matters to your business.]]></description><link>https://www.detectionatscale.com/p/ai-threat-hunting-mcp-workflows</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ai-threat-hunting-mcp-workflows</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Tue, 20 Jan 2026 14:03:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/db1dce9a-e23a-4654-ba11-dc12c4c7239b_4608x2592.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes modern SOCs work. I&#8217;m Jack, founder &amp; CTO at Panther. If you find this valuable, please share it with your team!</em></p><p>This post continues our series on using AI agents and Model Context Protocol (MCP) servers to build and operationalize threat models for security operations:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;e79ca739-8ea6-4de4-b269-75fd55f03ca8&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes modern SOCs work. 
I&#8217;m Jack, founder &amp; CTO at Panther. If you find this valuable, please share it with your team.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Building Threat Models with MCP and AI Agents&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-05T14:04:15.838Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3bbe933d-48ed-4edd-93e3-4d7ca5e1ba1d_2400x1350.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/threat-modeling-ai-agents-mcp&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:183304270,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:9,&quot;comment_count&quot;:1,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!V41S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b5f543-9751-4f42-99a8-9354836383e6_1080x1080.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Threat hunting has historically been a tedious, manual process: you get an indicator or set of suspicious behaviors from threat intelligence, then spend hours (or days) searching through logs to 
determine if there&#8217;s evidence of activity in your environment. Security analysts manually parse reports for IP addresses, domains, or attack patterns, then craft queries against their security data lake, iterating through different log sources until they either find something or exhaust their patience. It&#8217;s analytically intensive work that requires deep expertise in both attacker tradecraft and your organization&#8217;s specific infrastructure.</p><p>This is exactly the kind of work AI agents should excel at. The SOC analyst&#8217;s job is fundamentally analytical&#8212;synthesizing context from multiple sources, forming hypotheses about adversary behavior, and validating those hypotheses against evidence in your data. <strong>The limiting factors for AI-driven security operations aren&#8217;t model capabilities anymore</strong>; they&#8217;re access to the right data and tools (can the agent query your identity provider, logs, and HR systems?), human comprehension speed (can your team review findings fast enough?), and organizational alignment (is everyone clear on what threats matter most?). AI agents like Claude Code provide an intelligent harness that combines data access through MCP servers, tool execution, and iterative workflows to compress what used to take days of manual work into hours of guided analysis.</p><p><a href="https://www.detectionatscale.com/p/threat-modeling-ai-agents-mcp">In our previous post</a>, we demonstrated how to build threat models using AI agents and MCP, resulting in a comprehensive, prioritized set of threats. But a threat model sitting in a document doesn&#8217;t improve your security posture until you operationalize it. The right sequence is to prioritize threats with stakeholders, hunt for historical evidence of those threats in your environment, and then formalize successful hunts into detection-as-code rules. 
Every alert is a claim on your team&#8217;s time and attention, so you need to validate that a threat is relevant before committing to long-term monitoring.</p><p>This post will dive into using Claude Code and MCP to hunt for the threats you&#8217;ve already prioritized, gathering evidence that will directly inform which detections to build and how to scope them. We&#8217;ll walk through the stakeholder alignment process that turns threat models into hunt priorities, show concrete examples of AI-assisted hunting workflows, and explain how hunt findings translate into detection logic.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><h2><strong>Stakeholder Alignment on the Threat Models</strong></h2><p>Before diving into threat-hunting workflows, you need organizational alignment on which threats matter most to your business right now. Many security teams either skip stakeholder buy-in entirely, leading to hunts that don&#8217;t align with leadership priorities, or they schedule a three-hour meeting where everyone argues about hypotheticals without clear outcomes. The threat model you&#8217;ve built makes this conversation substantially easier because you&#8217;re not debating what could theoretically happen; you&#8217;re reviewing documented threats with context.</p><p>The stakeholder alignment meeting should include security leadership, detection engineers, threat intelligence analysts, and incident response leads. Essentially, anyone who will either execute the hunts or respond to these specific threats. 
The agenda is to review your identified crown jewels (what systems are existential if compromised versus merely operational), discuss which threat actors are actually relevant to your industry and geography, and align on the top three to five priority threats from your model that warrant immediate hunting. Prior to this meeting, you should carefully validate the AI&#8217;s findings for accuracy.</p><p>The expected outcome is a ranked shortlist of three to five threat scenarios to hunt for, with explicit business justification documented for each. The common pitfall is trying to hunt for everything at once, resulting in 20 different threat scenarios across 10 log sources with no clear hypothesis about what success looks like. <strong>Focused hunts with clear hypotheses win.</strong> For example, if you&#8217;ve aligned on contractor privilege abuse, compromised third-party integrations, and SSH lateral movement from developer workstations, you now have a concrete plan for the technical work ahead. The rest of your threat model doesn&#8217;t disappear; it becomes your backlog for future hunting sprints as you systematically work through priorities.</p><h2><strong>Hunting for Signals with MCP and Claude Code</strong></h2><p>You&#8217;ve aligned on three to five priority threats with your stakeholders. Now comes the technical work: hunting through your security data lake to determine if there&#8217;s historical evidence of these threats in your environment. This is where AI agents with MCP access compress weeks of manual query iteration into hours of guided analysis.</p><p>The workflow is straightforward. Start with a specific threat ID from your model, translate it into testable hypotheses, query your data lake through MCP servers, and iterate based on what you find. 
Throughout this process, the agent will query real log data, review past alerts, examine existing detection rules, and produce structured findings on where and how to create actionable detections.</p><h3><strong>Teaching the Agent to Hunt</strong></h3><p>Claude Code supports a feature called <a href="https://code.claude.com/docs/en/skills">Skills</a>, which are reusable instruction sets that teach the agent how to approach specific types of work.</p><p>A Skill is just a folder containing a <code>SKILL.md</code> file: YAML frontmatter followed by instructions. Claude automatically discovers and loads Skills when their description matches the current task, applying specialized knowledge without you having to repeat the same prompting patterns in every conversation.</p><p>For threat hunting, this builds a repeatable thought process for the agent to orient on the task and understand effective hunting techniques.</p><p>Here&#8217;s how to create one:</p><p><strong>1. Create the skill directory structure:</strong></p><pre><code><code>mkdir -p ~/.claude/skills/threat-hunter # Or in your local directory</code></code></pre><p><strong>2. Create the <code>SKILL.md</code> file</strong> at <code>~/.claude/skills/threat-hunter/SKILL.md</code>:</p><pre><code><code>---
name: threat-hunter
description: Hunt for cyber threats and investigate security incidents. Use when investigating alerts, hunting for IOCs, analyzing suspicious activity, or performing incident response.
---

# Cyber Threat Hunter

You are assisting a security analyst with threat hunting and incident investigation.

## Threat Hunting Methodology

### 1. Hypothesis-Driven Hunting
Start with a hypothesis about attacker behavior:
- "An attacker with initial access would enumerate cloud resources"
- "Compromised credentials would show unusual login patterns"
- "Data exfiltration would involve large outbound transfers"

### 2. The Pivot Loop
Effective hunting follows a continuous pivot pattern:
1. **Start** with a known indicator (IP, user, hash, domain)
2. **Query** for all activity involving that indicator
3. **Identify** new related indicators from the results
4. **Pivot** to those new indicators
5. **Repeat** until you've mapped the full scope

### 3. Correlation Priorities
When investigating, correlate across these dimensions:
- **Time**: What happened before/after the suspicious event?
- **Identity**: What else did this user/service account do?
- **Network**: What other systems communicated with this IP?
- **Host**: What other processes ran on this machine?

## Investigation Patterns

### Alert Triage
1. Review the alert and triggering events
2. Assess: Is this expected behavior for this user/system?
3. Pivot: What else happened in the same timeframe?
4. Scope: Are there similar alerts across other entities?
5. Conclude: True positive, false positive, or needs escalation?

### User Compromise Assessment
1. Establish baseline: What's normal for this user?
2. Check authentication: Unusual locations, times, or devices?
3. Review access: Did they touch sensitive resources?
4. Look for persistence: New MFA devices, API keys, OAuth grants?
5. Check lateral movement: Access to other accounts or systems?

### IOC Hunting
When given an indicator (IP, domain, hash):
1. Search for any historical activity involving the IOC
2. Identify all affected users and systems
3. Determine first and last seen timestamps
4. Map the attack timeline
5. Look for related IOCs to expand the hunt

## Key Questions to Answer

- **Who**: Which identities are involved?
- **What**: What actions were taken?
- **When**: What's the timeline of events?
- **Where**: Which systems/regions/networks?
- **How**: What techniques were used (map to MITRE ATT&amp;CK)?
- **Why**: What was the likely objective?
</code></code></pre><p>Once saved, Claude Code will discover this Skill and apply it whenever you&#8217;re working on threat hunting or incident investigation tasks. <strong>We recommend refining this guidance based on your own preferred best practices, organizational context, and internal hunting methodologies.</strong></p><p>After you create this file, you&#8217;ll need to quit and reopen Claude Code. Then, you can invoke the skill explicitly using <code>/threat-hunter</code> &lt;prompt&gt;.</p><h3>Executing the Hunt for Past Activity</h3><p>With your threat-hunter Skill in place and MCP servers connected to your SIEM, like <a href="https://github.com/panther-labs/mcp-panther">mcp-panther</a>, you&#8217;re ready to start hunting. The prompt structure is straightforward: simply point the agent at your threat model, specify which threat(s) to hunt for, and set guardrails on scope and output:</p><pre><code><code>Using the threat-model.md file, hunt for evidence of threat T-INSIDER-002 (contractor privilege abuse) across the last 90 days. 

Check for:
1. Related existing detection rules and subsequent alerts
2. Authentication patterns from contractor accounts
3. Access to sensitive resources or data stores
4. Privilege escalation attempts or unusual permission changes

For each finding, assess: Is this expected behavior or anomalous? 
Document evidence with timestamps, actors, and affected resources. 
When identifying gaps, be specific and avoid ambiguity.

Store your findings in markdown format in a local hunts/ directory, 
recommendations in a recommendations/ directory, and track progress 
in a tracker.yml file.
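
As an illustrative schema for the tracker (field names here are assumptions; adjust as needed):

```yaml
# hypothetical tracker.yml layout
hunts:
  - id: T-INSIDER-002
    status: in_progress        # pending | in_progress | complete
    time_range_days: 90
    findings: hunts/T-INSIDER-002.md
    recommendations: recommendations/T-INSIDER-002.md
    confidence: null           # set to LOW | MEDIUM | HIGH when concluded
```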
</code></code></pre><p><strong>A few notes on effective hunt prompts:</strong></p><ul><li><p>Start with the CRITICAL or HIGH-severity threats from your model (the ones your stakeholders prioritized).</p></li><li><p>You can optionally hunt entire threat categories (such as supply chain or insider threat) or individual threats, depending on how deep you want to go.</p></li><li><p>Specify the time range explicitly (30, 60, or 90 days) to balance pulling recent activities with the costs of looking back multiple months at once.</p></li><li><p>You should also request external research to ground the hunt in reality: &#8220;Include research on common detection methods and past attacks for contractor privilege abuse.&#8221;</p></li></ul><h3><strong>Interpreting Hunt Results</strong></h3><p>Threat hunting rarely produces simple yes/no answers. The output from a successful hunt is a structured assessment that synthesizes evidence across multiple dimensions, including what activity occurred, the confidence level assigned to the conclusions, and the context gaps that prevented stronger findings.</p><p>The hunt report structure should communicate any uncertainty appropriately while still providing actionable intelligence. A complete hunt produces several outputs:</p><ul><li><p><strong>Executive summary with findings and confidence</strong>: States what you found (or didn&#8217;t find) and assigns a confidence level based on data completeness. When you see &#8220;No evidence of compromise&#8221; paired with MEDIUM confidence, it signals that the hunt found no anomalies, but there are detection gaps that could hide sophisticated activity. The confidence assessment tells you how much weight to give the conclusion.</p></li><li><p><strong>Evidence analysis by activity type</strong>: Breaks findings into logical categories, like authentication patterns, resource access, privilege changes, or workflow modifications. 
Each piece of evidence gets assessed as EXPECTED (clearly legitimate business activity), NEEDS REVIEW (requires stakeholder verification to determine if it&#8217;s authorized), or ANOMALOUS (suspicious enough to escalate immediately).</p></li><li><p><strong>Follow-up items prioritized by severity</strong>: Separates findings that need immediate investigation from those requiring routine verification. When the hunt identifies admin role assignments, secret deletions, or workflow modifications, it&#8217;s flagging them for validation.</p></li><li><p><strong>Detection coverage gaps mapped to attack techniques</strong>: Documents where you lack visibility. If the hunt revealed you&#8217;re collecting CloudTrail but missing GitHub audit logs, or you have authentication events but no privileged access management telemetry, those gaps represent blind spots in your detection posture.</p></li></ul><p>The hunt findings directly inform what happens next. Genuine anomalies or confirmed policy violations become high-priority detection rules, using the hunt queries you&#8217;ve already developed as the starting point for the detection logic. Coverage gaps become your instrumentation backlog, prioritized by the severity of threats you can&#8217;t currently see.</p><p><strong>Not everything discovered during hunting needs to become an alert.</strong> Some detections perform better with scheduled hunting procedures (quarterly supply chain threat hunts, monthly privileged access reviews). 
The question then becomes &#8220;what&#8217;s the right operational cadence for monitoring this threat given its likelihood and triage automation?&#8221;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/ai-threat-hunting-mcp-workflows?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/p/ai-threat-hunting-mcp-workflows?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>The Compounding Value of Agent-Assisted Hunting</strong></h2><p>At the beginning of this post, we outlined the traditional threat hunting workflow: analysts manually parsing intelligence reports, crafting queries, iterating through log sources, and hoping to find evidence before exhausting their patience. The limiting factors weren&#8217;t about whether your team was smart enough to find the threats&#8212;it was whether they had enough time to synthesize context from dozens of sources, enough stamina to iterate through query variations, and enough organizational alignment to prioritize the right hunts in the first place. AI agents fundamentally change this calculus by compressing days of manual correlation work into hours of guided analysis, freeing your team to focus on the analytical work that actually requires human judgment: validating hypotheses, assessing business impact, and deciding what deserves sustained monitoring.</p><p>What we&#8217;ve demonstrated in this post is the full cycle of operationalizing threat intelligence with AI agents and MCP. 
You start with structured threat models that document business context and detection gaps, align stakeholders on priority threats through focused conversations rather than endless debates, and then deploy agents with direct access to your security data lake to hunt for evidence across multiple log sources simultaneously. The agent synthesizes findings and produces structured assessments with appropriate confidence levels and gaps. <strong>This augmentation lets your team operate at a fundamentally different pace</strong>, turning quarterly threat-hunting exercises into weekly sprints where you continuously validate your threat model against real evidence in your environment.</p><p><strong>The productivity gains compound over time because agents help you maintain detection quality, not just create initial rules.</strong> Hunt findings that reveal detection gaps become top priorities for instrumentation. Queries that successfully identify suspicious activity serve as the foundation for new detection-as-code rules. As your threat landscape evolves and new attack techniques emerge, you can re-run hunts against updated intelligence to validate whether your existing detections still provide adequate coverage or whether new blind spots have developed. This is the compounding advantage of AI-first security operations. Your team&#8217;s analytical capacity grows with every hunt rather than resetting to zero each time you need to investigate a new threat. 
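To make that concrete, a hunt query that confirmed, say, off-hours contractor sign-ins can be distilled into a small per-event rule. Here is a hypothetical, simplified sketch in Panther-style Python, where `rule(event)` returning True fires an alert; the event field names and the working-hours window are assumptions, not a real schema:

```python
# Hypothetical detection-as-code rule distilled from a validated hunt
# finding: contractor accounts authenticating outside business hours.
# Event field names (actor_type, actor, hour) are illustrative.

BUSINESS_HOURS = range(8, 19)  # 08:00-18:59, assumed working window

def rule(event):
    """Alert when a contractor account signs in off-hours."""
    if event.get("actor_type") != "contractor":
        return False
    hour = event.get("hour")
    return hour is not None and hour not in BUSINESS_HOURS

def title(event):
    return "Off-hours contractor sign-in: " + event.get("actor", "unknown")
```

The same predicate-per-event shape carries over to most detection-as-code frameworks, with the hunt query you already validated supplying the filter logic.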
<strong>In our next post, we&#8217;ll show how to formalize these hunt findings into production detection-as-code rules&#8212;translating the queries and correlation logic you&#8217;ve validated during hunting into sustainable, version-controlled detections that run continuously across your data lake.</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Detection at Scale!</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h4><strong>Operationalizing AI-Assisted Hunting at Scale</strong></h4><p><em>If you&#8217;re building this workflow yourself with Claude Code and MCP servers, you now have a blueprint for agent-assisted threat hunting that integrates directly with your existing security data infrastructure. But most security teams don&#8217;t have the engineering capacity to maintain custom MCP integrations, develop hunting Skills, and orchestrate these workflows across dozens of analysts. 
<a href="https://panther.com/product/panther-ai">Panther AI brings embedded AI agents directly into your security operations platform</a>, making these capabilities available to your entire team without requiring everyone to become prompt engineers or MCP developers.</em></p><p><em>Panther&#8217;s AI agents have native access to your normalized security data lake and existing detection-as-code rules, enabling them to run ad hoc hunts like the ones we&#8217;ve demonstrated here, suggest alert tuning based on historical false-positive patterns, and recommend detection priorities based on threat model alignment. Instead of every analyst learning how to write the perfect hunting prompt, your team gets a shared intelligence layer that operationalizes these workflows consistently across your SOC.</em></p><p><em>If you&#8217;re ready to move from manual threat hunting to AI-assisted security operations, visit <a href="https://panther.com">panther.com</a> to see how Panther AI accelerates detection engineering and threat hunting workflows for modern security teams.</em></p><div><hr></div><h4>Continued Reading</h4><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;6fe1bcd5-720a-4282-b1ee-9d50b6459b84&quot;,&quot;caption&quot;:&quot;Security operations teams have spent years trying to build the perfect integration layer between their tools and workflows. We've gone from manual API scripts to elaborate SOAR platforms, yet most security analysts still jump between countless tabs and interfaces during investigations. 
While generative AI has reshaped how we interact with data, connecti&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;MCP: Building Your SecOps AI Ecosystem&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-04-02T13:29:59.951Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47e512e4-2aa6-423c-ac34-60dfbabd4460_1408x768.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/mcp-and-security-operations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:160394652,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:25,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!V41S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b5f543-9751-4f42-99a8-9354836383e6_1080x1080.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h4>Recent Podcast Episode</h4><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;1d48585d-12b1-4103-a12a-e355d6ba00c7&quot;,&quot;caption&quot;:&quot;In the latest episode of Detection at Scale, I sat down with Mike Vetri, Director of 
Security Operations at Veeva Systems. With a 10.5-year background in the Air Force working in cyber and intelligence operations, Mike brings a military perspective to cybersecurity. His experience spans both the analytical world of threat intelligence and the operationa&#8230;&quot;,&quot;cta&quot;:&quot;Listen now&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;D@S #73 - Veeva Systems' Mike Vetri on Building Resilient Security Teams in the Age of AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-14T14:07:30.995Z&quot;,&quot;cover_image&quot;:&quot;https://substack-video.s3.amazonaws.com/video_upload/post/184433212/5f9f3dc8-58fd-47a7-85eb-4fc04d82def6/transcoded-1768311377.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ep-73-veeva-mike-vetri-security-leadership-ai-soc-operations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:184433212,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at 
Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!V41S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b5f543-9751-4f42-99a8-9354836383e6_1080x1080.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p><em>Cover Photo by <a href="https://unsplash.com/@lukejonesdesign?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Luke Jones</a> on <a href="https://unsplash.com/photos/a-close-up-of-a-computer-circuit-board-tBvF46kmwBw?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p>]]></content:encoded></item><item><title><![CDATA[D@S #73 - Veeva Systems' Mike Vetri on Building Resilient Security Teams in the Age of AI]]></title><description><![CDATA[Cyber leadership principles, the C3 Matrix for prioritization, and why emotional intelligence drives 20% revenue differences&#8212;plus practical insights on integrating AI into SOC operations.]]></description><link>https://www.detectionatscale.com/p/ep-73-veeva-mike-vetri-security-leadership-ai-soc-operations</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ep-73-veeva-mike-vetri-security-leadership-ai-soc-operations</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Wed, 14 Jan 2026 14:07:30 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/184433212/b98fd6298be1b73a60c9c08cf155e84b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In the latest episode of Detection at Scale, I sat down with Mike Vetri, Director of Security Operations at Veeva Systems. With a 10.5-year background in the Air Force working in cyber and intelligence operations, Mike brings a military perspective to cybersecurity. 
His experience spans both the analytical world of threat intelligence and the operational demands of running a modern SOC, positioning him to discuss the intersection of leadership, technology, and the threat landscape.</p><p>Our conversation explores Mike&#8217;s research into cyber leadership qualities and his framework for prioritizing security efforts, which he calls the C3 Matrix. Mike&#8217;s perspective on the psychological evolution of cyber threats, from clandestine network attacks to AI-powered assaults on human judgment, challenges conventional approaches. His emphasis on emotional intelligence as a critical leadership trait, backed by Harvard Business Review research showing a 20% impact on revenue goals, provides a data-driven framework for building effective security teams. Mike&#8217;s practical experience implementing AI-powered tools in his SOC, combined with his analytical approach to threat operations and deception capabilities, offers concrete guidance for practitioners navigating the transition to AI-enhanced security programs.</p><h4>Topics Covered</h4><ul><li><p><strong>The Essential Qualities of Cyber Leaders</strong>: Mike&#8217;s research across 100 sources revealed that 60% prioritize effective communication and 59% value emotional intelligence in security leaders, with Harvard Business Review data showing these traits correlate to a 20% difference in meeting annual revenue goals.</p></li><li><p><strong>The C3 Matrix for Security Prioritization</strong>: A three-tier framework categorizing assets into <strong>Centers of Gravity</strong> (compromise means cigars time), <strong>Crown Jewels</strong> (requires an SEC 8-K filing but recoverable), and <strong>Capability Enablers</strong> (supports mission but transparent to customers), helping teams focus security controls where they matter most.</p></li><li><p><strong>The Seven Ds of Security</strong>: Beyond the passive &#8220;discover and detect,&#8221; Mike outlines deny, disrupt, degrade, 
destroy, and deceive as active counter-adversary measures, with deception operations providing the most accurate and actionable threat intelligence.</p></li><li><p><strong>Threat Intelligence vs. Threat Operations</strong>: Why every SOC needs a dedicated threat operations team that goes beyond consuming external reports to operationalizing intelligence, conducting deception operations, and providing strategic guidance to leadership&#8212;a fundamentally different skillset from blue team operations.</p></li><li><p><strong>The Psychological Evolution of Threats</strong>: How attacks have progressed from technical viruses to phishing to ransomware, and now to AI-powered attacks targeting human judgment and decision-making, with adversaries openly challenging defenders to distinguish reality from fabrication.</p></li><li><p><strong>AI as Both Force Multiplier and New Attack Vector</strong>: Mike&#8217;s practical experience shows AI dramatically reduces investigation time by aggregating data across tools, but also introduces new risks through prompt injection, AI poisoning attacks like the Minja attack, and the potential for AI-based malware that learns network behavior before striking.</p></li><li><p><strong>The Bloom&#8217;s Taxonomy Limitation</strong>: Why AI currently stops at step four of Bloom&#8217;s educational model&#8212;knowledge, comprehension, application, and analysis&#8212;but struggles with evaluation and creation, meaning human analysts remain essential for validation and critical thinking.</p></li><li><p><strong>Defense in Personnel</strong>: Beyond defense in depth for technology, organizations need multiple people trained on each capability to prevent single points of failure, with cross-functional training programs enabling teams to handle unexpected scenarios.</p></li><li><p><strong>Preventing Analyst Burnout</strong>: How AI tools help reduce the manual effort of pivoting between multiple security tools during investigations, enabling faster 
incident resolution and more sustainable work practices for security teams.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>The transformation Mike describes mirrors Panther's AI-powered capabilities&#8212;automating the time-consuming work of correlating data across multiple sources while maintaining human expertise for strategic decisions and validation. By reducing the manual burden of log analysis and tool-hopping, security teams can focus on the threat modeling, leadership, and cultural aspects that Mike emphasized as critical for long-term success. <a href="https://panther.com/product/panther-ai">Learn more about Panther AI</a> and how we're building tools that amplify human expertise rather than replace it.</em></p>]]></content:encoded></item><item><title><![CDATA[Building Threat Models with MCP and AI Agents]]></title><description><![CDATA[A practitioner's guide to using AI agents and MCP to analyze your environment, map threats to attack paths, and identify detection coverage gaps]]></description><link>https://www.detectionatscale.com/p/threat-modeling-ai-agents-mcp</link><guid isPermaLink="false">https://www.detectionatscale.com/p/threat-modeling-ai-agents-mcp</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Mon, 05 Jan 2026 14:04:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3bbe933d-48ed-4edd-93e3-4d7ca5e1ba1d_2400x1350.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes modern SOCs work. 
I&#8217;m Jack, founder &amp; CTO at Panther. If you find this valuable, please share it with your team.</em></p><p><em>We hope you had a great start to 2026!</em></p><div><hr></div><p>Threat modeling is the process of analyzing systems from an attacker&#8217;s perspective to prioritize defenses, monitoring, and countermeasures. It ensures that detection efforts are high-leverage and focused on preventing the next potential incident rather than reacting to whatever threat is trending this week.</p><p>In 2025, security leaders sent a resounding message: focus on the fundamentals, and that starts with understanding your highest-priority threats to the business. But what does modern threat modeling look like with AI at your disposal?</p><p>Security teams have historically been siloed and relied on peer expertise to understand how the business&#8217;s core systems work and where potential security flaws exist. With AI agents, security teams can gain direct access to context previously locked within engineering, product, and infrastructure teams. This is especially powerful in understanding what you&#8217;re protecting.</p><p>This post kicks off a series on using AI agents and Model Context Protocol (MCP) servers to build and operationalize threat models for security operations. We&#8217;ll use Claude Code and MCP servers to research, analyze, and gather data from your SIEM and organizational context to answer three critical questions: Where should we focus our detection efforts? What are our current blind spots? What should we do about them?</p><p>In this post, we&#8217;ll walk through generating a comprehensive threat model using AI agents. The subsequent post will show you how to hunt for active threats using that model and immediately formalize findings into detection-as-code rules. 
These workflows are generally repeatable across different AI agent tools, so you can adapt them to whatever&#8217;s in your stack.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><h2>Why Threat Modeling Matters for Detection Programs</h2><p>Threat modeling defines your priorities as a security team. It answers the question: What&#8217;s most important to be monitoring for right now? Without this foundation, detection programs drift toward either alert fatigue (monitoring everything, prioritizing nothing) or reactive mode (chasing whatever threat made headlines this week). Both approaches waste the scarcest resources security teams have: <strong>time and attention.</strong></p><p>Strong threat modeling creates a through line from detection to response. Each alert should be treated as a line item in your team&#8217;s time-and-attention budget. If you&#8217;re not confident that an alert maps to a real threat against a critical asset, you&#8217;re wasting that budget on noise. Slava Klimov from CoreWeave described this problem on the <a href="https://www.detectionatscale.com/p/ds-71-slava-klimovs-coreweave-threat-modeling-ai-agents">Detection at Scale podcast #71</a>:</p><blockquote><p><em>Security teams often build detection coverage based on what&#8217;s easy to instrument rather than what&#8217;s actually important to protect. 
The result is high-fidelity alerts on low-value assets and blind spots on crown jewels.</em></p></blockquote><p>The traditional barrier to good threat modeling has been coordination.</p><p>Building a useful model requires input from engineering (what are we running?), product (what&#8217;s business-critical?), infrastructure (where are trust boundaries?), and security (what are the active threats?). AI agents with access to organizational context via MCP servers can now synthesize this information directly, removing the weeks of meetings and documentation review that made threat modeling a once-a-year exercise rather than a continuous practice.</p><h2>The Five Contexts AI Agents Need for Threat Modeling</h2><p>AI agents can only be as good as the context they&#8217;re provided, and threat modeling requires five distinct layers of intelligence that most organizations keep in separate systems. In a previous post on context engineering, we outlined four foundational layers for security operations. For threat modeling specifically, we need to expand this model to distinguish between detection posture (what we <em>can</em> see) and operational history (what we <em>have</em> seen).</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;e05bb507-e4c5-4f51-9527-7b803cc73174&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale, a weekly newsletter for scaling and sustaining security operations teams. We focus on effectively utilizing AI agents in the SOC with the best practices on context, prompts, and tools like MCP. 
If you enjoy reading Detection at Scale, please share it with your friends!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Data Your AI-Powered SOC Needs&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-22T13:16:36.315Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aa8b63ab-6099-4282-9eb9-2dcab0e227ca_1408x768.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/context-engineering-ai-security-operations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:174200522,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:1,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!V41S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b5f543-9751-4f42-99a8-9354836383e6_1080x1080.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Each of the following layers answers a critical question that informs where detection efforts should focus:</p><ol><li><p><strong>Identities and Assets</strong> tell you what exists and who has access to it. 
This layer includes user directories, service accounts, cloud resources, SaaS applications, databases, and APIs. For threat modeling, this context answers &#8220;What are we protecting?&#8221; and &#8220;What are the attack paths between assets?&#8221; An AI agent querying your identity provider and cloud environment can map trust relationships, identify privileged accounts, and surface shadow IT that traditional asset inventories miss.</p></li><li><p><strong>Threat Intelligence</strong> provides the adversary perspective tailored to your industry and technology stack. This includes threat actor profiles actively targeting your sector (financial services faces different adversaries than healthcare) and MITRE ATT&amp;CK techniques commonly used in your vertical. This context answers &#8220;Who attacks organizations like ours?&#8221; and &#8220;What techniques are they using against our sector right now?&#8221;</p></li><li><p><strong>Logs and Detection Coverage</strong> reveal what you can currently see in your environment. This layer includes which log sources are instrumented (Windows Event Logs, CloudTrail, Kubernetes audit logs), data sources collected but not actively monitored, detection rules currently deployed and their MITRE ATT&amp;CK mappings, and coverage gaps where you have no visibility. This context answers &#8220;What can we detect?&#8221; and &#8220;Where are our blind spots?&#8221;, mapping your theoretical detection capabilities against threat scenarios.</p></li><li><p><strong>Alerts and Case History</strong> reveal prior security events in your environment. This layer includes SIEM alerts from the past N days, closed incident tickets and their root causes, false-positive patterns, and mean time to detect across different attack types. 
For threat modeling, this context answers &#8220;What have we seen before?&#8221; and &#8220;What&#8217;s generating noise versus signal?&#8221; This historical perspective prevents threat models from treating all scenarios equally, as your operational reality shows that specific attack paths recur while others remain more hypothetical.</p></li><li><p><strong>Organizational Context</strong> connects technical assets to business criticality. This includes architecture documentation, data classification policies, revenue-impacting systems, compliance requirements, and deployment patterns (on-prem, cloud, hybrid). This context answers &#8220;What matters most to the business?&#8221; and &#8220;What&#8217;s the impact if this system is compromised?&#8221; Understanding which applications handle customer PII or which infrastructure supports your core product can prioritize threats based on business impact rather than technical severity scores.</p></li></ol><p><strong><a href="https://modelcontextprotocol.io/docs/getting-started/intro">Model Context Protocol</a></strong> makes these five layers accessible to AI agents through standardized interfaces. Instead of manually gathering context from a dozen different browser tabs or documents and synthesizing it, you can connect MCP servers to your agent and let it query across all five layers simultaneously. Security teams can choose from <a href="https://github.com/modelcontextprotocol/servers?tab=readme-ov-file">official integrations</a> that either run locally or remotely.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;cf6c958d-90d4-4f5a-9f02-1090ec06c7fe&quot;,&quot;caption&quot;:&quot;Security operations teams have spent years trying to build the perfect integration layer between their tools and workflows. We've gone from manual API scripts to elaborate SOAR platforms, yet most security analysts still jump between countless tabs and interfaces during investigations. 
While generative AI has reshaped how we interact with data, connecti&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;MCP: Building Your SecOps AI Ecosystem&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-04-02T13:29:59.951Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47e512e4-2aa6-423c-ac34-60dfbabd4460_1408x768.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/mcp-and-security-operations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:160394652,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:25,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!V41S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b5f543-9751-4f42-99a8-9354836383e6_1080x1080.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><h2>Building the Threat Model with AI</h2><h3>Connecting your MCPs</h3><p>Before you can generate a useful threat model, your AI agent needs access to the five context layers. 
At minimum, you&#8217;ll want to connect:</p><ul><li><p><strong>SIEM data</strong> (e.g., <a href="https://github.com/panther-labs/mcp-panther">Panther</a>) for log sources, detection rules, alert history, and query capabilities across your security data lake. This MCP server provides both read access to existing detections and the ability to run ad-hoc queries for threat hunting.</p></li><li><p><strong>Ticketing and case management systems</strong> (<a href="https://www.atlassian.com/blog/announcements/remote-mcp-server">Jira</a>, <a href="https://linear.app/docs/mcp">Linear</a>) to surface historical incident patterns and understand what your team has responded to before.</p></li><li><p><strong>Documentation</strong> (<a href="https://developers.notion.com/docs/mcp">Notion</a>, Confluence) where application architecture, runbooks, and data classification policies live. This organizational context prevents the AI from treating all assets equally when some are clearly more critical than others.</p></li><li><p><strong>Operational monitoring</strong> (<a href="https://docs.sentry.io/product/sentry-mcp/">Sentry</a>, <a href="https://docs.newrelic.com/docs/agentic-ai/mcp/overview/">New Relic</a>, <a href="https://awslabs.github.io/mcp/servers/cloudwatch-mcp-server">CloudWatch</a>) to understand which systems are customer-facing, which handle sensitive data, and where trust boundaries exist in your infrastructure.</p></li></ul><p>The specific MCPs you&#8217;ll need depend on your stack, but the point is to give the agent read access to systems that hold context about your environment, threats, and detection posture. <a href="https://code.claude.com/docs/en/mcp">Claude Code supports MCP natively</a>, and other AI agent frameworks are rapidly adding support.</p><h3>Analyzing Your Internal Software</h3><p>Most companies maintain internal software and services that serve as their revenue source. 
This context is crucial for threat modeling as it helps map specific components to attack techniques, identify necessary telemetry, and focus detection logic where it matters most.</p><p>The sample prompt below dissects these applications and pinpoints the most critical context for AI/SOC consumption, focusing on detection and response rather than application security.</p><div><hr></div><p><em>Generate a security threat model for this codebase as a YAML file optimized for AI/SOC consumption. Focus on detection and response, not application security.</em></p><p><em>Structure it with:</em></p><ul><li><p><em>metadata</em></p></li><li><p><em>architecture_components (with IDs, data sources, log verbosity)</em></p></li><li><p><em>data_flows (what&#8217;s visible to defenders, detection points)</em></p></li><li><p><em>trust_boundaries</em></p></li><li><p><em>authentication (what gets logged, session indicators)</em></p></li><li><p><em>sensitive_assets (crown jewels, business impact)</em></p></li><li><p><em>external_integrations (third-party attack surface, credential exposure)</em></p></li><li><p><em>log_sources (what telemetry exists, gaps, retention)</em></p></li><li><p><em>attack_paths (MITRE ATT&amp;CK mapped: initial_access &#8594; persistence &#8594; lateral_movement &#8594; exfiltration, with TTPs and detection opportunities)</em></p></li><li><p><em>threat_scenarios (attacker objectives, kill chain stages, IOCs, forensic artifacts)</em></p></li><li><p><em>detection_coverage (which ATT&amp;CK techniques are detectable, blind spots)</em></p></li><li><p><em>response_playbooks (containment actions, evidence preservation, escalation triggers)</em></p></li><li><p><em>monitoring_recommendations (alerts to build, threat hunting queries, dashboards).</em></p></li></ul><p><em>Use explicit IDs, map to MITRE ATT&amp;CK technique IDs where applicable, and identify visibility gaps.</em></p><div><hr></div><p>Point the agent at your application repository and let it analyze the 
code, infrastructure definitions, and documentation. What you&#8217;ll get back is a structured system architecture that identifies potential areas of interest for an attacker, which will serve as a reference point for the next prompt to produce a prioritized threat model that considers broader organizational data sources.</p><h3><strong>Generating Organizational Threat Models</strong></h3><p>The organizational-level threat model takes a broader view, synthesizing business context, historical incidents, and threat intelligence to produce prioritized recommendations. This is where connecting to your SIEM becomes necessary. The agent needs to query your alert history, detection rules, and log sources to ground the model in operational reality.</p><p>This prompt positions the agent as a senior security architect conducting a comprehensive threat modeling exercise:</p><div><hr></div><p><em>You are a senior security architect conducting a threat modeling exercise for [<strong>COMPANY_NAME</strong>]. Your goal is to produce a prioritized threat model covering current risks, controls, monitoring gaps, and historical activity.</em></p><h3><em>Discovery</em></h3><ol><li><p><em><strong>Business Context</strong>: Research the company&#8217;s industry, geography, technology stack, and what makes it a valuable target.</em></p></li><li><p><em><strong>Assets &amp; Identities</strong>: Query log sources, cloud environments, identity providers, and critical data stores.</em></p></li><li><p><em><strong>Detection Coverage</strong>: Review active detection rules, MITRE ATT&amp;CK coverage, alert volumes, and log sources without rules.</em></p></li><li><p><em><strong>Historical Activity</strong>: Analyze the last 90 days of alerts for patterns, recurring issues, and unresolved items.</em></p></li></ol><h3><em>Interview</em></h3><p><em>Interview me for additional context. 
I&#8217;m the [<strong>TITLE</strong>].</em></p><p><em>If available, request any internal application threat models or architecture documentation to incorporate org-level security findings (authentication flows, sensitive data inventory, third-party integrations, CI/CD pipelines).</em></p><h3><em>Output</em></h3><p><em>Produce a markdown threat model document that includes:</em></p><ul><li><p><em>Executive summary with overall risk level</em></p></li><li><p><em>Critical assets ranked by business impact</em></p></li><li><p><em>Relevant threat actors based on industry/geography</em></p></li><li><p><em>Threat register with unique IDs (e.g., T-001, T-002), likelihood, impact, and mitigation status</em></p></li><li><p><em>Current security controls and monitoring coverage</em></p></li><li><p><em>Detection gaps mapped to ATT&amp;CK where applicable</em></p></li><li><p><em>Prioritized recommendations (immediate, short-term, long-term)</em></p></li></ul><p><em>Assign each identified threat a unique ID for tracking and reference.</em></p><div><hr></div><p>The agent will conduct a back-and-forth interview to clarify organizational specifics. The resulting threat model includes sections like the following:</p><ul><li><p><strong>Business Context</strong> that identifies the company profile, industry positioning, and what makes it a high-value target.</p></li><li><p><strong>Crown Jewels</strong> ranked by existential risk versus operational impact.</p></li><li><p><strong>Threat Actor Analysis</strong> that prioritizes relevant threat actors based on your industry and customer base.</p></li><li><p><strong>Historical Incident Analysis</strong> based on the past 90 days of alerts from your SIEM.</p></li><li><p><strong>Threat Register</strong> with unique tracking IDs for each identified threat. For example:</p></li></ul><pre><code><code>T-INSIDER-002: External contractor abuse of admin privileges
Likelihood: Medium | Impact: High | Risk: HIGH
Mapped to: T1078 (Valid Accounts)
Mitigation: [context-specific recommendations]
</code></code></pre><p>This structured output becomes a living document you can reference when prioritizing detection engineering work, conducting quarterly security reviews, or explaining risk to executives. The unique threat IDs enable tracking over time&#8212;you can note when T-INSIDER-002 gets addressed with new monitoring controls and update its status accordingly.</p><h3>What to Look For in Your Threat Model</h3><p>A useful threat model should tell you three things immediately:</p><ol><li><p>Which assets matter most (crown jewels)</p></li><li><p>Which adversaries are relevant to your organization</p></li><li><p>Where detection gaps are mapped to actual attack techniques</p></li></ol><p><strong>If your threat model lists generic threats without tying them to your specific environment, the agent didn&#8217;t have enough context.</strong> Go back and ensure you&#8217;ve connected MCP servers or provide business context manually by typing it out in a stream of consciousness.</p><p>The most valuable output is the detection gap analysis&#8212;where the model explicitly identifies attack paths you can&#8217;t currently detect because you lack the necessary log sources or detection rules. These gaps become your prioritized backlog for threat hunting and detection engineering.</p><h2>From Threat Model to Detection Priorities</h2><p>The threat model you&#8217;ve just generated is the foundation for your detection engineering roadmap, monthly security reporting, and immediate threat hunting activities. With access to your SIEM, web search, ticketing systems, and internal documentation through MCP servers, AI agents can synthesize context that would typically require weeks of meetings across engineering, infrastructure, and security teams. 
The result is a living threat model with unique threat IDs and explicit detection gaps, aligned to what matters most, rather than a static document that becomes stale the moment it&#8217;s published.</p><p>The natural next question is: are there signs of historical activity we missed? In the next post, we&#8217;ll use Claude Code and <code>mcp-panther</code> to hunt for evidence of these prioritized threats across your security data lake and immediately formalize successful hunts into detection-as-code rules. This workflow, from threat model to hunting queries to production detections, shows how AI agents compress what used to be month-long cycles into actionable security improvements you can deploy the same day.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Detection at Scale! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p><em>Cover Photo by <a href="https://unsplash.com/@a_chosensoul?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">A Chosen Soul</a> on <a href="https://unsplash.com/photos/a-snow-covered-mountain-sitting-on-top-of-a-lake-vLHkdFhL4as?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p>]]></content:encoded></item><item><title><![CDATA[D@S #72 - Trustpilot's Gary Hunter on Structuring Security Knowledge for AI Success ]]></title><description><![CDATA[How to scale security operations by automating alert triage, treating AI agents like interns, and creating space for preventative work that actually moves the needle.]]></description><link>https://www.detectionatscale.com/p/ds-72-gary-hunter-trustpilot-scaling-security-ai-operations</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ds-72-gary-hunter-trustpilot-scaling-security-ai-operations</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Tue, 30 Dec 2025 13:56:59 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/182361728/2c8331e760737c75c16abb8fbd70ab42.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In the latest episode of Detection at Scale, we sat down with Gary Hunter, Head of Security Operations at Trustpilot, to explore how security teams can leverage AI agents to scale their impact. 
Gary brings a unique perspective, having built Trustpilot&#8217;s security operations team from the ground up: starting as one of the first two security hires, and growing the team to ten members across security operations, platform security, and GRC.</p><p>Gary offers a refreshing take on building security programs under constraints at one of the world&#8217;s most recognized trust platforms. His team&#8217;s approach to AI, from automated alert triaging to brand protection, demonstrates how smaller security teams can punch above their weight class. The conversation delves into the cultural challenges of introducing AI, the importance of guardrails, and how to free up security professionals from repetitive work so they can focus on prevention and strategic initiatives.</p><h4><strong>Topics Covered</strong></h4><ul><li><p><strong>Bootstrapping Security at Trustpilot</strong>: Gary&#8217;s journey building Trustpilot&#8217;s security operations from two people to a team of ten, starting with understanding business pain points and working backwards from POCs to fill security gaps.</p></li><li><p><strong>The Alert Capacity Math</strong>: Why understanding your team&#8217;s capacity&#8212;8 hours per day, 15 minutes per alert equals only 32 alerts maximum&#8212;forces strategic decisions about automation and horizontal scaling.</p></li><li><p><strong>AI for Alert Triage and Enrichment</strong>: How Trustpilot uses AI within SOAR workflows to automatically triage alerts, parse JSON, apply logic, and route decisions, including transforming complex security alerts into language end users can understand.</p></li><li><p><strong>Competitive Prompt Testing for AI Adoption</strong>: Gary&#8217;s approach of A/B/C testing three different prompts with the same input during development, measuring outputs, and promoting the winner to production, democratizing AI learning across the team.</p></li><li><p><strong>The Intern Framework for AI Safety</strong>: Treating AI agents like 
interns by asking &#8220;What would you train them to do before giving them tools to lock users, wipe machines, or take down websites?&#8221; Codifying playbooks and implementing infrastructure-as-code for governance.</p></li><li><p><strong>Multimodal AI for Brand Protection</strong>: Using AI to analyze screenshots and HTML of potential brand infringement sites, scoring violations 0-100, and automating responses while maintaining safety checks and keyword filters.</p></li><li><p><strong>Data Governance and Residency Challenges</strong>: The balance between giving AI all the data for training versus careful sanitization, especially under GDPR requirements in the UK/Europe, where data categories in breaches must be explicitly reported.</p></li><li><p><strong>Enterprise Knowledge Management</strong>: Why pointing AI at entire documentation corpora produces confused answers, and the need for curated, well-structured, concise documentation&#8212;learning that less is more for both context and processes.</p></li><li><p><strong>Creating Space for Shift-Left Work</strong>: How automating 20% of alert triage effectively adds a team member&#8217;s capacity back, reducing cognitive load and allowing focus on prevention over response, moving from security theater to impactful work.</p></li><li><p><strong>Building Weatherproof, T-Shaped Teams</strong>: Gary&#8217;s philosophy of creating generalists who aren&#8217;t tied to specific technologies, encouraging experimentation with tools that don&#8217;t scale costs, and maintaining backlogs without creating team burnout.</p><p></p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>The approach Gary 
describes aligns perfectly with Panther&#8217;s AI-powered capabilities&#8212;automatically handling the repetitive alert triage and enrichment work that Gary emphasized as essential for scaling lean security teams, while preserving human oversight for critical decisions. By automating the pattern matching, data correlation, and initial investigation that LLMs excel at, security teams can focus on the preventative work and strategic initiatives that truly reduce risk.  <a href="https://panther.com/product/panther-ai">Learn more about Panther AI </a> and how we&#8217;re building the AI-first SIEM that gives security teams their time back.</em></p><div><hr></div><h4>Recent Posts</h4><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;c3670adb-42d4-4e8a-8c56-439b7490c8f9&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes modern SOCs work.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;2025 Wrapped: Essential Reading on AI in Security Operations&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and 
response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-12-22T13:54:25.110Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b618acaf-463c-4793-998e-a4a5133e06a0_1024x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/2025-wrapped-ai-security-operations-reading&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:181947348,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:2,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;8c42e1d7-2072-4cf0-bbb0-ad2db461d92c&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes it possible. I&#8217;m Jack, founder &amp; CTO at Panther. 
If you find this valuable, please share it with your team.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The State of AI in Security Operations: 5 Patterns That Defined 2025&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-12-10T14:25:46.313Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/21eaab57-2ca1-4160-aea9-d80a22479037_1920x1280.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ai-security-operations-2025-patterns&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:181189996,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:2,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h4><strong>More Episodes</strong></h4><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;ecd356ab-0619-40e1-aa2e-ceed53f284f0&quot;,&quot;caption&quot;:&quot;In the latest episode of Detection at Scale, we sat down with 
Vjaceslavs (Slava) Klimovs, a security leader at CoreWeave responsible for threat modeling, detection, prevention, response, and compliance. With 13 years at Google working on infrastructure security, followed by 18 months at Snapchat and now at CoreWeave, Slava brings a hard-earned perspecti&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;D@S #71 - CoreWeave's Slava Klimovs on Threat-Model-Driven Security and the AI-First Future&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-12-15T14:13:29.911Z&quot;,&quot;cover_image&quot;:&quot;https://substack-video.s3.amazonaws.com/video_upload/post/181457650/c04a758a-efa4-4e2e-aea3-f1f01b6f8f0d/transcoded-1765573124.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ds-71-slava-klimovs-coreweave-threat-modeling-ai-agents&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:181457650,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:2,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at 
Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;630f9dae-d711-45f7-874d-c5112459a253&quot;,&quot;caption&quot;:&quot;In the latest episode of Detection at Scale I had a great conversation with Ken Bowles, Director of Security Operations at GreenSky, to explore how AI is transforming day-to-day security work beyond the hype. With 15 years in security operations spanning healthcare and fintech, Ken brings a grounded perspective on what&#8217;s actually working in production v&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;D@S #70 - GreenSky's Ken Bowles on Protecting Crown Jewels First and AI's Real Role in the SOC&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and 
response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-11-26T09:15:34.467Z&quot;,&quot;cover_image&quot;:&quot;https://substack-video.s3.amazonaws.com/video_upload/post/179917272/e9c18c98-cfb0-4816-a928-d6ccc9d24163/transcoded-1764077211.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ep-68-greenskys-ken-bowles-ai-practical-impact-security-operations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:&quot;e9c18c98-cfb0-4816-a928-d6ccc9d24163&quot;,&quot;id&quot;:179917272,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:2,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[2025 Wrapped: Essential Reading on AI in Security Operations]]></title><description><![CDATA[The posts that shaped the conversation on AI-first security operations, from vendor landscapes to implementation risks]]></description><link>https://www.detectionatscale.com/p/2025-wrapped-ai-security-operations-reading</link><guid isPermaLink="false">https://www.detectionatscale.com/p/2025-wrapped-ai-security-operations-reading</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Mon, 22 Dec 2025 13:54:25 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/b618acaf-463c-4793-998e-a4a5133e06a0_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes modern SOCs work.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p>2025 was the year AI in security operations moved from ambitious predictions to production reality. The conversation shifted from &#8220;will AI work in the SOC?&#8221; to &#8220;how do we architect these systems?&#8221; and, more sobering, &#8220;what happens when we get it wrong?&#8221; We saw AI agents handling alert triage, watched the first documented AI-orchestrated espionage campaign unfold, and learned hard lessons about context engineering, guardrails, and operational complexity.</p><p>This year, we were drawn to writing from analysts and practitioners wrestling with implementations, researchers documenting risks, and experts explaining market shifts. 
As we head into the holiday break, here&#8217;s our essential reading from 2025 on AI-first security operations&#8212;posts that shaped the conversation, challenged the industry, and influenced how we build moving forward.</p><h2>The Industry Perspective</h2><p><strong><a href="https://softwareanalyst.substack.com/p/sacr-ai-soc-market-landscape-for">SACR AI SOC Market Landscape For 2025</a> by Francis Odum</strong></p><p>This 2025 report provides a rigorous evaluation of 13 leading <a href="https://docs.google.com/spreadsheets/d/1mZLj5WbEcrL6ASX9hLPRdProqie1NMa_/">AI SOC vendors</a> to help security leaders distinguish marketing claims from meaningful technical capabilities. Building on the foundation of Odum&#8217;s <a href="https://softwareanalyst.substack.com/p/revolutionizing-secuity-operations">2024 exploratory research</a>, the <a href="https://softwareanalyst.substack.com/p/sacr-ai-soc-market-landscape-for">report</a> offers a practical decision framework and defined architectural models to guide organizations through the phased adoption of agentic security automation.</p><p><strong><a href="https://medium.com/anton-on-security/decoupled-siem-where-i-think-we-are-now-89ab9f3df43f">Decoupled SIEM: Where I Think We Are Now?</a> by Anton Chuvakin</strong></p><p>This 2025 analysis by Anton Chuvakin examines the ongoing tension between &#8220;decoupled SIEM&#8221; architectures and the industry&#8217;s shift toward tightly integrated, AI-powered platforms. 
Building on his earlier debates regarding security data lakes, the post argues that while federated search and modular components offer a &#8220;romantic ideal,&#8221; the practical simplicity of unified platforms will likely see them &#8220;reign supreme&#8221; in the coming years.</p><p><strong><a href="https://nheudecker.medium.com/why-agentic-ai-startups-will-struggle-against-cybersecurity-incumbents-7750f6569deb">Why Agentic AI Startups Will Struggle Against Cybersecurity Incumbents</a> by Nick Heudecker</strong></p><p>This 2025 analysis by Nick Heudecker explores the significant &#8220;brick wall&#8221; facing agentic AI startups as they attempt to disrupt the SOC. The post argues that while startups focus on custom-tuned models, real differentiation comes from the massive volumes of telemetry data already owned by industry incumbents such as CrowdStrike and Palo Alto Networks. Ultimately, Heudecker suggests that startups without a proprietary data advantage risk becoming mere &#8220;features&#8221; of the giants they intended to replace.</p><h2>Technical Architecture &amp; Implementation</h2><p><strong><a href="https://simonwillison.net/2025/Apr/9/mcp-prompt-injection/">Model Context Protocol has prompt injection security problems</a> by Simon Willison</strong></p><p>Willison sounded the alarm on MCP security risks as early adoption accelerated, introducing the concept of the &#8220;Lethal Trifecta&#8221;: access to private data, exposure to malicious instructions, and the ability to exfiltrate information. 
This became required reading for anyone implementing MCP in security contexts, and the vulnerabilities he warned about materialized throughout the year, with 1,000-2,000 exposed MCP servers found without authentication.</p><p><strong><a href="https://block.github.io/goose/blog/2025/06/02/goose-panther-mcp/">Taking Flight with Goose and Panther MCP</a> by Tomasz Tchorz and Glenn Edwards</strong></p><p>This blog explores how <strong>Block</strong> is democratizing detection engineering by integrating its open-source AI agent, <strong><a href="https://github.com/block/goose">Goose</a></strong>, with the <strong><a href="https://github.com/panther-labs/mcp-panther">Panther MCP</a></strong><a href="https://github.com/panther-labs/mcp-panther"> server</a> to automate complex security workflows. By enabling natural language-to-rule generation and automated testing, the integration allows non-specialist engineers to contribute high-quality, production-ready security detections that were previously reserved for niche experts.</p><p><strong><a href="https://securetrajectories.substack.com/p/security-takeaways-from-2025-ai-engineer">Security Takeaways from 2025 AI Engineer World&#8217;s Fair</a> by Matt Maisel</strong></p><p>Maisel bridged the AI engineering and security communities better than anyone, capturing insights from the AI <a href="https://www.ai.engineer/">conference</a> that defined the year&#8217;s technical direction. 
His key observation was that &#8220;the industry&#8217;s focus is moving beyond the model itself and toward the broader systems in which agents operate&#8221;, which explained the shift we saw from foundation model discussions to context engineering and agent orchestration.</p><h2>Risks &amp; Reality Checks</h2><p><strong><a href="https://medium.com/@aryan.dcgpt/the-dark-side-of-llm-powered-security-automation-d59e044a852e">The Dark Side of LLM-Powered Security Automation</a> by Aryan D</strong></p><p>This post delivered a balanced treatment of AI security automation risks, covering indirect prompt injection, insecure output handling, and automation bias with technical specificity. Aryan&#8217;s warning that &#8220;security automation magnifies whatever you plug into it&#8221; resonated as incidents accumulated, and the post served as a practical checklist for teams deploying AI agents in production.</p><p><strong><a href="https://www.anthropic.com/news/disrupting-AI-espionage">Disrupting the first reported AI-orchestrated cyber espionage campaign</a> by Anthropic</strong></p><p>Not a blog post but an incident disclosure that changed the conversation. Anthropic documented a Chinese state-sponsored group using Claude for 80-90% of their attack operations against approximately 30 global entities. The AI executed &#8220;thousands of requests, often multiple per second,&#8221; with sophisticated operational security measures. This validated what many suspected, but few had documented: attackers are adopting agentic AI faster than defenders.</p><h2>From Detection at Scale</h2><p>We published several posts in 2025 that tried to push the conversation forward on AI-first security operations. 
</p><ul><li><p><a href="https://www.detectionatscale.com/p/the-agentic-siem">The Agentic SIEM</a> introduced the vision of AI agents as &#8220;analysts with impressive memories who never need coffee.&#8221;</p></li><li><p><strong><a href="https://www.detectionatscale.com/p/mcp-and-security-operations">MCP: Building Your SecOps AI Ecosystem</a></strong> broke down the paradigms and tradeoffs of implementing MCP servers in the SOC.</p></li><li><p><a href="https://www.detectionatscale.com/p/the-cursor-moment-for-security-operations">The Cursor Moment for Security Operations</a> reinforced AI as a powerful assistant rather than a carte blanche replacement for human intuition.</p></li><li><p><a href="https://www.detectionatscale.com/p/context-engineering-ai-security-operations">The Data Your AI-Powered SOC Needs</a> introduced the four-layer context engineering framework for powering SecOps AI agents.</p></li><li><p><a href="https://www.detectionatscale.com/p/ai-security-operations-2025-patterns">The State of AI in Security Operations: 5 Patterns That Defined 2025</a> synthesized what we learned through the various podcasts and posts.</p></li></ul><div><hr></div><p>We&#8217;re taking a break for the holidays and will be back in January with fresh perspectives on where AI-first security operations are heading in 2026. In the meantime, if you haven&#8217;t caught up on this year&#8217;s essential reading, there&#8217;s no better time than a quiet week between Christmas and New Year&#8217;s! </p><p>Happy holidays from the team at <a href="https://panther.com/">Panther</a>!</p><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/2025-wrapped-ai-security-operations-reading?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Detection at Scale! 
If you enjoyed reading, please share it!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/2025-wrapped-ai-security-operations-reading?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/p/2025-wrapped-ai-security-operations-reading?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div>]]></content:encoded></item><item><title><![CDATA[D@S #71 - CoreWeave's Slava Klimovs on Threat-Model-Driven Security and the AI-First Future]]></title><description><![CDATA[Bootstrapping security at hyperscale, the AIUC-1 standard for agents, and the shift to AI-first detection engineering.]]></description><link>https://www.detectionatscale.com/p/ds-71-slava-klimovs-coreweave-threat-modeling-ai-agents</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ds-71-slava-klimovs-coreweave-threat-modeling-ai-agents</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Mon, 15 Dec 2025 14:13:29 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/181457650/f95a9b83e0a3ad12dea1f29fe196c21d.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In the latest episode of Detection at Scale, we sat down with Vjaceslavs (Slava) Klimovs, a security leader at CoreWeave responsible for threat modeling, detection, prevention, response, and compliance. 
With 13 years at Google working on infrastructure security, followed by 18 months at Snapchat and now at CoreWeave, Slava brings a hard-earned perspective on bootstrapping security programs in high-growth environments.</p><p>Our conversation explores his perspective that 40-50% of security work isn&#8217;t tied to concrete threat models, why detection observability should precede prevention controls in fast-moving environments, and how AI agents will make previously tolerable security gaps catastrophically exploitable.</p><p>Slava&#8217;s zero-to-one journey at CoreWeave reveals how security leaders must prioritize when resources are constrained, and the business is moving at breakneck speed. His framework for threat-model-driven work, his mandate that the new detection platform be &#8220;AI-first from the get-go,&#8221; and his work on host integrity from firmware through userspace offer concrete examples for practitioners building similar programs. This conversation cuts through abstract security principles to focus on implementation: what to build first, how to justify security investments through threat models, and why the age of AI agents fundamentally changes the calculus on security debt.</p><h4><strong>Topics Covered</strong></h4><ul><li><p><strong>Building Security from Zero to One:</strong> Slava&#8217;s experience joining CoreWeave and the process of bootstrapping a security program at a hyper-growth AI infrastructure company.</p></li><li><p><strong>Observability vs. 
Prevention:</strong> Why establishing deep security observability and forensic capabilities is often less intrusive and more critical than rolling out heavy-handed prevention controls early on in a fast-moving environment.</p></li><li><p><strong>The &#8220;Threat Model&#8221; Problem:</strong> Slava&#8217;s hot take that 40-50% of security work is not done in relation to a concrete threat model, often driven by a culture of chasing &#8220;flashy&#8221; projects over solving complex, unglamorous problems.</p></li><li><p><strong>Host Integrity at Scale:</strong> How CoreWeave verifies software provenance and integrity from the firmware level up to the user space, treating the boot process as a single verifiable model.</p></li><li><p><strong>AI Agents &amp; Technical Debt:</strong> How the introduction of AI agents into the enterprise will make historical technical debt (like over-provisioned access or exportable bearer tokens) unforgivable and immediately risky.</p></li><li><p><strong>LLMs for Engineering Rigor:</strong> Using LLMs to strip the &#8220;fluff&#8221; from engineering design docs to force engineers to expose their true human intuition and local context, rather than just generating boilerplate content.</p></li><li><p><strong>The AIUC-1 Standard:</strong> An overview of Slava&#8217;s contribution to the <a href="https://www.aiuc-1.com/">AIUC-1</a> standard for AI agent insurance, focusing on determining if an agent&#8217;s software provenance and environment make it &#8220;insurable&#8221;.</p></li><li><p><strong>The Evolution of the SOC:</strong> The shift toward &#8220;AI-first&#8221; detection platforms and why the role of the traditional analyst is evolving into end-to-end detection engineering, where manual log analysis is replaced by engineering reliable detection code.</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe 
now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>The transformation Slava describes aligns with Panther&#8217;s AI-powered capabilities&#8211;automatically handling the initial analysis that Slava emphasized as critical for reducing investigation time, while maintaining the human-in-the-loop validation. By automating the pattern matching and correlation that LLMs excel at, security teams can focus on the threat modeling and strategic security decisions that require human expertise. <a href="https://panther.com/product/panther-ai">Learn more about Panther AI </a>and how we&#8217;re building the AI-first SIEM for the modern SOC.</em></p><div><hr></div><h4>Recent Posts</h4><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;8c42e1d7-2072-4cf0-bbb0-ad2db461d92c&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes it possible. I&#8217;m Jack, founder &amp; CTO at Panther. 
If you find this valuable, please share it with your team.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The State of AI in Security Operations: 5 Patterns That Defined 2025&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-12-10T14:25:46.313Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/21eaab57-2ca1-4160-aea9-d80a22479037_1920x1280.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ai-security-operations-2025-patterns&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:181189996,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:2,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;1169a3bc-b42a-4dcf-949f-1914befedb0d&quot;,&quot;caption&quot;:&quot;Over the past year, we&#8217;ve seen an explosion of AI capabilities built directly into security 
products: intelligent triage assistants, automated investigation tools, and AI-powered rule generation. These vendor-built capabilities deliver significant value by bringing sophisticated AI directly to security practitioners without requiring teams to become exp&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Building Custom AI SOC Agents with MCP&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-11-17T14:25:38.913Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/52437860-849f-4d5e-9e6f-56a0e3efd808_3840x2160.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/building-custom-ai-soc-agents-with-mcp&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:179109080,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:2,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h4><strong>More Episodes</strong></h4><div 
class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;630f9dae-d711-45f7-874d-c5112459a253&quot;,&quot;caption&quot;:&quot;In the latest episode of Detection at Scale I had a great conversation with Ken Bowles, Director of Security Operations at GreenSky, to explore how AI is transforming day-to-day security work beyond the hype. With 15 years in security operations spanning healthcare and fintech, Ken brings a grounded perspective on what&#8217;s actually working in production v&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;D@S #70 - GreenSky's Ken Bowles on Protecting Crown Jewels First and AI's Real Role in the SOC&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-11-26T09:15:34.467Z&quot;,&quot;cover_image&quot;:&quot;https://substack-video.s3.amazonaws.com/video_upload/post/179917272/e9c18c98-cfb0-4816-a928-d6ccc9d24163/transcoded-1764077211.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ep-68-greenskys-ken-bowles-ai-practical-impact-security-operations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:&quot;e9c18c98-cfb0-4816-a928-d6ccc9d24163&quot;,&quot;id&quot;:179917272,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:2,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at 
Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;d1316832-455a-4975-986d-aaf079781be7&quot;,&quot;caption&quot;:&quot;There&#8217;s a shift underway in the implementation of AI in security operations. Tyler Martin, Senior Director of Security Engineering at FanDuel, has been working at the edge of this transformation, building custom AI agents that have changed the scaling laws for his team. Instead of the traditional tier 1-2-3 analyst model, they develop and maintain agent&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;D@S #69 - FanDuel's All-Engineer SOC: From Phishing to IR with Custom Agents&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and 
response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-11-18T14:15:36.177Z&quot;,&quot;cover_image&quot;:&quot;https://substack-video.s3.amazonaws.com/video_upload/post/179133642/c4739c2f-b6e3-404b-af2f-326820efb1dc/transcoded-1763381451.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/ep-69-tyler-martin-fanduel-ai-agents-secops-automation&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:&quot;c4739c2f-b6e3-404b-af2f-326820efb1dc&quot;,&quot;id&quot;:179133642,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[The State of AI in Security Operations: 5 Patterns That Defined 2025]]></title><description><![CDATA[From cautious experiments to production deployments&#8212;what worked, what didn't, and where we're headed.]]></description><link>https://www.detectionatscale.com/p/ai-security-operations-2025-patterns</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ai-security-operations-2025-patterns</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Wed, 10 Dec 2025 14:25:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/21eaab57-2ca1-4160-aea9-d80a22479037_1920x1280.jpeg" 
length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Detection at Scale, a weekly newsletter on AI-first security operations, detection engineering, and the infrastructure that makes it possible. I&#8217;m Jack, founder &amp; CTO at <a href="https://panther.com/">Panther</a>. If you find this valuable, please share it with your team.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p>2025 was a pivotal year for AI in security operations, where agent deployments transitioned from cautiously optimistic experiments to operational realities driven by excitement and mandates to adopt AI. This rise was characterized by autonomous alert triage, threat hunting across vast security data, and intelligent detection tuning. While teams are still striking the right balance between human and agent decision-making, nearly everyone has realized the early benefits of increased efficiency, team capacity, deeper knowledge, and productivity gains enabled by AI.</p><p><strong>2025 delivered what enterprises needed: AI models that are simultaneously cheaper and smarter</strong>, providing continuous upgrades and economies of scale throughout the year. Frontier models like Claude 4.5 and GPT-5 pushed the boundaries for reasoning, tool calling, and intelligence, while expanding context windows to maintain state across complex investigations without losing critical details. Protocols like MCP connected thousands of applications to these models, powering one of the most important recent trends in AI: context engineering. 
Teams can now reach across the IT, ops, and security stack to orchestrate response and manage incidents, powered by AI.</p><p>There was also a rise in technical security teams building their own SOC agents to serve bespoke internal use cases and augment vendor capabilities. As security vendors introduced their MCP servers, and tools like Claude Code became standard in the enterprise, the barrier to entry dropped significantly. Security teams can now scale at the pace of AI innovation. Sophisticated AI capabilities are accessible to every security team that&#8217;s ready to adopt them. </p><p>As we navigate this exciting technological shift, a few patterns have become clear:</p><ol><li><p>Context engineering is the key to effective agents</p></li><li><p>Agent accuracy is preferred over speed, but both are critical</p></li><li><p>Data privacy is table stakes for commercial AI solutions</p></li><li><p>Autonomy with human-in-the-loop balances productivity with safety</p></li><li><p>Focused agents are preferred for specialized security workflows</p></li></ol><h3>Context Engineering Powers Effective Agents</h3><p>AI in security operations is fundamentally a context problem, requiring broad, deep knowledge of an organization&#8217;s people, technology, history, and threat models. When agents can access diverse telemetry (e.g., to understand the criticality of an asset), analysis accuracy dramatically increases. Without this foundation, even leading models deliver shallow results that create more work than they save, which is every security team&#8217;s worst-case scenario. <strong>Agents need the same access to data and tools as their human counterparts so they can continue complementing one another&#8217;s strengths.</strong></p><p>Consider a data exfiltration scenario that security teams often encounter: an alert fires when an employee downloads a large volume of files from a sensitive internal repository. 
<strong>The alert tells us what happened, but critical questions immediately arise to find the why</strong>: Is this employee in a role that normally accesses this data? Was this a personal machine? Have they exhibited other suspicious data access patterns recently? </p><p>Traditional SIEM workflows require analysts to manually gather surrounding information through separate queries, forcing them to context-switch and rebuild their mental model of the investigation with each query. Effective agents encode the organizational context needed to answer these questions&#8212;pulling employee profiles from identity providers, applying user behavior analytics to detect anomalies, referencing location and device profiles, and following detection-specific runbooks that guide scenario analysis. Agents provide efficient tool calling and context management, making only the necessary queries to test hypotheses and presenting judgment for final approval.</p><p>Teams that invested early in data lakes and structured security data pipelines built exactly this foundation. They can now feed agents enrichments, historical patterns, and organizational context, transforming a 30-minute manual investigation into a 2-3-minute agent-assisted analysis.</p><p>While agents have transformed how teams automate context gathering, teams have strong opinions about expectations for analytical and reasoning capabilities, especially when precision is required. The challenge is ensuring they reason correctly about what that context means.</p><h3>Accuracy &gt; Speed</h3><p>A clear expectation emerged in 2025: security teams prefer accuracy over speed. &#8220;I&#8217;d rather it take longer and be right&#8221; became a common refrain, with current expectations settling around five minutes or less for alert triage and investigation tasks. 
This reflects the reality that proper context gathering takes time, and agents need to query multiple systems, correlate signals, and apply organizational context before reaching conclusions. <strong>Security teams cannot afford to draw the wrong conclusions due to an incorrect tool call or incomplete context.</strong> The advantage of AI agents isn&#8217;t purely about instant responses; it&#8217;s about compressing what used to take 30 minutes of manual work into a few minutes of agent-orchestrated analysis.</p><p>Transparency in <em>how</em> agents reach conclusions has become equally critical for production deployments. Agents that don&#8217;t show their reasoning, or that present conclusions without attribution to specific evidence or tools, waste analyst time rather than amplify it. When an agent says &#8220;this alert is a false positive&#8221; without explaining why, the analyst must either blindly trust the conclusion or repeat the entire investigation to verify it. Both outcomes erode trust and create friction. Agents gaining traction in production environments expose their reasoning, show which tools were called and what data was retrieved, and make it easy for analysts to verify, challenge, or follow up on any conclusions. Accuracy and transparency aren&#8217;t separate requirements&#8212;they&#8217;re two aspects of the same fundamental need: agents that security teams can trust to make increasingly autonomous decisions.</p><h3><strong>Data Privacy Is Non-Negotiable</strong></h3><p>Security operations teams have made their data privacy expectations unambiguous in 2025. Both the security telemetry fed into agents and the analytical conclusions coming out must remain under strict organizational control. Security leaders closely scrutinize solutions that require fine-tuning or model training to be effective, refusing to send proprietary data to third parties or create new exfiltration risks in the name of security automation. 
The synthesized insights that agents produce&#8212;correlations between alerts, user behavior patterns, threat actor attribution&#8212;often carry more strategic sensitivity than any individual log event.</p><p>This requirement shapes the entire approach to agent implementation. Teams want to leverage AI capabilities without sending sensitive data for model training, without building dependencies on models trained on their proprietary information, and without creating new compliance headaches. The good news is that zero-shot and few-shot capabilities in frontier models have reached the threshold where fine-tuning is genuinely unnecessary for most agentic workflows in the SOC. <strong>Agents can be effective through prompt engineering, tool access, and retrieval-augmented generation rather than requiring custom model training.</strong> The data stays in your data lake, the context is assembled at inference time, and the agent reasons over it without any information leaving your control.</p><h3>Autonomy with Human-in-the-Loop</h3><p>The autonomy conversation in security operations has matured significantly in 2025. Early reactions swung from banning AI tools entirely to unrealistic expectations that agents would replace Tier 1 analysts. Teams have settled into a more pragmatic model: <strong>AI-assisted humans with increasing levels of autonomy based on confidence and risk</strong>. The goal with agents is to automate the repetitive grunt work of context gathering that consumes valuable analyst time but resists deterministic, rule-based automation. Agents can now handle the initial alert assessment, dynamically adjust priorities based on context, and enrich alerts with threat intelligence before an analyst ever sees them. 
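As a loose sketch of that assessment-and-enrichment loop, the following shows an agent pass that enriches an alert via stubbed tool calls and returns a verdict with the evidence it used. All tool stubs, field names, and thresholds here are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    user: str
    severity: str
    context: dict = field(default_factory=dict)

def lookup_identity(user):
    # Stand-in for an identity-provider lookup (role, recent travel, etc.).
    return {"role": "engineer", "recent_travel": False}

def lookup_threat_intel(alert):
    # Stand-in for a threat-intelligence enrichment call.
    return {"known_bad_indicator": False}

def triage(alert):
    """Enrich the alert, then return a verdict plus the evidence consulted."""
    alert.context["identity"] = lookup_identity(alert.user)
    alert.context["intel"] = lookup_threat_intel(alert)
    if alert.context["intel"]["known_bad_indicator"]:
        verdict = "escalate"
    elif alert.severity == "low" and not alert.context["identity"]["recent_travel"]:
        verdict = "close"
    else:
        verdict = "human_review"
    # Surface which tools contributed, so an analyst can verify the reasoning.
    return {"verdict": verdict, "evidence": list(alert.context)}

result = triage(Alert(id="a-1", user="alice", severity="low"))
```

The point of the `evidence` field is the transparency requirement above: a conclusion travels with the tools that produced it, so an analyst can challenge any step without re-running the investigation.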
This progression happens in phases: the &#8220;crawl&#8221; phase involves simple enrichment/summarization, the &#8220;walk&#8221; phase involves agents applying reasoning models to make judgments about alerts, while the &#8220;run&#8221; phase extends that reasoning into automated containment and remediation actions for high-confidence scenarios.</p><p>The human role is fundamentally shifting from assessment to oversight. Traditionally, analysts spent 15 to 30 minutes per alert gathering context and deciding next steps&#8212;a time-consuming process that agents now handle far more efficiently. The analyst&#8217;s interaction moves upstream: instead of investigating every alert from scratch, they validate the agent&#8217;s work, provide additional context when the agent escalates uncertainty, and focus on the complex cases that genuinely require nuanced human judgment. At the highest level of autonomy, analysts transition from reviewing individual alerts to managing a team of agents, auditing their output weekly or monthly rather than operating in a constant triage cycle. The human remains in the loop, but the loop itself has changed&#8212;less time triaging alerts means more time improving detection logic, refining agent workflows, and addressing the novel threats that agents correctly escalate.</p><p>This balanced approach of autonomy with guardrails has emerged as the most successful path forward among CISOs and security practitioners. Agents handle the repetitive, time-intensive work of context gathering and initial assessment, freeing analysts to apply their expertise where it matters most. 
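One way to picture the crawl/walk/run guardrails just described is as an allow-list of actions per phase, with consequential actions additionally gated on confidence and risk. The phase names come from the text; the specific actions and thresholds are illustrative assumptions:

```python
# Actions an agent may take autonomously in each maturity phase (illustrative).
ALLOWED_ACTIONS = {
    "crawl": {"enrich", "summarize"},
    "walk":  {"enrich", "summarize", "close_benign", "escalate"},
    "run":   {"enrich", "summarize", "close_benign", "escalate", "contain"},
}

def permitted(phase, action, confidence, high_risk):
    """Allow an autonomous action only if the phase permits it and,
    for consequential actions, confidence is high and risk is low."""
    if action not in ALLOWED_ACTIONS[phase]:
        return False
    if action in {"close_benign", "contain"}:
        return confidence >= 0.9 and not high_risk
    return True

checks = [
    permitted("crawl", "close_benign", 0.99, False),  # phase forbids closures
    permitted("walk", "close_benign", 0.95, False),   # allowed: confident + low risk
    permitted("run", "contain", 0.95, True),          # high risk: needs a human
]
```

Everything that falls through the gate lands back with a human, which is the "autonomy with guardrails" posture in miniature.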
The productivity gains are substantial, but the model only works when agents earn trust through transparency, accuracy, and knowing when to escalate rather than conclude.</p><h3>Focused Agents</h3><p>The most successful agent deployments in 2025 followed a pattern that mirrors how human security teams actually operate: specialized agents working together, rather than a single generalist agent attempting to replace an entire analyst. The &#8220;Uber agent&#8221; that can handle every aspect of security operations doesn&#8217;t exist yet, and teams that have tried to build one have found that generalization comes at the cost of effectiveness. Instead, organizations are deploying focused agents with narrow, well-defined responsibilities&#8212;such as a CloudTrail analysis agent that specializes in AWS activity patterns or a detection-tuning agent that optimizes rule performance based on noisy alerts. Each agent becomes an expert in its domain, and collectively they make the SOC more effective.</p><p>This architecture enables transfer learning and feedback loops across the entire detection and response lifecycle. When the triage agent repeatedly classifies a specific alert pattern as benign, the detection-tuning agent can adjust thresholds or add filters. These specialized agents working in concert create a system that learns and improves continuously, with each agent contributing expertise that compounds across the team. We can now deploy these specialists with coordination overhead handled programmatically rather than through meetings and handoffs.</p><h2>Looking Ahead</h2><p>The contrast between where we started in 2025 and where we ended is remarkable. Security teams entered the year cautiously experimenting with AI capabilities, unsure of what was hype and what was real. We&#8217;re closing the year with agents deployed in production, handling thousands of alerts, running investigations 80% faster, and proving their value in measurable ways. 
The foundational investments in data lakes, structured telemetry, and detection-as-code are paying dividends, making agents effective rather than just impressive demos.</p><p>The future of security operations is a blend of human expertise with agent capabilities. The teams succeeding are amplifying analyst knowledge by automating the repetitive context gathering, initial assessment, and routine response actions that consumed so much time. They&#8217;re building focused agents that work together like a well-coordinated team, with humans managing the agent team rather than drowning in alert queues. They&#8217;re prioritizing accuracy and transparency over speed, maintaining strict data privacy, and implementing autonomy with appropriate guardrails. The shift from traditional SIEM to agent-driven security operations isn&#8217;t complete, but the path forward is clear!</p><div><hr></div><p><em>Thanks for reading Detection at Scale. If you found this valuable, please share it with your colleagues who are exploring AI-powered automation in security operations!</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/ai-security-operations-2025-patterns?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/p/ai-security-operations-2025-patterns?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p><em>Cover photo by <a href="https://unsplash.com/@christopher__burns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Christopher Burns</a> on <a href="https://unsplash.com/photos/white-and-black-digital-wallpaper-Kj2SaNHG-hg?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p><div><hr></div><h3>Related AI Posts 
in 2025</h3><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;3a807436-cb5a-40ad-9442-99d97f36fdb2&quot;,&quot;caption&quot;:&quot;Security operations teams have spent years trying to build the perfect integration layer between their tools and workflows. We've gone from manual API scripts to elaborate SOAR platforms, yet most security analysts still jump between countless tabs and interfaces during investigations. While generative AI has reshaped how we interact with data, connecti&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;MCP: Building Your SecOps AI Ecosystem&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-04-02T13:29:59.951Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47e512e4-2aa6-423c-ac34-60dfbabd4460_1408x768.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/mcp-and-security-operations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:160394652,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:25,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at 
Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;89ca1d12-a3e9-4815-84bd-1330d7ccc192&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale&#8212;a weekly newsletter covering security monitoring, cloud infrastructure, the latest breaches, and more. Enjoy!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Agentic SIEM&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-01-21T14:06:49.285Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4962c-afed-4aa8-89a6-b532f2e52ecb_1408x768.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/the-agentic-siem&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:155046728,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:16,&quot;comment_count&quot;:1,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection 
at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;ad4d741a-f7c4-4006-9acb-420a541d7583&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale&#8212;a weekly newsletter diving into SIEM, generative AI, cloud-centric security monitoring, and more. Enjoy! If you enjoy reading Detection at Scale, please share it with your friends!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Cursor Moment for Security Operations&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and 
response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-06-16T13:08:41.049Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/72062601-eeaf-40ba-9c0d-3685d97a59ba_1408x768.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/the-cursor-moment-for-security-operations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:166042003,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:14,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[D@S #70 - GreenSky's Ken Bowles on Protecting Crown Jewels First and AI's Real Role in the SOC]]></title><description><![CDATA[Ken Bowles on navigating AI hype in the SOC, prioritizing crown jewels over comprehensive coverage, and why human judgment remains irreplaceable]]></description><link>https://www.detectionatscale.com/p/ep-68-greenskys-ken-bowles-ai-practical-impact-security-operations</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ep-68-greenskys-ken-bowles-ai-practical-impact-security-operations</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Wed, 26 Nov 2025 09:15:34 GMT</pubDate><enclosure 
url="https://api.substack.com/feed/podcast/179917272/51ba20a79e4bbc7fefbfef0dda5d7d87.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In the latest episode of Detection at Scale I had a great conversation with Ken Bowles, Director of Security Operations at GreenSky, to explore how AI is transforming day-to-day security work beyond the hype. With 15 years in security operations spanning healthcare and fintech, Ken brings a grounded perspective on what&#8217;s actually working in production versus what remains aspirational. The conversation cuts through vendor buzzwords to reveal practical insights about leveraging AI for alert investigation, the evolution of detection strategies, and why understanding your data is more critical than ever in the age of large language models.</p><p>Ken&#8217;s journey from healthcare security at Tempest to securing credit card data at GreenSky provides a unique lens on how security operations have evolved from basic alerting to AI-enhanced investigation. His emphasis on protecting the crown jewels first, embracing automation pragmatically, and maintaining healthy skepticism about AI&#8217;s limitations offers a refreshing counterpoint to the &#8220;AI will solve everything&#8221; narrative that dominates vendor pitches. 
This is a conversation about real-world implementation challenges, the changing role of security analysts, and why the fundamentals still matter even as the tools become more sophisticated.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><h3><strong>Key Takeaways</strong></h3><ol><li><p><strong>Prioritize Crown Jewels and Work Outward: </strong>Ken emphasizes starting with what matters most&#8212;identifying your organization&#8217;s most critical assets (such as credit card data at GreenSky) &#8212; and building security controls outward from there. This focused approach prevents teams from drowning in generic alerts and ensures resources are allocated where they&#8217;ll have the most significant impact on actual risk reduction.</p></li><li><p><strong>AI Enables the &#8220;Single Pane of Glass&#8221; Through Context, Not Dashboards: </strong>Rather than forcing analysts to context-switch between multiple tools, AI acts as intelligent middleware, pulling data from EDR, SIEM, email security, and identity platforms into a cohesive alert context. This reduces investigation time dramatically by having AI assemble the complete picture before an analyst even starts looking, transforming what used to take 30+ minutes into near-instant contextual awareness.</p></li><li><p><strong>Detection Strategy Needs Nuance Beyond MITRE Framework Coverage: </strong>While MITRE ATT&amp;CK provides valuable guidelines, Ken cautions against the audit-driven mentality of &#8220;we need an alert for everything.&#8221; Not every technique applies to every organization, and trying to alert on everything leads to analyst burnout. 
The more intelligent approach involves understanding your threat model, implementing compensating controls where possible, and focusing detections on what actually matters in your specific environment.</p></li><li><p><strong>Human Judgment Remains Essential Despite AI Advances:</strong> Ken draws a critical distinction&#8212;AI excels at pattern matching and data analysis but cannot determine intent, which is fundamental to security analysis. While AI can flag that someone accessed sensitive data from an unusual location, only humans can decide whether it&#8217;s a legitimate business trip or a credential compromise. This understanding should shape how teams deploy AI: as a force multiplier for analysts, not a replacement.</p></li><li><p><strong>Audit Your Controls Because Tech Debt Compounds Security Risk: </strong>Ken shares a hard-earned lesson: security controls established years ago often drift as ownership changes hands, configurations evolve, and cloud environments grow more complex. Regular auditing of access control lists, security group rules, and other foundational controls is essential because that &#8220;tiny little crack&#8221; in your defenses often emerges from accumulated changes no single person fully understands anymore.</p></li></ol><div><hr></div><p><em>The practical AI implementation Ken describes reflects Panther&#8217;s approach to enhancing security operations. Panther AI handles the context gathering that Ken emphasized, providing that &#8220;single pane of glass&#8221; through intelligent enrichment rather than dashboard sprawl. This allows your analysts to focus on validating AI judgment rather than spending time manually pivoting across multiple tools. 
<a href="https://panther.com/product/panther-ai">Learn more about Panther AI</a> and how we are keeping human expertise at the center of security operations.</em></p>]]></content:encoded></item><item><title><![CDATA[D@S #69 - FanDuel's All-Engineer SOC: From Phishing to IR with Custom Agents]]></title><description><![CDATA[Tyler Martin on eliminating tier 1-3 analyst work, automating incident response from Slack, and why security teams need to think about "context rot".]]></description><link>https://www.detectionatscale.com/p/ep-69-tyler-martin-fanduel-ai-agents-secops-automation</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ep-69-tyler-martin-fanduel-ai-agents-secops-automation</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Tue, 18 Nov 2025 14:15:36 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179133642/c271557d488e2484ab35c33b4c2d6b65.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>There&#8217;s a shift underway in the implementation of AI in security operations. Tyler Martin, Senior Director of Security Engineering at FanDuel, has been working at the edge of this transformation, building custom AI agents that have changed the scaling laws for his team. Instead of the traditional tier 1-2-3 analyst model, they develop and maintain agents that autonomously handle what used to be manual triage work.</p><p>Tyler&#8217;s path from accidentally enumerating a healthcare database as a Java developer to leading one of the more innovative SecOps teams around provides good context for this conversation. His team has built multiple specialized agents, including &#8220;SAGE&#8221; (Security Analysis and Guided Escalation) for phishing, account takeover, and an incident response automation that runs entire IR workflows via Slack. 
The results are notable: a 90% reduction in incomplete post-incident review action items and engineers spending their time building rather than working on tickets.</p><p>This conversation gets past the AI hype to discuss the practical realities of building production-grade security agents. Tyler talks about everything from the &#8220;bronze-silver-gold&#8221; approach to automation maturity, to managing context rot in LLMs, to why the industry needs to start measuring different things. The most helpful part is understanding why starting with specific, high-volume use cases and gradually expanding works better than trying to automate everything at once.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><h3><strong>Key Takeaways</strong></h3><ol><li><p><strong>Moving to All-Engineering Teams Enables Better AI Outcomes</strong>: By dropping the traditional tier 1-2-3 analyst model and staffing entirely with security engineers, FanDuel created a team that can build and maintain its own agentic AI systems. This change unlocked the ability to continuously improve automation rather than operate it, shifting from &#8220;working tickets&#8221; to &#8220;building systems that work tickets.&#8221;</p></li><li><p><strong>Context Rot is the New Challenge in Agent Design</strong>: Just as analysts can be overwhelmed with too much information, AI agents suffer from &#8220;context rot&#8221; when given excessive data. The key is finding the right amount of information&#8212;enough signal for accurate decisions without overwhelming the model&#8217;s attention. 
This requires careful thought about what data enters the context window and in what order, similar to how you&#8217;d organize information for a human analyst.</p></li><li><p><strong>The Bronze-Silver-Gold Maturity Model for AI Automation</strong>: Start with bronze (human-in-the-loop validation, no automated closures), move to silver (automated closures with some manual intervention), and eventually reach gold (fully autonomous triage). This phased approach lets teams build confidence, identify missing context, and add necessary integrations step by step rather than attempting full automation right away, which usually fails.</p></li><li><p><strong>Runbooks are Now AI Agent Instructions, Not Human Documentation</strong>: The traditional detection runbook has evolved from documentation for human analysts to specific instructions for AI agents. While basic investigation steps should be in the agent&#8217;s system prompt, runbooks should focus on the context and investigation patterns unique to each detection. This is where prompt engineering becomes a critical security skill.</p></li><li><p><strong>Incident Response Automation Through Slack Changes Things</strong>: FanDuel&#8217;s IR automation handles entire incident response workflows with simple Slack commands&#8212;automatically creating channels, inviting stakeholders, spinning up Zoom bridges with recording enabled, generating real-time documentation from transcripts, and assigning action items to specific team members in Confluence. This solved the problem of incomplete PIR action items and significantly reduced post-incident administrative work.</p></li></ol><div><hr></div><p><em>The transformation Tyler describes aligns with where Panther is going with our AI-powered triage capabilities. Our agent automatically handles the assessment layer Tyler discussed, gathering context and presenting risk-based summaries so your team can focus on investigation and response rather than manual enrichment. 
Learn more about <a href="https://panther.com/product/panther-ai">Panther AI</a> and how we&#8217;re helping teams make this transition.</em></p>]]></content:encoded></item><item><title><![CDATA[Building Custom AI SOC Agents with MCP]]></title><description><![CDATA[How security teams are orchestrating vendor capabilities with internal tooling through conversational bots, workflow automation, and enhanced developer tools]]></description><link>https://www.detectionatscale.com/p/building-custom-ai-soc-agents-with-mcp</link><guid isPermaLink="false">https://www.detectionatscale.com/p/building-custom-ai-soc-agents-with-mcp</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Mon, 17 Nov 2025 14:25:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/52437860-849f-4d5e-9e6f-56a0e3efd808_3840x2160.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the past year, we&#8217;ve seen an explosion of AI capabilities built directly into security products: intelligent triage assistants, automated investigation tools, and AI-powered rule generation. These vendor-built capabilities deliver significant value by bringing sophisticated AI directly to security practitioners without requiring teams to become experts in agents or machine learning. But technical security teams have begun augmenting with custom agents that act as connective tissue between vendor-built AI capabilities, internal services, and organization-specific context to accomplish bespoke workflows.</p><p>MCP continues to play an essential role for technical teams that want to orchestrate multiple capabilities rather than use them in isolation. 
Consider a typical alert triage scenario: your agent needs to check your internal runbook database, query your SIEM for related activity, pull employee context from your HR system, and correlate findings from your threat intelligence platform, all while maintaining conversation context in your team&#8217;s Slack channel. This is where MCP&#8217;s standardized approach to tool integration creates leverage, allowing teams to combine pre-built agents from various vendors with their own internal tooling and custom MCP servers without writing extensive integration code for each connection.</p><p>The pattern emerging from organizations implementing custom security agents breaks down into three distinct approaches: <strong>conversational interfaces</strong> embedded in communication platforms like Slack, <strong>orchestrated multi-agent workflows</strong> coordinated through tools like n8n, and <strong>enhanced developer environments</strong> that bring security context directly into coding assistants like Claude Code or Cursor. </p><p>In this post, we&#8217;ll explore how these custom agents work, what makes them effective, and the practical challenges teams are encountering as they build this connective tissue between vendor capabilities and internal operations.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h4>Hear from Practitioners About Real-World Implementations!</h4><p>This Thursday, I&#8217;m hosting a webinar featuring security practitioners from companies like OpenTable and Block who run MCP in production settings. 
They&#8217;ll walk through their implementations, share what&#8217;s working, and lessons from workflows they&#8217;ve automated. If you&#8217;re building custom agents, this conversation will save you weeks of trial-and-error. </p><p><em><strong><a href="https://panther.com/webinar/how-mcp-helps-security-teams-move-faster">Register for &#8220;How MCP Helps Security Teams Move Faster&#8221;</a></strong>, Thursday, Nov 20th, 2025!</em></p><div><hr></div><h2>The Custom Agent Opportunity</h2><p>Vendor-built AI capabilities excel at addressing common security workflows that span many organizations, like analyzing alerts and synthesizing investigation findings. These capabilities become even more powerful when connected to organization-specific context and internal systems that vendors can&#8217;t access or don&#8217;t know exist. Your internal runbook database, your custom asset inventory system, your team&#8217;s historical incident data, or your specific escalation logic represents the connective tissue that transforms generic AI capabilities into precisely tuned automation for your environment.</p><p>Consider a typical alert triage scenario. A vendor-provided triage agent can analyze the technical indicators of an alert and assess its severity using general threat intelligence. But it can&#8217;t (always) check whether this user recently submitted an IT ticket about unusual activity, whether this asset is scheduled for decommissioning next week, whether similar patterns triggered false positives in your environment last month, or whether this matches the specific attack scenarios your threat modeling identified as high-risk for your industry. Custom agents fill these gaps by orchestrating multiple capabilities together and combining vendor-built analysis with internal context gathering and organization-specific decision logic.</p><p>MCP&#8217;s standardized approach to tool integration is what makes this orchestration practical for technical security teams. 
Rather than writing custom integration code for each vendor API and maintaining separate authentication mechanisms for every tool, teams can expose capabilities through MCP servers and let agents discover and use them through a common protocol. This standardization reduces the integration burden from an N&#215;M problem (where every agent needs custom code for every tool) to an N+M one: each capability is built once as an MCP server and reused across different agents and workflows. The result is connective tissue that&#8217;s maintainable by security engineers rather than requiring dedicated integration development teams.</p><h2>Common Patterns of Custom Agents</h2><p>Custom agents manifest in multiple distinct forms, each optimized for different security workflows and team interaction patterns. Understanding which pattern fits your use case determines both implementation complexity and operational effectiveness.</p><h3>Pattern 1: Conversational Interfaces (Slackbots and Chatbots)</h3><p>Conversational agents embedded in communication platforms like Slack or Microsoft Teams bring AI capabilities directly into the flow of security operations. These agents respond to natural language queries in channels or threads, maintaining conversation context while orchestrating multiple MCP servers to gather information and execute actions. The interface is familiar: just @-mention the bot and ask a question.</p><p>The power of conversational agents lies in their ability to meet analysts where they already work. During alert triage, an analyst can ask &#8220;what&#8217;s the context around this AWS console login from Romania?&#8221; and the agent orchestrates multiple lookups: checking if the user has traveled recently (HR system MCP server), reviewing their normal login patterns (SIEM MCP server), examining recent access modifications (IAM MCP server), and correlating with threat intelligence (vendor MCP server). 
All of this happens in a Slack thread where the entire team can see the investigation, ask follow-up questions, and collaborate on the response. The agent becomes a force multiplier that eliminates context switching while maintaining natural team communication patterns.</p><p>Implementation typically involves connecting a Slackbot application server (for sending and receiving messages), your SIEM MCP server (for querying security data), and any custom MCP servers you build to expose internal systems. The agent uses an LLM to understand queries and determine which tools to invoke, then formats results back into conversational responses. Thread context provides automatic conversation history, and the human-in-the-loop design ensures analysts maintain control over decisions while the agent handles the mechanical work of gathering context and synthesizing findings.</p><h3>Pattern 2: Orchestrated Workflows (n8n and Low-Code Platforms)</h3><p>Multi-agent workflows coordinated through platforms like <a href="https://www.n8n.io">n8n</a> represent a different approach: rather than conversational interaction, these implementations encode complex security processes as visual workflows that orchestrate multiple specialized agents together. Each node in the workflow might represent a different agent capability: one for enrichment, another for severity scoring, and a third for determining escalation paths. The workflow tool provides the orchestration logic, error handling, and state management between them.</p><p>This pattern excels at automating repetitive, multi-step processes that must occur consistently and reliably. 
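</p><p>As a minimal illustration, such a pipeline reduces to a chain of functions with explicit routing logic. The agents are stubbed out and the thresholds are invented for the example:</p>

```python
# Enrich -> score -> route, with stub "agents" and illustrative
# business logic (all values hypothetical).

def enrich(alert: dict) -> dict:
    alert["context"] = {"asset_tier": "low", "prior_false_positives": 3}
    return alert

def score(alert: dict) -> dict:
    tier = alert["context"]["asset_tier"]
    alert["severity"] = 10 if tier == "critical" else 3
    return alert

def route(alert: dict) -> str:
    if alert["severity"] >= 8:
        return "analyst_review"         # ambiguous or high-impact cases
    if alert["context"]["prior_false_positives"] > 0:
        return "auto_close"             # known-noisy pattern
    return "automated_remediation"      # known-safe scenario

result = route(score(enrich({"rule": "aws_console_login"})))
```

<p>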
When a detection fires, an orchestrated workflow can immediately trigger an enrichment agent that gathers context from multiple sources, pass those findings to a specialized analysis agent that assesses severity and identifies similar historical incidents, then route to either automated remediation (for known-safe scenarios) or analyst review (for ambiguous cases) based on explicit business logic. The visual workflow makes this automation auditable and maintainable, and security engineers can see exactly what happens at each step, modify the logic without touching code, and add new agents or tools as capabilities evolve.</p><p>The n8n ecosystem has embraced MCP through community-built nodes that allow workflows to connect to any MCP server as a tool. This means a single workflow can orchestrate agents powered by different LLM providers, call multiple vendor MCP servers for different data sources, and invoke custom MCP servers for internal systems, all through a standardized interface. The workflow platform handles the complexity of chaining these calls together, managing failures and retries, and providing observability into how the automation executes. Teams can start with simple linear workflows (gather context &#8594; analyze &#8594; notify) and gradually add sophistication (parallel enrichment, conditional branching, human approval steps) as they learn what works for their environment.</p><h3>Pattern 3: Enhanced Developer Tools (Claude Code, Cursor)</h3><p>The third pattern integrates MCP directly into AI-powered coding assistants, bringing security context and capabilities into the detection development workflow. Tools like Claude Code or Cursor can connect to MCP servers that expose your SIEM APIs, allowing the AI assistant to query your actual security data, understand your log schemas, and test detection logic without leaving the development environment. 
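</p><p>As a rough illustration, MCP-capable clients are usually pointed at servers through a small JSON configuration following the common <code>mcpServers</code> convention. The server name, command, and environment variable below are placeholders, not any specific product&#8217;s values:</p>

```json
{
  "mcpServers": {
    "your-siem": {
      "command": "python",
      "args": ["-m", "your_siem_mcp_server"],
      "env": { "SIEM_API_TOKEN": "redacted" }
    }
  }
}
```

<p>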
This tight integration between AI assistance and security infrastructure accelerates the entire detection engineering lifecycle.</p><p>Consider the typical detection development process: an engineer focuses on a specific threat model, researches how it manifests in logs, writes detection logic in the SIEM&#8217;s query language, deploys it to a test environment, runs it against historical data, tunes for false positives, and iterates through multiple rounds of refinement. With MCP-enhanced coding assistants, this process compresses significantly. The engineer can describe the threat scenario in natural language, and the assistant queries the SIEM MCP server to retrieve sample logs showing how this activity appears in the environment, generates detection logic tailored to the actual log schema, and validates the syntax before deploying anything to production.</p><p>This pattern particularly benefits teams practicing detection-as-code, where rules are developed in version-controlled repositories rather than directly in SIEM interfaces. The coding assistant understands both the security context (what you&#8217;re trying to detect) and the technical implementation details (your data structure, query syntax, deployment process) by combining its training with real-time access to your security infrastructure through MCP. The result is faster iteration, fewer bugs, and detection rules that account for your environment&#8217;s specific characteristics rather than being adapted from generic examples.</p><div><hr></div><p><em>During the day, I&#8217;m founder &amp; CTO at <a href="https://panther.com/">Panther</a>, where we&#8217;re building an AI SOC platform with <a href="https://github.com/panther-labs/mcp-panther">an open-source MCP server</a> that provides all the benefits described above! 
Whether you&#8217;re orchestrating agents in n8n, building detections in Claude Code, or using our AI copilot, the same MCP interface gives you programmatic access to your security operations. We open-sourced the server because this connective tissue layer should be transparent for teams building custom automation. Check it out on GitHub!</em></p><div><hr></div><h2>What&#8217;s Working (and What&#8217;s Still Rough)</h2><p>Early adopters report significant time savings on routine tasks. Context gathering that previously required 15 minutes of jumping between dashboards now completes in under two minutes through conversational agents. Alert enrichment that analysts manually applied to every suspicious event now runs automatically via orchestrated workflows. Detection engineers who spent hours researching log schemas and testing queries now iterate faster with AI assistants that understand their specific data structure. But the real value might be institutional knowledge capture. Senior analyst investigation techniques encoded into agent prompts and tool design become reproducible workflows that junior analysts can leverage immediately, and runbook knowledge transforms from documentation into executable logic that gets tested every time an agent uses it.</p><p>The most critical insight from successful implementations is that agent effectiveness depends less on model sophistication and more on thoughtful tool design. Narrow, composable tools consistently outperform kitchen-sink capabilities. Rather than building a single &#8220;query_siem&#8221; tool that accepts arbitrary queries, practical implementations create focused tools like &#8220;get_user_login_history&#8221; and &#8220;check_asset_vulnerabilities&#8221; that each do one thing well and compose naturally together. 
Tools that return analysis-ready context rather than raw log lines dramatically improve agent performance, and production implementations start with read-only operations before carefully expanding to write capabilities with explicit approval steps and comprehensive audit logging.</p><p>The rough edges are real and require honest acknowledgment. Agents with access to 20+ tools sometimes struggle with tool selection; context window limits can become binding constraints during lengthy investigations; and error handling remains challenging when agents misinterpret data or make incorrect assumptions. Human review stays critical for high-stakes decisions. Security considerations around credential management, audit logging, and supply chain concerns for community-built MCP servers require the same disciplined approach you&#8217;d apply to any privileged access. These aren&#8217;t showstoppers, but they require thoughtful implementation rather than naive deployment.</p><h2>Moving Forward</h2><p>Custom AI agents represent a fundamental shift in how technical security teams approach automation, moving from consuming standalone AI features to building connective tissue that orchestrates multiple capabilities together. MCP&#8217;s standardized approach to tool integration makes this accessible to security engineers rather than requiring dedicated development teams, and the three patterns we&#8217;ve explored (conversational, orchestrated, developer-focused) provide clear templates for different security workflows.</p><p>The teams building custom agents now are learning which tools pair well, how to design agents that scale human judgment, and which operational patterns work at scale. This experimentation is valuable precisely because security operations are inherently organization-specific. 
Your tools, your infrastructure, and your team structure are all unique, and generic AI features can only take you so far.</p><p>If you&#8217;re considering building custom agents for your security operations, learn from teams who are already running these implementations in production. <a href="https://panther.com/webinar/how-mcp-helps-security-teams-move-faster">Our webinar this week</a> features practitioners who will walk through their architectures, share what surprised them, and demonstrate actual workflows they&#8217;ve automated. Moving forward, the security teams operating at scale won&#8217;t be the ones with the best individual tools, but the ones who orchestrate them most effectively.</p><div><hr></div><p><em>Thanks for reading Detection at Scale. If you found this valuable, please share it with your colleagues who are exploring AI-powered automation in security operations!</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/building-custom-ai-soc-agents-with-mcp?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/p/building-custom-ai-soc-agents-with-mcp?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h4>Related Reading</h4><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;229e2432-b81a-4642-afa1-7a0fefcd455c&quot;,&quot;caption&quot;:&quot;Security operations teams have spent years trying to build the perfect integration layer between their tools and workflows. We've gone from manual API scripts to elaborate SOAR platforms, yet most security analysts still jump between countless tabs and interfaces during investigations. 
While generative AI has reshaped how we interact with data, connecti&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;MCP: Building Your SecOps AI Ecosystem&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther: Transforming security operations at scale with AI-first workflows across detection, triage, and response&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-04-02T13:29:59.951Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47e512e4-2aa6-423c-ac34-60dfbabd4460_1408x768.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/mcp-and-security-operations&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:160394652,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:25,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Cover Photo by <a href="https://unsplash.com/@theshubhamdhage?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Shubham Dhage</a> on <a 
href="https://unsplash.com/photos/a-black-and-white-photo-of-a-bunch-of-cubes-gC_aoAjQl2Q?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[How AI Agent Tools Work: A Practical Guide for SOC Analysts]]></title><description><![CDATA[How tool calling turns natural language questions into actionable security investigations.]]></description><link>https://www.detectionatscale.com/p/ai-agent-tools-soc-analyst-guide</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ai-agent-tools-soc-analyst-guide</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Mon, 10 Nov 2025 14:08:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ad96c62a-8389-4b02-be7a-b448d7191463_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Detection at Scale, providing weekly insights on building AI-powered security operations, from detection-as-code to autonomous triage for practitioners managing threats at cloud scale. Subscribe now to stay up-to-date!</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p>Today, you write SIEM queries. Tomorrow, you&#8217;ll supervise an agent who writes them for you. AI agents represent the ultimate evolution of security automation, building dynamic, nondeterministic investigation paths that traditional playbooks can&#8217;t anticipate.</p><p>Consider triaging a phishing email alert. 
You identify the delivery method, determine who received it, validate whether anyone clicked the link, hunt for signs of credential theft, and track lateral movement if credentials were compromised. The typical response flow involves multiple manual queries across various systems, but with agents, this becomes a series of natural-language questions. But how does the agent know where to look? How does it understand the query syntax for accessing the data to answer the question? How does it access your threat intelligence feeds or ticketing systems for context? The answer is tool calling.</p><p>Tools are structured interfaces that serve as the middleware layer between an agent&#8217;s reasoning and the execution of tasks in your various security platforms. A tool might be as simple as <code>search_alerts</code> with a clear function that accepts time ranges and filter conditions, or as sophisticated as <code>indicator_pivot</code> that chains multiple queries across systems before returning an answer. The agent uses these tool definitions to properly format queries, make correct API calls, and access the appropriate data sources.</p><p>The result is a powerful virtual assistant that can perform everyday security analysis tasks that require interaction with data and tools, quickly triaging alerts, confirming hypotheses, and synthesizing answers across large, complex data sets.</p><p>In this post, we&#8217;ll examine how tools define agent capabilities, how agents decide which tools are most appropriate to call based on the situation, and the constraints you should know while using agents. These concepts will help you become more effective and productive by delegating work to AI agents.</p><div><hr></div><p><em>Quick note: I&#8217;m the Founder/CTO at Panther &#8212; a SOC platform helping security teams scale with AI agents and as-code workflows. In this post, I&#8217;ll use examples of how Panther powers these agent workflows in leading SOCs. 
If you want to see the product behind the ideas, <a href="https://panther.com/product/panther-ai">you can check it out here</a>.</em></p><div><hr></div><h2><strong>How Tools Define What Your Agent Can Actually Do</strong></h2><p>We all have experience using AI to summarize text, spot patterns, or answer questions, thereby expediting analysis. But for agents to carry out the work delegated to them, they need tools to access real-world context or to perform tasks dynamically. Remember, LLMs (Large Language Models) are bound by their training data, and tools serve as a bridge to break free from that isolated knowledge. By providing agents the same access to security tooling as human security analysts, they can execute similar work in the SOC and become a force multiplier on our teams. But what do tools look like in practice? How are they defined? Let&#8217;s explain using an everyday use case: Alert Tuning.</p><p>Every day, security teams triage and resolve alerts that flag high-risk behaviors. As teams process these alerts, they interpret what happened, gather surrounding context, and make a judgment call. When alerts are noise, the analyst determines which part of the rule failed and documents the findings in a ticketing system like Jira to prevent future alerts.</p><p>What would it look like to automate that process with an AI agent? It would need access to read the detection logic, review the created alerts, query log data, and then file a ticket. 
Let&#8217;s see this in action with Panther&#8217;s AI copilot:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!q2bd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!q2bd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png 424w, https://substackcdn.com/image/fetch/$s_!q2bd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png 848w, https://substackcdn.com/image/fetch/$s_!q2bd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png 1272w, https://substackcdn.com/image/fetch/$s_!q2bd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!q2bd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png" width="728" height="445.1798561151079" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c3dae58b-dae9-44ad-a269-67945028e28d_973x595.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:595,&quot;width&quot;:973,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:126337,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/178456902?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!q2bd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png 424w, https://substackcdn.com/image/fetch/$s_!q2bd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png 848w, https://substackcdn.com/image/fetch/$s_!q2bd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png 1272w, https://substackcdn.com/image/fetch/$s_!q2bd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dae58b-dae9-44ad-a269-67945028e28d_973x595.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Let&#8217;s dissect what&#8217;s happening here. The user asks a question about tuning CloudTrail alerts from the last three weeks. The agent calls a tool to list alerts for that timeframe, discovers 18 alerts, then fetches each detection rule to understand why those alerts originated. 
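</p><p>In schematic form, that sequence is a tool-calling loop: the model proposes a tool and arguments, the harness executes it, and the result feeds the next step. The plan and tool implementations below are simulated stand-ins, not Panther&#8217;s actual API:</p>

```python
# Simulated tool-calling loop (hypothetical tool names and canned data).

TOOL_REGISTRY = {
    "list_alerts": lambda args: [
        {"id": i, "rule": "cloudtrail_admin"} for i in range(18)
    ],
    "get_rule": lambda args: {
        "rule": args["rule"], "logic": "event.name == 'AdminAction'"
    },
}

def run_agent(plan):
    """Execute each (tool, args) step and accumulate results as context."""
    context = []
    for tool_name, args in plan:
        result = TOOL_REGISTRY[tool_name](args)
        context.append({"tool": tool_name, "result": result})
    return context

# A plan like the one the copilot produced for the tuning question:
context = run_agent([
    ("list_alerts", {"days": 21}),
    ("get_rule", {"rule": "cloudtrail_admin"}),
])
```

<p>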
Once it has this external context, it reasons about which rules need tuning and which are working correctly, and produces an analysis for how fewer or more accurate alerts could have been generated.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PMA_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PMA_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png 424w, https://substackcdn.com/image/fetch/$s_!PMA_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png 848w, https://substackcdn.com/image/fetch/$s_!PMA_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png 1272w, https://substackcdn.com/image/fetch/$s_!PMA_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PMA_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png" width="728" height="120.72361809045226" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:165,&quot;width&quot;:995,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:36422,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/178456902?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PMA_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png 424w, https://substackcdn.com/image/fetch/$s_!PMA_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png 848w, https://substackcdn.com/image/fetch/$s_!PMA_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png 1272w, https://substackcdn.com/image/fetch/$s_!PMA_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fe063dd-53cb-48e3-8012-e5d87fc6df3a_995x165.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>This agent revealed that most rules appear to be legitimate administrative actions, which is a great observation and creates solid candidates for tuning. 
One particular piece of feedback was identifying a redundant rule that caused duplicate alerts and increased work for the team:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rfWR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rfWR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png 424w, https://substackcdn.com/image/fetch/$s_!rfWR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png 848w, https://substackcdn.com/image/fetch/$s_!rfWR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png 1272w, https://substackcdn.com/image/fetch/$s_!rfWR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rfWR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png" width="728" height="297.38321536905966" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:404,&quot;width&quot;:989,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:109174,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/178456902?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rfWR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png 424w, https://substackcdn.com/image/fetch/$s_!rfWR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png 848w, https://substackcdn.com/image/fetch/$s_!rfWR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png 1272w, https://substackcdn.com/image/fetch/$s_!rfWR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F76f60144-395d-4ac7-ad90-653fd2fce742_989x404.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Tool Definitions</h3><p>Tools accomplish a specific task and contain a description that explains when they should be used and the expected output. This combination of tools, each with clear boundaries and purposes, combines into an agent that performs a specific role on your security team. 
Tools are defined using a name, description, and a set of parameters:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sizj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sizj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png 424w, https://substackcdn.com/image/fetch/$s_!sizj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png 848w, https://substackcdn.com/image/fetch/$s_!sizj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png 1272w, https://substackcdn.com/image/fetch/$s_!sizj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sizj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1047819,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/178456902?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sizj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png 424w, https://substackcdn.com/image/fetch/$s_!sizj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png 848w, https://substackcdn.com/image/fetch/$s_!sizj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png 1272w, https://substackcdn.com/image/fetch/$s_!sizj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe95bca08-e21a-4ee1-87d9-6f50a051dd44_3852x2167.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>These tools are then loaded into the agent&#8217;s system prompt, and once an agent decides to use a tool, it passes parameters to obtain the correct output. For example, when an analyst asks, &#8220;Show me all high alerts from yesterday,&#8221; the agent parses that request, recognizes that <code>list_alerts</code> is the appropriate tool, and constructs a function call with <code>severities=["HIGH"]</code> and appropriate date parameters.</p><p>Understanding how agents decide which tools to use can help ensure they carry out delegated tasks correctly. 
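</p><p><em>As a concrete sketch, here is what a tool definition and the resulting function call might look like. Only <code>list_alerts</code> and <code>severities=["HIGH"]</code> come from the example above; the schema shape and date parameters are illustrative assumptions, not any particular platform&#8217;s API:</em></p>

```python
# Hypothetical sketch of a tool definition an agent framework might load.
# The name/description/parameters pattern mirrors the article; field names
# beyond "list_alerts" and "severities" are illustrative assumptions.
import json
from datetime import date, timedelta

LIST_ALERTS_TOOL = {
    "name": "list_alerts",
    "description": (
        "Retrieve alerts filtered by severity and date range. "
        "Use when the analyst asks to see or count alerts."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "severities": {"type": "array", "items": {"type": "string"}},
            "start_date": {"type": "string", "format": "date"},
            "end_date": {"type": "string", "format": "date"},
        },
        "required": ["start_date", "end_date"],
    },
}

def call_for_yesterdays_high_alerts(today: date) -> dict:
    """The call an agent might construct for 'Show me all high alerts from yesterday'."""
    yesterday = (today - timedelta(days=1)).isoformat()
    return {
        "tool": LIST_ALERTS_TOOL["name"],
        "arguments": {
            "severities": ["HIGH"],
            "start_date": yesterday,
            "end_date": yesterday,
        },
    }

print(json.dumps(call_for_yesterdays_high_alerts(date(2026, 4, 9)), indent=2))
```

<p>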
Let&#8217;s learn how to influence their decisions in the right ways.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Detection at Scale&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Detection at Scale</span></a></p><h2><strong>How Agents Decide Which Tools to Use (And How to Guide Them)</strong></h2><p>There are two ways agents choose which tools to use, and understanding the distinction changes how you approach your role in the process.</p><p>The first is direct invocation, where you use the agent as an execution layer. You say, &#8220;list alerts from failed logins in the last hour,&#8221; and the agent dutifully calls the <code>list_alerts</code> tool with those exact parameters. You&#8217;re the director; the agent is just translating your natural language into the proper tool call.</p><p>The second is agentic invocation, where you give the agent a higher-level goal and it decides which tools to use and in what sequence, based on its reasoning about the information it needs. You say, &#8220;We need to investigate the privilege escalation alert and find any related signs of compromise.&#8221; The agent uses its available tools to check the alert details, look up the user&#8217;s recent activity, compare that activity pattern against historical baselines, and check threat intelligence for the source IP. You&#8217;ve shifted from investigator to supervisor.</p><p>The way agents select tools depends on the clarity of tool declarations, the task you&#8217;ve given them, and any constraints in your instructions. 
When an agent receives a task, it examines its available tools, reads their descriptions to understand what each one does, and builds a mental model of how it might accomplish the goal. It follows reasoning similar to what you would use as an analyst. The difference is that you&#8217;ve learned investigation patterns through experience and training, while the agent learns them through the quality of your prompts and examples.</p><p>The most effective way to teach agents good tool orchestration is to show them what good investigations look like. This is called few-shot prompting, and in the context of cybersecurity agents, it means documenting how you&#8217;d handle a case and relaying that to the agent. Take a phishing investigation: &#8220;When investigating phishing, first check the sender and recipients, then find logs for anyone who clicked embedded links, and create an alert to detect any follow-on activity.&#8221; By giving the agent these example workflows, you&#8217;re teaching it the investigative rhythm and logic that should carry over to similar cases.</p><p>Now that we have covered how tool selection occurs, let&#8217;s dive into the limitations and constraints of tool use and the best methods for keeping agents focused.</p><h2><strong>Tool Constraints: Keeping Agents Focused and Effective</strong></h2><p>When you&#8217;re investigating an alert, you don&#8217;t start by pulling every log from every system for the past month. You scope deliberately. You ask targeted questions, narrow your searches to relevant timeframes and systems, and gather just enough context to reach a conclusion. When you&#8217;re dealing with security logs that can scale into terabytes across dozens of sources, this discipline is essential. 
The same principle applies to agents, but unlike human analysts who develop intuition about reasonable scope through experience, agents need explicit guidance about how to investigate efficiently.</p><p>Here&#8217;s what happens when agent investigations lack proper constraints. An analyst asks, &#8220;Was this authentication successful?&#8221; and the agent, seeing that it has access to a broad set of tools, decides to pull all authentication events for that user across all systems for the past week, then all network activity for those same systems, then all process execution logs from any endpoint that user touched. The agent isn&#8217;t being malicious or careless; it&#8217;s trying to be thorough. But now you&#8217;re waiting five minutes for results that should have taken 30 seconds, you&#8217;re spending tokens processing megabytes of irrelevant log data, and the agent&#8217;s working memory is cluttered with extraneous information that obscures the actual answer.</p><p>The underlying issue is that each tool call adds conversational turns to the agent&#8217;s reasoning process, and more turns create more opportunities for attention drift and degraded reasoning quality. An agent that makes 15 tool calls to answer a simple question isn&#8217;t just slow and expensive; it&#8217;s more likely to lose track of the original question or weigh irrelevant information too heavily in its final answer. <strong>The goal isn&#8217;t to prevent agents from using multiple tools when necessary; it&#8217;s to ensure each tool call is purposeful and moves the investigation forward rather than sideways.</strong></p><p>The fix is understanding that agent focus comes from two sources: well-designed tool definitions and clear guidance in your prompts. If you&#8217;re building tools yourself, make them specific to investigation patterns rather than generic data access. 
Instead of a single broad <code>search_logs</code> tool, create focused tools like <code>check_authentication_outcome</code> that accepts a user, timestamp, and source system, and returns only the success or failure status. <strong>Tools with clear, narrow purposes naturally guide agents toward efficient investigations because agents don&#8217;t have to make complex decisions about how to scope their queries.</strong></p><p>The other lever you control is the instructions and examples you provide. Be explicit about investigation scope: &#8220;Check only the past hour unless you find a suspicious pattern,&#8221; or &#8220;Start with authentication logs, only expand to network logs if you see evidence of lateral movement.&#8221; This is where AI playbooks become essential, representing the evolution from manual step-by-step procedures to encoded instructions that agents follow autonomously. Traditional playbooks told humans which buttons to click and which queries to run. AI playbooks encode the reasoning behind those decisions so agents can adapt the investigation to what they find.</p><p>For impossible travel alerts, an AI playbook might specify: &#8220;Compare authentication locations and timing. If locations are more than ~500 miles apart and there is less than 1 hour between them, check for concurrent sessions. Only query network logs if you find evidence of account takeover.&#8221; You&#8217;re teaching the agent the same investigation discipline you&#8217;d teach a junior analyst, but you have to be more explicit because the agent can&#8217;t read between the lines or apply common sense about what &#8220;reasonable scope&#8221; means. 
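</p><p><em>A narrowly scoped tool like the <code>check_authentication_outcome</code> described above can be sketched as follows. The in-memory log store and the five-minute default window are illustrative assumptions standing in for a real log backend:</em></p>

```python
# Sketch of a narrowly scoped tool: it answers one question and bounds its own
# search window, so the agent cannot accidentally pull a week of logs.
from datetime import datetime, timedelta

# Illustrative stand-in for the real log backend.
AUTH_LOGS = [
    {"user": "alice", "time": datetime(2026, 4, 8, 9, 15), "system": "okta", "outcome": "SUCCESS"},
    {"user": "bob", "time": datetime(2026, 4, 8, 9, 16), "system": "okta", "outcome": "FAILURE"},
]

def check_authentication_outcome(user, timestamp, source_system, window_minutes=5):
    """Return only the outcome of the matching auth event, or None if not found."""
    lo = timestamp - timedelta(minutes=window_minutes)
    hi = timestamp + timedelta(minutes=window_minutes)
    for event in AUTH_LOGS:
        if event["user"] == user and event["system"] == source_system and lo <= event["time"] <= hi:
            return event["outcome"]
    # Nothing in scope; widening the search is an explicit, separate decision.
    return None

print(check_authentication_outcome("alice", datetime(2026, 4, 8, 9, 15), "okta"))
```

<p>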
When agents have clear guidance about how to use tools efficiently, you get faster investigations, lower costs, and more reliable conclusions because the agent isn&#8217;t drowning in its own data collection.</p><h2><strong>From Writing Queries to Harnessing AI Capabilities</strong></h2><p>The shift from performing investigations manually to supervising agents is about the analyst&#8217;s job evolving to a higher level of leverage. </p><p>Understanding tools changes what it means to be effective in an AI-first SOC, because you&#8217;re no longer executing investigations end-to-end; you&#8217;re designing the guidance agents use. When you write an AI playbook that encodes how to investigate impossible travel alerts, you&#8217;re creating reusable investigative logic that handles hundreds of similar cases without your direct involvement. This is the fundamental skill that separates analysts who thrive with AI agents from those who struggle with them.</p><p><strong>Start by picking one investigation workflow you handle repeatedly, document it as if you&#8217;re training a junior analyst, and encode it as guidance for your agent.</strong> Be explicit about which tools to use, when to expand scope, and what findings should trigger deeper investigation. Test it, refine it based on how the agent actually performs, and you&#8217;ll quickly develop intuition for what makes agents effective versus what leads them astray. The analysts who master this are learning to teach agents how to investigate effectively rather than just investigating themselves. That&#8217;s the skill that matters going forward, and it starts with understanding that tools are the interface between what agents can reason about and what they can actually do in your environment.</p><div><hr></div><p>Thank you for reading! 
To get more blogs like this, subscribe below.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><p><em>If you are implementing the AI SOC workflows mentioned in this post but don&#8217;t want to build it from scratch, check out <a href="https://panther.com/">Panther</a>! We provide a highly scalable SIEM with AI agents for autonomous triage, threat hunting, and natural language search. Security teams using Panther triage 80% of their alerts autonomously while keeping analysts focused on complex cases that require human judgment. Reach out<a href="https://panther.com/request-a-demo"> to discuss bringing AI agents into your security operations.</a>  You can also send me a DM below. Thanks!</em></p><div class="directMessage button" data-attrs="{&quot;userId&quot;:85379436,&quot;userName&quot;:&quot;Jack Naglieri&quot;,&quot;canDm&quot;:null,&quot;dmUpgradeOptions&quot;:null,&quot;isEditorNode&quot;:true}" data-component-name="DirectMessageToDOM"></div>]]></content:encoded></item><item><title><![CDATA[D@S #68 - Building Production-Ready AI Agents in Security Operations]]></title><description><![CDATA[George Warbacher on navigating AI hype, building specialized agents from scratch, and why the SOAR market is facing disruption]]></description><link>https://www.detectionatscale.com/p/ep-68-warbacher-building-production-ready-ai-agents</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ep-68-warbacher-building-production-ready-ai-agents</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Thu, 30 Oct 2025 13:47:29 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/177490982/05288b70372fbe61c8f8ebf01d8e87b6.mp3" length="0" 
type="audio/mpeg"/><content:encoded><![CDATA[<p>There&#8217;s a distinct gap between reading about AI agents in security operations and building them for production use. The difference between spinning up a LangChain demo locally and deploying durable, reliable agents that your team depends on daily is vast, and it&#8217;s where most organizations struggle to translate AI enthusiasm into operational value.</p><p><strong>George Warbacher,</strong> Head of Security Operations at Live Oak Bank, has spent the past year bridging that gap. His journey from tinkering with Cursor and Claude Code to building production agents for his SecOps team reveals something crucial about where AI is genuinely transforming security work versus where it remains speculative. After months of late nights building agents from scratch, George developed a refined perspective on what&#8217;s hype and what&#8217;s real when it comes to AI in the SOC.</p><p>The conversation touches on everything from the technical challenges of managing agent context and memory, to the broader implications of natural language interfaces replacing query language expertise, to why George believes SOAR platforms face disruption. 
Most importantly, it illuminates a pragmatic path forward for security teams looking to adopt AI without falling prey to vendor hype or unrealistic expectations about automation replacing human judgment.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><h3><strong>Key Takeaways</strong></h3><ol><li><p><strong>The Real Power of AI in SecOps is Natural Language Investigation</strong>: The most immediate operational value is enabling analysts to investigate alerts using natural language instead of mastering specific query languages, platform APIs, and tool nuances. This fundamentally lowers the barrier to entry and accelerates investigations without requiring deep platform expertise.</p></li><li><p><strong>Building Production Agents Requires Engineering Rigor Beyond Demos</strong>: Creating agents locally is straightforward, but making them durable enough for production use demands solving challenges like retry logic, failbacks, context management, and multi-user execution that most tutorials skip entirely. 
This gap explains why many organizations struggle to move beyond proofs of concept.</p></li><li><p><strong>SOAR Platforms Face Disruption from Natural Language Automation</strong>: Just as Cursor and Claude Code made building software accessible through conversation rather than coding mastery, AI agents will make security automation more accessible by replacing static playbooks with dynamic, conversational workflows that junior analysts can build and modify without extensive programming knowledge.</p></li><li><p><strong>The Analyst Role is Transforming, Not Disappearing</strong>: AI will shift the role from alert analysis toward investigation and threat hunting rather than eliminating security analyst positions. Agents will increasingly handle tier 1 work, while human analysts focus on the complex investigative work that emerges from agent output rather than starting from raw alerts.</p></li><li><p><strong>MCP Creates the Integration Layer Security Teams Need</strong>: The Model Context Protocol represents a significant breakthrough for security operations by enabling agents to interact with security tools through standardized interfaces. This solves the longstanding challenge of creating a true single pane of glass by letting agents orchestrate actions across disparate security platforms through natural conversation.</p></li></ol><div><hr></div><p><em>The transformation George describes is already happening in production at Panther, where our AI agent automatically triages alerts by gathering comprehensive context from your data lake, analyzing historical patterns, and presenting intelligent summaries in minutes instead of the 30+ manual minutes traditional workflows require. 
<a href="https://panther.com/product/panther-ai">Learn more about Panther AI</a> and our <a href="https://github.com/panther-labs/mcp-panther">MCP server implementation</a>.</em></p><div><hr></div><h3><strong>Continued Reading</strong></h3><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;7d06081a-b326-47c9-a969-b48145d13c11&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale&#8212;a weekly newsletter covering security monitoring, cloud infrastructure, the latest breaches, and more. Enjoy!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Agentic SIEM&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther | Building AI agents in security operations&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-01-21T14:06:49.285Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4962c-afed-4aa8-89a6-b532f2e52ecb_1408x768.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/the-agentic-siem&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:155046728,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:16,&quot;comment_count&quot;:1,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at 
Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b955c58c-2080-4500-90b7-144b0c4119d1&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale&#8212;a weekly newsletter diving into security monitoring, generative AI, and more! If you enjoy reading Detection at Scale, share it with friends!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Investigative AI Agents: Saving Time during Triage and Analysis&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther | Building AI agents in security operations&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-02-26T14:18:44.264Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f410c60-f15d-4bf6-b532-04a661fba09e_1408x768.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/investigative-ai-agents&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:157962162,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:1,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at 
Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;33407452-e636-4d40-8974-f902bf4dad9c&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale&#8212;a weekly newsletter diving into SIEM, generative AI, security monitoring, and more. Enjoy!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;How AI Agents Transform Alert Triage&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther | Building AI agents in security operations&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-04-22T15:07:24.854Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/18bee53d-856c-421e-ab09-ece212d67014_1536x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/how-ai-agents-transform-alert-triage&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:161844147,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:7,&quot;comment_count&quot;:1,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at 
Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[From SIEM to AI SOC: The Agent-Driven Future]]></title><description><![CDATA[How AI agents will transform security operations from alert-driven chaos to intelligent, autonomous analysis that finally scales to fit our needs.]]></description><link>https://www.detectionatscale.com/p/siem-to-ai-soc-the-agent-driven-future</link><guid isPermaLink="false">https://www.detectionatscale.com/p/siem-to-ai-soc-the-agent-driven-future</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Mon, 29 Sep 2025 13:00:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!A2zc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Detection at Scale, a weekly newsletter for scaling security operations teams, focused on best practices applying AI agents in the SOC. 
If you enjoy reading Detection at Scale and find it helpful, <strong>please share it with your network</strong>!</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share Detection at Scale&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share Detection at Scale</span></a></p><div><hr></div><p>The SOC has a fundamental scaling problem. Not only are there too many alerts to monitor, but performing the job effectively requires deep technical knowledge spanning operating systems, networks, cloud environments, attacker tactics, and the latest intelligence. Working in the SOC is also stressful, error-prone, and demands close attention to detail. Querying an incorrect timeframe, missing one event, or failing to check a related log source can wildly change the course of an investigation or expose the organization to significant risk. Precision and speed matter in security operations.</p><p>Over the years, security teams have tried various solutions to these problems. We adopted detection-as-code, built deterministic response automation to contain incidents and deeply understand alerts, and adopted data lakes to handle new scale needs.</p><p><strong>AI agents introduce a new category of automation that can finally address the fundamental scaling constraints facing security operations. 
This capability shift demands changes in SIEM architecture, team expectations, and the infrastructure foundation enabling effective AI-powered security operations.</strong></p><h2>Pattern Matching</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!A2zc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!A2zc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png 424w, https://substackcdn.com/image/fetch/$s_!A2zc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png 848w, https://substackcdn.com/image/fetch/$s_!A2zc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png 1272w, https://substackcdn.com/image/fetch/$s_!A2zc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!A2zc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png" width="1456" height="967" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png&quot;,&quot;srcNoWatermark&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9069f236-7709-44cf-bb11-136b38b008b7_3840x2551.png&quot;,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:967,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:269233,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/174796122?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9069f236-7709-44cf-bb11-136b38b008b7_3840x2551.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!A2zc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png 424w, https://substackcdn.com/image/fetch/$s_!A2zc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png 848w, https://substackcdn.com/image/fetch/$s_!A2zc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png 1272w, https://substackcdn.com/image/fetch/$s_!A2zc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd498124-67c3-4d4b-91e5-42d92b4e6dfc_3840x2551.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>During the last two decades in cybersecurity, we witnessed several fundamental shifts&#8212;from the emergence of modern SIEMs in the early 2000s, through the introduction of EDRs in the mid-2010s and XDRs in the late 2010s, to the adoption of data lake architectures and SOAR platforms, and now the integration of AI in a new context. <strong>Yet one constant remains: humans must ultimately determine &#8220;good&#8221; versus &#8220;bad&#8221; in alerting and security operations.</strong> Why is this? 
Because attempting to automate every possible attack permutation would result in inaccurate, impossibly complex, and unmaintainable security code.</p><p>Security operations teams are very sophisticated pattern matchers. When alerts come in, like when our infrastructure team adds an IAM role that can be assumed from an AWS account <em>outside</em> of our organization, we typically know exactly why (because we just spoke with them about it) or we have a gut feeling that &#8220;oh, this is bad.&#8221; There are too many possible conditions to enumerate ahead of time to suppress that alert, so we end up &#8220;flooded with alerts,&#8221; the canonical problem in security operations. But what else is a very sophisticated pattern matcher? A large language model (LLM).</p><p><strong>Generative AI has high potential to automate most routine security operations tasks because LLMs can process vast amounts of context, instructions, tools, and data, then produce a complete analysis</strong>. When we prompt a model, every additional word (token) guides its attention to the right place. This means we can break from rigid, traditional automation and begin delegating novel tasks to AI agents, which are LLMs with carefully crafted prompts that specify personas and goals. <strong>As long as the model is given the appropriate depth and variety of context, it can perform nearly as well as a human analyst</strong>. But if it&#8217;s missing key business or security context, it will perform worse, and its output risks being dismissed as hallucination.</p><p>Understanding that LLMs excel at sophisticated pattern matching opens the door to restructuring security operations workflows, moving from single-point-of-failure bottlenecks to AI agents that can operate across the entire security lifecycle.</p><h2>The Multi-Agent SOC</h2><p>The security team&#8217;s free time is a fleeting resource.
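</p><p>To make the pattern-matching point concrete, the IAM example above is the kind of condition a rule can encode, while the judgment around it cannot. Below is a minimal detection-as-code sketch; the organization account IDs and the CloudTrail field handling are illustrative assumptions, not a real ruleset.</p>

```python
import json

# Assumed set of AWS account IDs belonging to our organization (illustrative).
ORG_ACCOUNTS = {"111111111111", "222222222222"}

def rule(event: dict) -> bool:
    """Fire when an IAM role's trust policy allows a principal outside the org."""
    if event.get("eventName") not in ("CreateRole", "UpdateAssumeRolePolicy"):
        return False
    policy = json.loads(
        event.get("requestParameters", {}).get("assumeRolePolicyDocument", "{}")
    )
    for stmt in policy.get("Statement", []):
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        for arn in principals:
            # ARNs look like arn:aws:iam::123456789012:root; bare IDs also appear.
            account_id = arn.split(":")[4] if arn.count(":") >= 5 else arn
            if account_id not in ORG_ACCOUNTS:
                return True  # trust extends outside the organization
    return False
```

<p>The rule fires reliably on the condition, but deciding whether that external trust was expected still requires the human (or agent) context described above.</p><p>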
Between on-call rotations, incident response fire drills, and the constant pressure to stay current with new types of threats, analysts are burning out at alarming rates. The real challenge is the cognitive load of making high-stakes decisions under pressure, often with incomplete information. When you factor in the need for continuous learning, it becomes clear that throwing more people at the problem isn&#8217;t sustainable. </p><p><strong>We need to fundamentally change how security operations work gets distributed between humans and machines, allowing analysts to focus on the strategic, creative problem-solving that humans excel at while delegating the repetitive, context-heavy tasks to AI agents.</strong></p><p>There are several opportunities to apply agents across the lifecycle of security operations:</p><ol><li><p><strong>Threat Hunting and Modeling</strong>: What&#8217;s important for our organization to protect? Do we have the data to back that up? Can we find the indicators of an attack?</p></li><li><p><strong>Detection Creation</strong>: What behaviors do we need to track? Which ones deserve an on-call page? What are our security-significant events?</p></li><li><p><strong>Incident Response</strong>: What do we do once we get paged? How do we assess, react, and recover?</p></li></ol><p><strong>Let&#8217;s start with threat modeling</strong>&nbsp;agents, which dramatically accelerate querying and understanding the vast security data we spend so much time and money collecting. These agents can analyze your data using natural language and perform research on particular tactics/techniques, search for evidence of indicators, or explore the environment to discover high-priority assets, baseline behavior, and map potential attack paths.
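</p><p>To sketch what &#8220;analyze your data using natural language&#8221; bottoms out in: the agent needs tools that expand a hunting hypothesis into queries against the log store. A minimal, hypothetical tool follows; the cloudtrail table, columns, and SQL dialect are assumptions for illustration.</p>

```python
# Hypothetical hunting tool an agent could invoke; the cloudtrail table and
# column names are illustrative, not a real schema.
HUNT_TEMPLATES = {
    "credential_access": (
        "SELECT eventTime, userIdentity, sourceIPAddress FROM cloudtrail "
        "WHERE eventName IN ('GetSecretValue', 'GetPasswordData')"
    ),
    "persistence": (
        "SELECT eventTime, userIdentity, requestParameters FROM cloudtrail "
        "WHERE eventName IN ('CreateAccessKey', 'CreateUser', 'CreateLoginProfile')"
    ),
}

def build_hunt_query(tactic: str, lookback_days: int = 30) -> str:
    """Expand a hunting hypothesis (tactic) into SQL the agent can run."""
    if tactic not in HUNT_TEMPLATES:
        raise ValueError(f"no hunt template for tactic: {tactic}")
    return (
        f"{HUNT_TEMPLATES[tactic]} "
        f"AND eventTime > current_date - interval '{lookback_days}' day"
    )
```

<p>Deciding <em>which</em> tactic to hunt and interpreting the results is where the LLM adds value; the tool itself stays deterministic and auditable.</p><p>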
Traditional threat modeling exercises happen quarterly at best and quickly become stale, but an AI agent can help maintain a living threat model that updates as your infrastructure changes, new vulnerabilities are disclosed, and the threat landscape shifts.</p><p>For <strong>Detection Creation</strong>, AI agents can bridge the knowledge gap between our security team&#8217;s monitoring needs and how those needs get implemented as actionable rule logic. Rather than spending months learning specialized syntax and then weeks translating business logic into detection rules, agents can quickly assess your available data and unique environment characteristics and generate tailored detections that just work. This process quickly becomes a flywheel: the more high-quality rules are created and optimized, the easier net-new creation becomes. Additionally, incident response and triage agents can feed learnings back into detection creation agents.</p><p>Finally, there&#8217;s <strong>Incident Response</strong>, which often takes the most time and creates the most stress. Most security teams that haven&#8217;t yet applied AI in this area create a playbook for a given type of alert, document a series of steps and if/else logic for handling it, then apply the playbook to one or many rules. The problem is that every incident has a unique context that doesn&#8217;t <em>always </em>fit neatly into predetermined logic trees.</p><p>AI agents can fundamentally change this by acting as triage assistants that intelligently combine the technical details of an alert with the broader business context and the leads identified during triage. Imagine an agent that automatically correlates a suspicious login attempt with recent employee departures and historical attack patterns against your industry.
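</p><p>That correlation step can be sketched as context assembly that happens before the model is ever invoked. The HR feed, intel format, and field names below are hypothetical:</p>

```python
from datetime import date

def build_triage_context(alert: dict, departures: dict, intel_hits: list) -> dict:
    """Bundle an alert with business and threat context for a triage agent."""
    user = alert["user"]
    left_on = departures.get(user)  # hypothetical HR feed: user -> departure date
    return {
        "alert": alert,
        "user_departed": left_on is not None and left_on <= date.today(),
        "departure_date": str(left_on) if left_on else None,
        "related_intel": [hit for hit in intel_hits if hit["ip"] == alert["src_ip"]],
    }
```

<p>An agent prompted with a bundle like this can reason about departed users and known-bad infrastructure without re-querying every source itself.</p><p>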
Instead of 30 minutes of manual research, the agent delivers a rich briefing in only 2-3 minutes: <em>&#8220;This login attempt from Romania targeting Sarah&#8217;s account is concerning because she left the company last week, her access should have been disabled, and we&#8217;ve seen similar patterns in recent attacks against financial services companies.&#8221;</em> The agent doesn&#8217;t always make the final decision, but it arms the human analyst with the context needed to make an informed judgment quickly.</p><h2>The Platform Shift</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!erGd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!erGd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png 424w, https://substackcdn.com/image/fetch/$s_!erGd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png 848w, https://substackcdn.com/image/fetch/$s_!erGd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png 1272w, https://substackcdn.com/image/fetch/$s_!erGd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!erGd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png" width="1456" height="389" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png&quot;,&quot;srcNoWatermark&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2c61bbe4-618b-4fc7-a421-80a3c53f579f_3840x1026.png&quot;,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:389,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:117191,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.detectionatscale.com/i/174796122?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c61bbe4-618b-4fc7-a421-80a3c53f579f_3840x1026.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!erGd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png 424w, https://substackcdn.com/image/fetch/$s_!erGd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png 848w, https://substackcdn.com/image/fetch/$s_!erGd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png 1272w, 
https://substackcdn.com/image/fetch/$s_!erGd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc03417b5-8e18-4981-8403-db4774f73b1d_3840x1026.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Traditional SIEM platforms weren&#8217;t designed for the kind of rich, contextual analysis that AI agents require</strong>. Legacy systems store data in proprietary formats, limit access through rigid query interfaces, and charge prohibitive costs for the data volumes that effective AI agents need to consume. 
The shift toward data lake architectures creates the foundation that AI agents need to be truly effective. When security data lives in open formats in data lakes, agents can access vast amounts of historical data without the performance bottlenecks or cost penalties of traditional systems, enabling them to analyze years of data to understand normal patterns, seasonal variations, and subtle attack progressions that would be invisible with limited data retention.</p><p><strong>The &#8220;connective tissue&#8221; enabling AI SOC evolution extends far beyond data architecture, and SIEM platforms will likely evolve to fulfill this critical role</strong>. This requires robust APIs for agent interactions with security tools, comprehensive data catalog management for handling diverse log formats, and sophisticated identity and access controls that enable agents to operate securely throughout your environment. Security data pipelines have become essential&#8212;not merely for cost optimization, but for ensuring AI agents can access clean, enriched, and properly formatted data.</p><p>Most importantly, the infrastructure needs to support &#8220;context engineering,&#8221; the practice of systematically providing AI agents with the business context, threat intelligence, and environmental knowledge they need to produce informed analysis. This means maintaining knowledge bases about your assets, business processes, risk appetite, and operational procedures in formats AI agents can consume and reason about.</p><h2>Evolution Determines AI Success</h2><p>While humans have remained the final arbiters of security alerting for decades, AI agents can now shoulder the exhausting work of context gathering, pattern analysis, and routine decision-making that overwhelms security teams. The organizations successfully deploying AI agents are embracing architectures purposefully designed for this new paradigm.
Data lakes, open formats, flexible APIs, and a robust data fabric are requirements for survival in the modern security landscape.</p><p>For security teams ready to embrace this evolution, the potential for step-function improvement is tangible and immediate. The fundamental scaling problem that has plagued SOCs&#8212;too many alerts, too much context to gather, too few analysts with too little time&#8212;finally has a viable solution to build upon. The question isn&#8217;t whether this transformation will happen, but whether your current SIEM platform can power it or will become an obstacle to progress.</p><p>AI agents can transform every aspect of the security operations lifecycle. The agent-driven SOC is being deployed today by forward-thinking teams that understand the power of combining human expertise with AI capabilities. The journey from traditional SIEM to AI-powered security operations starts with choosing infrastructure that can truly support this vision&#8212;and the time to begin is now.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>Thanks for reading! I&#8217;m the Founder and CTO at Panther, building intelligent AI agents and security data infrastructure to automate and accelerate core security operations workflows. If you want to learn more about how Panther incorporates security pipelines, open data lakes, signals/detection layer, and AI agents into its platform, </em><a href="https://panther.com/">check out our demo</a><em> or book a meeting with me below!
Panther is trusted by leading security teams like Coinbase, Asana, Discord, and more.</em></p><div class="directMessage button" data-attrs="{&quot;userId&quot;:85379436,&quot;userName&quot;:&quot;Jack Naglieri&quot;,&quot;canDm&quot;:null,&quot;dmUpgradeOptions&quot;:null,&quot;isEditorNode&quot;:true}" data-component-name="DirectMessageToDOM"></div><div><hr></div><h3>Continued Reading</h3><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;7d6ecc8c-985c-4c00-9570-4d0585e5dcf2&quot;,&quot;caption&quot;:&quot;This generation of security analytics tools is based on a decoupled data architecture combining cloud storage, open data formats, and highly performant distributed query engines. While this provides improved performance, scalability, and new pricing models for security teams, it comes with a nuance in usability and technical understanding. This blog exp&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Transition from Monolithic SIEMs to Data Lakes for Security Monitoring&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther | Building AI agents in security 
operations&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2023-10-23T13:55:53.618Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!DLc6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde309da3-9d85-41e2-baef-d52326a8a79c_600x300.gif&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/the-transition-from-monolithic-siems&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:138021992,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:10,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;c1dad86e-bced-4e54-b1b1-d905657a3ea6&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale&#8212;a weekly newsletter covering security monitoring, cloud infrastructure, the latest breaches, and more. 
Enjoy!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Agentic SIEM&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder, CTO @ Panther | Building AI agents in security operations&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-01-21T14:06:49.285Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4962c-afed-4aa8-89a6-b532f2e52ecb_1408x768.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/the-agentic-siem&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:155046728,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:16,&quot;comment_count&quot;:1,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[D@S #67: The Crawl, Walk, Run of Agentic Security Operations with Stephen Gubenia]]></title><description><![CDATA[Steven Gubenia (from Cisco Meraki) shares his framework and lessons learned for implementing AI agents in security 
operations.]]></description><link>https://www.detectionatscale.com/p/ep-67-gubenia-crawl-walk-run-ai-agents</link><guid isPermaLink="false">https://www.detectionatscale.com/p/ep-67-gubenia-crawl-walk-run-ai-agents</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Wed, 24 Sep 2025 13:34:18 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/174362131/4c84310cc7aa9c322d0376c8fbe4e6ea.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this episode of Detection at Scale, Steven Gubenia, Head of Detection Engineering for Threat Response at Cisco Meraki, shares his practical framework for implementing AI agents in security operations. With deep experience from one-man security teams to leading detection engineering at scale, Steven brings a refreshingly pragmatic perspective on how organizations can thoughtfully integrate AI into their security workflows. Steven now leads initiatives that bridge traditional SOAR capabilities with modern agentic workflows, emphasizing that AI enhancement requires solid foundational processes to avoid the "garbage in, garbage out" trap.</p><p><strong>The conversation centers around Steven's "crawl, walk, run" methodology for implementing AI in security operations</strong>, moving from simple data enhancement to autonomous decision-making with appropriate human oversight. He discusses the evolution of human-in-the-loop strategies, explaining how teams can build trust in AI agents over time while maintaining proper audit trails and governance. 
The discussion explores practical implementation details around enrichment agents, triage automation, and containment workflows, highlighting the importance of scoped permissions and security considerations when deploying AI agents with real operational impact.</p><p>Steven also addresses the organizational side of AI adoption, emphasizing the critical need for top-down buy-in, comprehensive training programs, and messaging that focuses on individual productivity benefits rather than cost-cutting narratives. Throughout the discussion, Steven reinforces that while AI won't replace security professionals, those who learn to use AI effectively will significantly out-compete those who don't.</p><h3><strong>Key Takeaways</strong></h3><ul><li><p><strong>The Crawl, Walk, Run Framework Works</strong>: Steven's three-phase approach provides a practical roadmap&#8212;start with data enrichment agents, progress to reasoning models (triage), then move to action-taking agents (containment). This graduated approach builds organizational trust while delivering immediate productivity gains.</p></li><li><p><strong>Human-in-the-Loop</strong>: Rather than reviewing every agent decision forever, successful implementations start with intensive human oversight, gradually shifting to audit-based review as confidence builds. </p></li><li><p><strong>Detection Engineering Becomes Mission-Critical</strong>: As AI enables more granular, environment-specific detection logic, detection engineering skills become more valuable, not less. Organizations will shift from generic rule sets to highly customized detection logic tailored to their threat landscape and infrastructure.</p></li><li><p><strong>Organizational Change Requires Individual Value Proposition</strong>: Top-down AI mandates fail without proper training and clear individual benefits. 
Successful adoption focuses on how AI eliminates tedious work and enables analysts to focus on high-value activities that advance their careers.</p></li><li><p><strong>Security Considerations Are Engineering Problems</strong>: Concerns about AI agent security, MCP server trust, and permission scoping are solvable through proper engineering practices, vendor management processes, and incremental deployment strategies, not barriers to adoption.</p></li><li><p><strong>The Productivity Multiplier Reality</strong>: Steven&#8217;s prediction that AI-proficient security professionals will out-compete their peers isn&#8217;t hyperbole&#8212;it&#8217;s already happening. Entry-level positions are evolving, but professionals who master AI augmentation will have significant competitive advantages in the job market.</p></li></ul><h3><strong>Related Reading</strong></h3><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;a6e4e878-b702-4676-aff2-6005a0a9c8ff&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale&#8212;a weekly newsletter diving into SIEM, generative AI, cloud-centric security monitoring, and more. Enjoy! If you enjoy reading Detection at Scale, please share!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The AI-Powered Detection Engineer &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder &amp; CTO @ Panther.com, solving high-scale security monitoring.
Former Security @ Airbnb, Yahoo, and Verisign.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-03-10T13:43:26.025Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5770d8ef-1cdf-4f80-a837-84105c05d09f_1408x768.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/the-ai-powered-detection-engineer&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:158747065,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:9,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;0cf16bda-f2c0-4d43-a87b-ae0b1e141ff5&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale, a weekly newsletter for SecOps practitioners covering detection engineering, cloud infrastructure, the latest vulns/breaches, and more. Enjoy!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;LLM Fundamentals for SecOps Teams&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder &amp; CTO @ Panther.com, solving high-scale security monitoring. 
Former Security @ Airbnb, Yahoo, and Verisign.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-09-03T13:05:52.227Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd168435-0810-4da1-8d3f-519a95823328_1024x1024.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/llm-fundamentals&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:148387931,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:8,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[The Data Your AI-Powered SOC Needs]]></title><description><![CDATA[Context Engineering for Automated Security Triage]]></description><link>https://www.detectionatscale.com/p/context-engineering-ai-security-operations</link><guid isPermaLink="false">https://www.detectionatscale.com/p/context-engineering-ai-security-operations</guid><dc:creator><![CDATA[Jack Naglieri]]></dc:creator><pubDate>Mon, 22 Sep 2025 13:16:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/aa8b63ab-6099-4282-9eb9-2dcab0e227ca_1408x768.jpeg" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to Detection at Scale, a weekly newsletter for scaling and sustaining security operations teams. We focus on effectively utilizing AI agents in the SOC with the best practices on context, prompts, and tools like MCP. If you enjoy reading Detection at Scale, please share it with your friends!</em></p><div><hr></div><p>Every SOC analyst knows the frustration of data gathering during their rotation: a suspicious login alert fires, but the investigation becomes a scavenger hunt across multiple log sources. Is this user typically remote? Has this IP been flagged before in an incident? Are there related alerts from the same timeframe? Every minute that goes by could mean potential escalation or another false positive, burning valuable analyst time.</p><p>AI SOC analysts promise to solve this problem by automating the tedious work of alert triage and investigation, but <strong>AI agents can only be as good as the context they are provided</strong>. Feed them isolated alerts without the proper background context, and you'll get shallow analysis. Give them the best data at the right time, and they can reason through complex security scenarios with the depth of your best analysts. <strong>The question is: What data does it take for AI agents to thrive in the SOC?</strong></p><p>The answer lies in "context engineering"&#8212;the art and science of providing AI systems with the right source and depth of information needed to solve complex problems. In security operations, this means going beyond simple alert forwarding to building rich, contextual intelligence that helps AI agents understand <em>what</em>&nbsp;happened, <em>why</em> it matters, <em>who</em> was involved, and <em>how</em> it fits into your organization's unique threat model. 
Effective AI-driven security operations require four critical layers of contextual data across alerts, identity, assets, and enrichment, helping AI agents understand individual security events and how these events fit into your organization's broader risk landscape.</p><p>This blog post will explore the data layers that turn AI from a basic alert processor into an intelligent security analyst. We'll examine how historical alert patterns provide crucial learning opportunities, why identity and asset context separate real threats from false positives, and how enrichment data helps AI agents make the nuanced decisions that effective security operations demand. </p><p>Most security teams are already collecting this data, so let's integrate it into AI-driven security operations that work to our advantage.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/p/context-engineering-ai-security-operations?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/p/context-engineering-ai-security-operations?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2>The Context Challenge</h2><p>As teams introduce AI SOC analysts to automate triage and investigation workflows, the question becomes: how do we ensure these agents have access to the rich contextual intelligence that makes them as informed as their human counterparts?</p><p>Consider how your analysts approach a suspicious login alert. They don't just look at the raw events; they start building context by checking the user's recent activity patterns, cross-referencing the source IP against threat intelligence feeds, examining similar alerts from the same time window, and factoring in insider knowledge about ongoing projects. 
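</p><p><em>The context-building steps above can be sketched in code. This is a minimal illustration with hypothetical stub lookups, not any particular product's API:</em></p>

```python
# Minimal sketch of automated context gathering for a suspicious login
# alert. Each lookup is a hypothetical stand-in for a real log store,
# threat intel feed, or alert index.

def recent_user_activity(user):
    # Stub: would query the user's recent activity baseline.
    return {"typical_geo": "US", "typical_hours": list(range(9, 18))}

def threat_intel(ip):
    # Stub: would query reputation feeds for this source IP.
    return {"flagged": ip.startswith("203.0.113.")}

def similar_alerts(alert, window_hours=24):
    # Stub: would query the alert index for the same rule and entities.
    return []

def build_context(alert):
    """Assemble the same context a human analyst gathers by hand."""
    return {
        "user_baseline": recent_user_activity(alert["user"]),
        "ip_intel": threat_intel(alert["src_ip"]),
        "related_alerts": similar_alerts(alert),
    }

alert = {"rule": "suspicious_login", "user": "jdoe", "src_ip": "203.0.113.7"}
context = build_context(alert)
```

<p>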
This contextual reasoning separates a 30-second false positive dismissal from a 30-minute investigation that goes nowhere.</p><p><strong>Agent failures are usually context failures, not model failures.</strong> Poor tooling integration, limited data access, and insufficient breadth in external connections all starve an agent of the context it needs. When AI-powered security tools produce shallow analysis, miss obvious patterns, or generate recommendations that feel disconnected from your environment, the underlying AI model is rarely the limiting factor. Modern large language models excel at complex reasoning when provided with comprehensive context and enough chain of thought. The challenge lies in gathering, structuring, and delivering that context effectively.</p><h2>Alert and Signals History</h2><p>The fastest way to triage an alert is to check if we have a record of it in historical patterns. As much as we strive for novelty in detection development, what typically happens is a steady stream of alerts from "the usual suspects" (<em>looking at you, Jim from Marketing</em> &#8211; kidding). But when an alert is genuinely unique, answering "have we seen this one before?" becomes crucial for determining whether it's malicious and needs established response procedures. This <strong>environmental learning</strong>&#8212;understanding whether patterns represent first-time occurrences versus recurring themes specific to your infrastructure&#8212;helps AI agents distinguish between suspicious geographic access and routine activity from your distributed remote workforce, or between genuine anomalies and normal business evolution.</p><p><strong>Signals analysis</strong> takes this context check a layer deeper, where all alert events are proactively analyzed rather than just checking for identical alert matches. 
This can be particularly useful for examining indicator attributes across all alerts, such as checking if an IP address has appeared in other events, whether specific user attributes correlate with multiple alert types, or if attack techniques are used consistently across different timeframes. AI agents can easily take this history into context by checking for the same alert across 30-, 60-, and 90-day windows, alerts from the same actor across detections, or all alerts around the same time period. It's not a perfect science (e.g., how far do we go back? how much data do we need to add into context?). However, these checks will typically yield more indicator clues that aid additional data gathering and increase confidence.</p><p>&#129302; <em>"This suspicious login pattern has triggered 12 similar alerts over the past 6 months. Ten were false positives related to our mobile development team's remote testing environment, but two led to confirmed account compromises during our Q3 security incident."</em></p><p><strong>Outcome and quality tracking</strong> becomes particularly valuable when analysts mark alerts as false positives, confirm genuine issues, or escalate to incident response, which creates crucial learning signals for future triage decisions. 
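</p><p><em>As a rough sketch (with invented labels and field names), outcome tracking can be as simple as keeping labeled resolutions per detection and computing a false-positive rate as a prior for future triage:</em></p>

```python
# Sketch of outcome tracking: each closed alert contributes a labeled
# resolution, and the running false-positive rate per detection becomes
# a prior for future triage. Labels and names are illustrative.
from collections import defaultdict

outcomes = defaultdict(list)  # detection_id -> list of resolution labels

def record_outcome(detection_id, label):
    assert label in ("false_positive", "true_positive", "escalated")
    outcomes[detection_id].append(label)

def false_positive_rate(detection_id):
    history = outcomes[detection_id]
    if not history:
        return None  # no resolution data yet, so nothing to learn from
    return history.count("false_positive") / len(history)

# Twelve prior resolutions: ten benign, two confirmed compromises.
for _ in range(10):
    record_outcome("suspicious_login_pattern", "false_positive")
for _ in range(2):
    record_outcome("suspicious_login_pattern", "true_positive")

fp_rate = false_positive_rate("suspicious_login_pattern")
```

<p>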
AI agents can begin to recognize the common indicators that separate benign anomalies from genuine security concerns, but only when they have access to this historical resolution data.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Detection at Scale&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Detection at Scale</span></a></p><h2>Identity Intelligence</h2><p>While alert history provides the foundation for pattern recognition, identities provide organizational context about the human or non-human (service accounts) causing the alert. Understanding&nbsp;<em>who</em>&nbsp;is involved in an alert&#8212;their role, typical behaviors, access patterns, and organizational context&#8212;is often the difference between a real attack and an admin who ran an overly privileged command in production or someone from HR downloading many sensitive files from Google Drive.</p><p><strong>User profiling</strong> enables AI agents to contextualize behaviors to understand whether unusual activity is genuinely suspicious or consistent with someone's job function. A DevOps engineer making widespread production changes during scheduled maintenance represents regular operational activity, while a marketing coordinator performing the same actions should trigger immediate investigation. Without this organizational context, AI agents default to taking all behaviors at face value, either overgeneralizing or missing legitimate issues.</p><p>&#129302; <em>"This user is a Senior Site Reliability Engineer based in our Seattle office. 
While the 2:47 AM login time is outside normal business hours, it correlates with a P1 incident escalation in our ticketing system and matches their historical pattern during infrastructure emergencies."</em></p><p><strong>Team and organizational dynamics</strong> add another layer of contextual understanding that helps AI agents reason about lateral movement, privilege escalation, and insider threat scenarios. When multiple users from the same team exhibit similar behavioral changes simultaneously, this might indicate a targeted campaign or reflect organizational changes like new project assignments or initiatives.</p><p>AI agents need access to current organizational data that reflects these changes in real-time, rather than static user profiles that become outdated and lead to incorrect assessments.</p><h2>Asset Intelligence: The Technical Context</h2><p>Just as identity context helps AI agents understand <em>who</em> is involved in security events, asset intelligence provides crucial insight into <em>what</em> systems are accessed and their relative importance to business operations. This technical context transforms generic security alerts into risk-prioritized investigations aligned with business impact.</p><p><strong>Asset classification and business criticality</strong> enable AI agents to understand the difference between a suspicious login to a development sandbox and identical activity targeting user data in production databases. A brute-force attack against a decommissioned test server might represent low-priority cleanup work, while the same attack pattern against customer-facing payment systems demands immediate escalation. AI agents with proper asset context can automatically adjust investigation priority and escalation procedures based on the criticality of affected systems.</p><p><strong>Infrastructure and deployment context</strong> helps AI agents distinguish between legitimate cloud-native behaviors and potential security concerns. 
Auto-scaling events, serverless function executions, and container orchestration activities generate numerous security events that appear suspicious without proper infrastructure context. An AI agent that understands your Kubernetes deployment patterns can differentiate between regular pod lifecycle events and genuine lateral movement attempts while recognizing when cloud resource creation deviates from established automation patterns.</p><p>&#129302; <em>"This EC2 instance shows unusual outbound network connections to external IP addresses. However, the instance is tagged as 'ml-training-prod' and the connections align with our standard machine learning data pipeline that pulls from public datasets. The concerning factor is the timing&#8212;these connections typically occur during scheduled batch processing windows, but this activity is happening outside the defined maintenance schedule."</em></p><p><strong>Vulnerability and patch context</strong> provides AI agents with essential risk assessment capabilities that help prioritize security events based on exploitability. A network scan targeting systems with known unpatched vulnerabilities represents a more urgent threat than identical activity against fully updated infrastructure. AI agents with access to vulnerability management data can correlate attack patterns with specific CVEs, helping security teams understand whether observed activity represents opportunistic scanning or targeted exploitation of known weaknesses.</p><h2>Enrichment Intelligence: External Context That Matters</h2><p>The final layer of contextual intelligence comes from external data sources that provide AI agents with broader threat landscape awareness. 
This enrichment context helps transform isolated security events into comprehensive threat assessments incorporating global intelligence and external indicators.</p><p><strong>IP and domain reputation</strong> gives AI agents the external perspective to assess whether network connections represent legitimate business activity or potential threats. Geographic location data, hosting provider information, and reputation scores help AI agents understand the difference between routine CDN connections and suspicious command-and-control communications. However, effective enrichment goes beyond simple reputation scores to include contextual factors like recent registration dates, certificate anomalies, and hosting patterns that indicate potential threat actor infrastructure.</p><p><strong>File and hash intelligence</strong> enables AI agents to quickly assess the risk level of unknown binaries, documents, and other artifacts discovered during investigations. Rather than treating every unknown file as equally suspicious, AI agents with access to comprehensive threat intelligence can prioritize investigation efforts based on known malware families, campaign attribution, and behavioral analysis from sandbox environments. This context is particularly valuable for prioritizing incident response efforts when multiple potential threats require simultaneous attention.</p><p><strong>Campaign and technique correlation</strong> helps AI agents understand how individual security events fit into broader attack patterns and threat actor behaviors. When suspicious PowerShell execution correlates with techniques commonly used by specific threat groups, AI agents can provide analysts with relevant context about likely attack progression, typical dwell time, and effective containment strategies. 
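</p><p><em>A hedged sketch of how these enrichment factors might combine beyond a raw reputation score (thresholds and field names are illustrative assumptions, not any vendor's schema):</em></p>

```python
# Combine a feed-reported reputation score with contextual factors such
# as registration age and certificate anomalies. All thresholds here
# are illustrative.
from datetime import date

def enrichment_verdict(indicator, today=None):
    today = today or date.today()
    score = 0
    if indicator.get("reputation", 0) >= 70:   # feed-reported risk
        score += 2
    age_days = (today - indicator["registered"]).days
    if age_days < 30:                          # freshly registered infra
        score += 2
    if indicator.get("cert_anomaly"):          # self-signed / mismatched cert
        score += 1
    return "suspicious" if score >= 3 else "benign-looking"

fresh = {"reputation": 80, "registered": date.today(), "cert_anomaly": True}
aged = {"reputation": 10, "registered": date(2015, 1, 1)}
```

<p>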
This strategic context transforms reactive alert response into proactive threat hunting based on anticipated attacker behaviors.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.detectionatscale.com/subscribe?"><span>Subscribe now</span></a></p><h2>Making Context Work in Practice</h2><p>The four layers of contextual intelligence (historical alert patterns, identity awareness, asset classification, and external enrichment) transform AI agents from basic alert processors into more sophisticated analysts. However, the real value emerges from the interconnections between these data layers and how they inform each other during investigations.</p><p>Consider a practical example: an AI agent receives an alert about unusual database queries from a service account. Historical context shows this is the first time this particular query pattern has been observed. Identity intelligence reveals the service account is associated with a financial reporting application that typically runs predictable batch processes. Asset context indicates the target database contains customer payment information, a high-value target requiring immediate attention. Enrichment data shows the queries originated from an IP address recently flagged in threat intelligence feeds associated with a financially motivated threat actor group.</p><p>Each context layer provides valuable information, but the combination creates a comprehensive assessment that enables rapid and informed decision-making. 
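</p><p><em>Reduced to code, the worked example looks something like this (names and thresholds are illustrative, not a prescribed scoring model):</em></p>

```python
# Each of the four context layers contributes one judgment; the
# combination drives the escalation decision.

def assess(ctx):
    signals = [
        ctx["first_time_pattern"],            # history: never seen before
        not ctx["typical_for_identity"],      # identity: outside normal role
        ctx["asset_criticality"] == "high",   # asset: payment data at risk
        ctx["ip_in_threat_intel"],            # enrichment: flagged infra
    ]
    hits = sum(signals)
    if hits >= 3:
        return "escalate"
    if hits == 2:
        return "investigate"
    return "auto-close"

decision = assess({
    "first_time_pattern": True,
    "typical_for_identity": False,
    "asset_criticality": "high",
    "ip_in_threat_intel": True,
})
```

<p>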
The AI agent can immediately escalate this incident as a likely attack against high-value financial data, providing analysts with the context needed for effective response rather than generic "unusual database activity" alerts.</p><p>The key to successful AI-driven security operations is ensuring your AI agents have structured, current, and immediately accessible data when security events unfold. This requires thoughtful onboarding of the logs and integrations that can provide these angles of intelligence.</p><p><strong>Modern security operations teams that get this right find their AI agents becoming genuine force multipliers</strong>, handling routine triage with human-level contextual awareness while freeing analysts to focus on complex investigations and strategic security initiatives. Investing in proper context engineering pays dividends through faster incident response, more accurate threat prioritization, and security operations that scale with business growth rather than becoming bottlenecks.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.detectionatscale.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Detection at Scale! Subscribe for new posts! 
</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p><em>During the day, I&#8217;m the Founder @ Panther Labs, building intelligent AI agents and infrastructure to automate and accelerate triage and investigation times for security teams while improving accuracy and quality. If you want to learn more about how Panther incorporates these layers of intelligence into its AI SOC analyst agent, <a href="https://panther.com/">check out our homepage</a> or request a demo! Panther is trusted by leading security teams like Coinbase, Asana, Discord, and more.</em></p><p><strong>Related Reading</strong></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;f7931e85-54ec-48c4-ac9c-5a0edf3728d3&quot;,&quot;caption&quot;:&quot;Welcome to Detection at Scale&#8212;a weekly newsletter exploring practical SIEM strategies in the era of generative AI and large-scale security monitoring. Enjoy!&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Teaching Security AI Agents to Navigate Your Organization&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:85379436,&quot;name&quot;:&quot;Jack Naglieri&quot;,&quot;bio&quot;:&quot;Founder &amp; CTO @ Panther.com, solving high-scale security monitoring. 
Former Security @ Airbnb, Yahoo, and Verisign.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4424d74c-16df-4a59-95b3-c650104799e9_1239x1239.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-03-04T14:14:36.705Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95272c32-b1c9-4e97-8216-0d753f7bfdd7_1200x623.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.detectionatscale.com/p/teaching-ai-agents-your-organization&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:158336646,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:4,&quot;comment_count&quot;:0,&quot;publication_id&quot;:820616,&quot;publication_name&quot;:&quot;Detection at Scale&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Shfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2813e58b-c21b-47c3-b4b4-cd13dd8d115d_512x512.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item></channel></rss>