Secure, Govern, and Operate AI at Engineering Scale
Modern AI infrastructure outgrows traditional access and security models. Whether you’re running GPU training clusters or deploying digital twins that autonomously interact with infrastructure, you can’t rely on static credentials.
Teleport treats every actor — agents, LLM tools, bots, MCP tools, and digital twins — as a first-class identity. This turns agentic AI from “uncontrolled automation” into trustworthy, governed automation, delivering the identity, access, and security foundation your AI environment demands.
When agents can access sensitive data and perform state-changing actions, knowing which agent did what, and being able to enforce meaningful access control, is essential. I believe fundamentals like identity and access management are going to be even more important in an AI-heavy world.
AppSec
MegaManSec/Gixy-Next
By Joshua Rogers: An actively maintained fork of Yandex’s Gixy that statically analyzes nginx.conf files to detect security misconfigurations, hardening gaps, and performance issues. It detects issues like HTTP splitting, SSRF, host spoofing, path traversal, alias traversal, and more.
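For flavor, one classic issue Gixy flags is alias traversal, where a prefix `location` without a trailing slash lets `../` escape the aliased directory. An illustrative config (paths are made up):

```nginx
# Vulnerable: the location has no trailing slash but the alias does, so a
# request for /static../secret.txt maps to /var/www/static/../secret.txt,
# i.e. /var/www/secret.txt — outside the intended directory.
location /static {
    alias /var/www/static/;
}

# Safer: trailing slash on both, so dot-segments are normalized away
# before the location can match.
location /static/ {
    alias /var/www/static/;
}
```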
Clang Hardening Cheat Sheet – Ten Years Later
Daniel Janson and Béatrice Creusillet provide an update to Quarkslab’s 2016 Clang Hardening Cheat Sheet, covering additional hardening options recommended by the OpenSSF, as well as more specialized options that mitigate newer classes of exploits. These include general protections when using the standard C/C++ libraries or loading libraries, mitigations against stack-based memory corruption, defenses against code reuse attacks like Return-Oriented Programming (ROP) and Jump-Oriented Programming (JOP), and defenses against speculative execution attacks.
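For a taste of the kinds of flags involved, here’s an illustrative clang invocation — not the cheat sheet’s exact recommended set, and flag availability varies by target, libc, and clang version:

```sh
# Illustrative hardening flags (check support for your toolchain/target;
# _FORTIFY_SOURCE requires optimization to be enabled):
clang -O2 app.c -o app \
  -D_FORTIFY_SOURCE=3 \
  -fstack-protector-strong \
  -fstack-clash-protection \
  -fPIE -pie \
  -Wl,-z,relro -Wl,-z,now \
  -fcf-protection=full \
  -ftrivial-auto-var-init=zero
```

Roughly: fortified libc calls and auto-initialized locals harden library use, stack protector/clash protection target stack-based corruption, full RELRO plus PIE raises the bar for exploitation generally, and `-fcf-protection` (x86 CET shadow stack + indirect branch tracking) is aimed at ROP/JOP-style code reuse.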
How We Scaled Code Repository Management at DNSimple
Simone Carletti describes how DNSimple went from manually managing GitHub repositories to a fully automated Terraform-based system that automatically runs terraform plan on pull requests and terraform apply on merge. They structure GitHub repositories as Terraform map variables with topics for categorization (language, team ownership, policies), manage templates and CODEOWNERS files as github_repository_file resources, and more. The system now manages hundreds of repos with centralized permissions, full Git history for rollbacks, and enables bulk changes across all repos by modifying a single template file.
I like this a lot – git history of all repo changes, roll out new security policies/secure defaults to all repos in one place, strong asset inventory fundamentals (which repos does the platform team maintain? What Go projects do we have? Which repos have vulnerability alerts enabled?).
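A minimal sketch of the repos-as-a-map pattern (resource layout and attribute values are illustrative, not DNSimple’s actual config):

```hcl
# Repos as a map variable; topics drive categorization and policy.
variable "repositories" {
  type = map(object({
    description = string
    topics      = list(string)
  }))
}

resource "github_repository" "managed" {
  for_each    = var.repositories
  name        = each.key
  description = each.value.description
  topics      = each.value.topics
  visibility  = "private"
}

# Shared CODEOWNERS pushed to every repo from a single template file,
# so one change rolls out everywhere on the next apply.
resource "github_repository_file" "codeowners" {
  for_each   = var.repositories
  repository = github_repository.managed[each.key].name
  file       = ".github/CODEOWNERS"
  content    = file("${path.module}/templates/CODEOWNERS")
}
```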
Arcjet adds security controls directly into your application code. Protect APIs and endpoints from abuse with rate limiting, bot detection, and request validation, without proxies, IP allowlists, or complex infrastructure.
Making secure defaults easy and developer friendly is in my book! Arcjet also has open source SDKs so you can give it a try for free.
Cloud Security
DenizParlak/heimdall
By Deniz Parlak: An AWS security scanner that discovers privilege escalation paths across 10+ AWS services, featuring 50+ IAM privilege escalation patterns and 85+ attack chain patterns with MITRE ATT&CK mapping. Heimdall detects both direct and multi-hop attack paths (EC2, RDS, S3, Lambda, KMS, Secrets Manager, STS, SNS, SQS, DynamoDB).
Note: not sure how much of this is just vibed, but including for AWS privilege escalation tool completeness.
The Cloud-Native Detection Engineering Handbook
Ved K walks through a thorough detection engineering lifecycle for cloud environments, covering nine phases from threat research to continuous improvement. The post describes how to prioritize TTPs using exploitability/risk/ROI scoring, validate telemetry coverage, and implement a three-tier data architecture (Bronze/Silver/Gold) using ECS or OCSF normalization to write cloud-agnostic detections once instead of maintaining provider-specific rules for GCP/AWS/Azure, and more.
Tons of detail, great resource
Blue Team
(Anti-)Anti-Rootkit Techniques – Part I: UnKovering mapped rootkits
Sven Rath (@eversinc33) discusses manual driver mapping (writing your malware into kernel space), and three rootkit detection techniques, including scanning device objects in the Windows Object Manager for DriverEntry pointers to unbacked memory regions, queuing an APC to all system threads to identify stack frames pointing to unmapped memory, and leveraging Non-Maskable Interrupts (NMIs) to hopefully catch a rootkit thread running on a CPU by walking the stack to find unbacked memory pointers.
To test these detection strategies and related evasions, Sven developed unKover, a Windows anti-rootkit/anti-cheat driver that can detect drivers mapped to kernel memory.
GachiLoader: Defeating Node.js Malware with API Tracing
Check Point’s Sven Rath and Jaromir Horejsi describe GachiLoader, an obfuscated Node.js malware distributed via the YouTube Ghost Network, a network of compromised YouTube accounts promoting fake game cheats and cracked software. They released Nodejs-Tracer, a tracer for Node.js scripts to dynamically analyze Node.js malware, defeat common anti-analysis tricks, and significantly reduce manual analysis effort.
Some of the GachiLoader variants drop a second-stage loader implementing “Vectored Overloading,” a novel PE injection technique that tricks the Windows loader into loading a malicious PE from memory instead of a legitimate DLL. Proof-of-concept here.
Red Team
Making CloudFlare Workers Work for Red Teams
Andy Gill describes using CloudFlare Workers and Pages to create a Conditional Access Payload Delivery (CAPD) system that serves files only when requests include a valid pre-shared authorization header, returning generic 503 errors otherwise. This way red teams can serve payloads to targets without detection.
Andy shares a number of potential improvements (multiple campaigns, rotating payloads) and detection opportunities (monitoring for unusual authorization headers to *.pages.dev domains in proxy logs, non-browser processes connecting to CloudFlare infrastructure, binary content types from static hosting platforms, etc.).
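The core gating rule is simple enough to sketch. A minimal version of the logic (header name and token are illustrative, not Andy’s actual values), written as a pure function so it’s easy to reason about outside the Workers runtime:

```typescript
// Sketch of Conditional Access Payload Delivery (CAPD) gating logic.
// The header name and pre-shared token below are hypothetical.
const AUTH_HEADER = "x-payload-key";
const PRESHARED_TOKEN = "long-random-value-shared-with-operators";

interface SimpleResponse {
  status: number;
  body: string;
}

// A valid pre-shared header gets the payload; everything else (scanners,
// sandboxes, curious analysts) sees a generic 503, indistinguishable
// from an unhealthy origin.
function handle(getHeader: (name: string) => string | null): SimpleResponse {
  if (getHeader(AUTH_HEADER) !== PRESHARED_TOKEN) {
    return { status: 503, body: "Service Unavailable" };
  }
  return { status: 200, body: "payload-bytes-here" };
}
```

In an actual Worker this would live in the `fetch` handler, reading `request.headers.get(...)` and returning a `Response` with the payload bytes.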
TokenFlare: Serverless AiTM Phishing in Under 60 Seconds
JUMPSEC’s Sunny Chau announces TokenFlare, an open-source serverless Adversary-in-the-Middle (AiTM) phishing framework for Entra ID/M365 that deploys working infrastructure in under a minute using CloudFlare Workers. “Working AiTM infrastructure, with SSL, bot protection, and credential capture to your webhook of choice.” It supports Conditional Access Policy bypasses via User-Agent spoofing, and includes built-in bot blocking based on real-world campaign data.
AI + Security
- zoicware/RemoveWindowsAI – Force remove Copilot, Recall, and more in Windows 11.
- Part 1 of Pramod Gosavi’s reflections on AI in the SOC, covering companies and market dynamics from SIEM 1.0 to what may happen in SIEM 3.0.
- Rethinking SOC Capacity: How AI Changes the Human Cost Curve – Traditional SOC scaling is broken: increasing alert volume requires increasing headcount, creating a “Human Cost Curve” that eventually breaks under the weight of modern-scale threats. In this post, Prophet Security breaks down the math: how analysts only have ~5.6 investigation hours/day, and how AI decouples capacity from headcount.
trailofbits/skills
Trail of Bits’ Claude Code skills for security research, vulnerability detection, and audit workflows. Some neat skills: verifying that fix commits address findings without introducing bugs, building deep architectural context, security-focused differential review of code changes, static analysis with CodeQL or Semgrep, a Semgrep rule creator, variant analysis (finding similar vulnerabilities across codebases), and more.
A Personal AI Maturity Model (PAIMM)
Cool post by Daniel Miessler sharing a 9-level maturity model for AI evolution from Chatbots (Tiers 1-3) to Agents (Tiers 4-6) to Assistants (Tiers 7-9), tracking progression across 6 dimensions: context, personality, tool use, awareness, proactivity, and multitask scale.
Some of Daniel’s predictions: Assistants will shift from being reactive to proactively helping you, continuously monitoring your state, advocating for you, and helping you achieve your goals. Voice will overtake typing as the primary interface, and they’ll have access to cameras/audio for full visibility into your state, and more. I like the vignette section on what this might look like across protecting you and your loved ones, detecting and filtering influence campaigns, work, monitoring your mental state and energy, etc.
For a deep dive into Daniel’s personal AI setup and how to build your own, see this webinar we did together and his slides here.
Streamlining Security Investigations with Agents
Dominic Marks describes how Slack’s Security Engineering team has developed an AI agent system to optimize security alert investigations. Their initial prototype was a simple prompt + coding agent CLI + MCPs to various systems, but they now have a structured multi-agent approach. The system employs three persona categories (Director, Expert, and Critic agents) working in a coordinated investigation loop across multiple phases (Discovery, Trace, and Conclude), with specialized Expert agents (Access, Cloud, Code, and Threat) performing specific tasks. Each agent/task pair is modeled with a carefully defined structured output, and the application orchestrates the model invocations, propagating just the right context at each stage.
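The agent/task structured-output idea might look roughly like this — type names and fields are my guesses at the shape, not Slack’s actual schema:

```typescript
// Hypothetical structured output for one agent/task pair in the loop.
type Severity = "low" | "medium" | "high";

interface Finding {
  summary: string;
  severity: Severity;
  evidence: string[]; // e.g. log lines or API records backing the claim
}

interface ExpertOutput {
  agent: "access" | "cloud" | "code" | "threat";
  phase: "discovery" | "trace" | "conclude";
  findings: Finding[];
  // Only this distilled context (not the full transcript) is propagated
  // to the next stage, keeping each model invocation's context tight.
  contextForNextPhase: string;
}

// Example of what one Expert's output might contain.
const sample: ExpertOutput = {
  agent: "cloud",
  phase: "trace",
  findings: [
    {
      summary: "Long-lived access key used from unfamiliar ASN",
      severity: "high",
      evidence: ["CloudTrail event id abc123"],
    },
  ],
  contextForNextPhase: "Key AKIA... appears exposed; check rotation status.",
};
```

Constraining each invocation to a schema like this is what lets the orchestrating application, rather than the models, decide what context flows to the next phase.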
Their architecture includes a Hub for API and storage, Workers for processing investigations, and a Dashboard for real-time monitoring. The post walks through an interesting example of the agent identifying a credential exposure that wasn’t the focus of the current investigation.
Great AI + security engineering post, thoughtful and useful architectural details, highly recommend
![[tl;dr sec] #311 - Slack's Security Agents, Cloud-Native Detection Engineering, Trail of Bits' Claude Skills 2 [tl;dr sec] #311 - Slack's Security Agents, Cloud-Native Detection Engineering, Trail of Bits' Claude Skills](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/422c25b6-c605-47ad-b2ed-6af0b60544c7/image.png?t=1768443943)
Wrapping Up
Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.
If you find this newsletter useful and know other people who would too, I’d really appreciate it if you’d forward it to them.
P.S. Feel free to connect with me on LinkedIn