[tl;dr sec] #189 – CISA on Defending CI/CD, Backdooring NPM via S3, AI + Reverse Engineering


I hope you’ve been doing well!

🎇 4th of July

I spent my 4th of July, ironically, with a group of Australians, who taught me what a “bubbler” is.

We did fulfill the American past time of smoking meats though, while listening to the classic Zucc Smokin Meats.

And in typical Bay Area fashion, while on one side of the fire pit people were talking about how people met their partners, on the other side people were discussing low level database architecture trade-offs and if LLMs truly “understand” things.

You know, just normal friends hanging out stuff 😂 

Regardless of where you’re based, I hope you had the chance to relax!

🆕 Original Content: AI <> Cybersecurity

I’ve started collecting a number of AI applications to cybersecurity resources in this post.

Currently the post has ~8 OSS reverse engineering tools that leverage LLMs, and some meta analysis I wrote about trends we see and (to my knowledge) currently unexplored applications of AI/ML to reversing that seem promising.

I’m going to be adding sections on a bunch of other topics like AppSec, cloud security, pen testing, etc.

Let me know what you’d like me to add to the post!

🐐 Sacrifice to the Inbox Gods

A few longtime readers have reached out saying that their email provider has filtered tl;dr sec or it’s gone to spam, when it didn’t used to.

If you work at Google or Microsoft on a related team, or know someone who does, I’d like to shamelessly ask for your help in getting tl;dr sec on the naughty “nice” list, or generally how to help people receive it.

Feel free to reply to this directly 🙏 

📣 How to automate the detection and prioritization of complex behavioral threats with AWS CloudTrail and Kubernetes audit logs

Monitoring AWS CloudTrail and Kubernetes audit logs are a critical part of maintaining security in your AWS cloud because it provides visibility into your account activity across your infrastructure. Because this data contains all actions performed by all authenticated users, identification of the attackers becomes extremely hard.

Learn from Jeff Vogt, Field CTO at Lacework (and former Senior DevOps Engineer), on how to automate the detection and prioritization of threats from your CloudTrail data so that you can easily (and quickly!) identify attacks such as compromised credentials, potential AWS defense evasion, cloud ransomware, and cloud-based cryptomining.

📜 In this newsletter…

  • AppSec: Why authorization is hard and an authz maturity model

  • Web Security: Burp’s new custom scripting engine, param analyzer Burp extension, overview of new Sec-Fetch HTTP headers

  • Cloud Security: How to tighten IAM policies

  • Container Security: Kubernetes Bill of Materials, executing arbitrary code in RO filesystems, Analyzing Volatile Memory on a Google Kubernetes Engine Node

  • Supply Chain: Backdooring NPM module via compromised S3 bucket, confidential computing project from Google that has adopted SLSA, 10K’s of GitHub repos potentially vulnerable to RepoJacking, CISA/NSA’s guide for hardening CI/CD

  • Blue Team: Automated audit log analysis tool for Google Workspace

  • Red Team: Tool for mTLS based on pre-shared connection key, REST-driven utility used to smuggle files in/out of networks defended by IDS

  • Machine Learning + Security: Cross Plugin Request Forgery example, hacking Auto-GPT and escaping its docker container

  • Machine Learning: Control your cloud via ChatGPT plugin, towards a generalist agent for the web

  • Misc: The Agony and Ecstasy of the World’s Biggest Tom Cruise Impersonator, it’s better to be born rich than smart

AppSec

Graham also describes three classes of solutions to solving the complexity of authorization rules: language-specific libraries, Zanzibar clones, and domain-specific languages.

Web Security

PortSwigger/paramalyzer
A Burp extension for parameter analysis of large-scale web application penetration tests, assisting in: identifying sensitive data, identifying hash algorithms, decoding parameters, and determining which parameters are reflected in the response.

Container Security

ksoclabs/kbom
KSOC has published their Kubernetes Bill of Materials (KBOM) standard, which offers an initial specification in JSON and has been designed for extensibility across various cloud service providers (CSPs) as well as DIY Kubernetes setups.

Executing Arbitrary Code & Executables in Read-Only FileSystems
WithSecure’s Golan Myers discusses various methods for achieving remote code execution in read-only file systems, specifically within Kubernetes environments where writable folders are designated as noexec. For example: using Bash’s built in /dev/tcp utility and hijacking an existing process, reading and writing from temporary file systems, etc. Golan concludes with mitigation options: SELinux or detecting with Falco.

Supply Chain

Hijacking S3 Buckets: New Attack Technique
Checkmarx’s Guy Nachshon delves into a novel attack observed against the NPM package bignum, where the attacker hijacked the S3 bucket used to serve binaries and replaced them with malicious versions that stole users’ credentials.

Since S3 buckets must have globally unique names, when a bucket is deleted, that name becomes available again, allowing attackers to put malicious contents in that bucket.

[tl;dr sec] #189 - CISA on Defending CI/CD, Backdooring NPM via S3, AI + Reverse Engineering

Blue Team

invictus-ir/ALFA
By Greg Charitonos and BertJanCyber: Automated Audit Log Forensic Analysis (ALFA) for Google Workspace is a tool to acquire all Google Workspace audit logs and perform automated forensic analysis on the audit logs using statistics and the MITRE ATT&CK Cloud Framework.

Machine Learning + Security

ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data
Johann Rehberger discusses the first exploitable Cross Plugin Request Forgery that affects ChatGPT plugins. User visits site with malicious prompt injection text —> ChatGPT follows malicious instructions (e.g summarize user’s email using Zapier plugin and leak it to the attacker). tl;dr: ChatGPT could be tricked to access any sensitive info or perform sensitive functionality, like CSRF. Good ol’ Confused Deputy.

Lukas explains the attack chain used to overcome restrictions imposed by Auto-GPT’s architecture and deceive users into unintentionally approving potentially malicious commands.

[tl;dr sec] #189 - CISA on Defending CI/CD, Backdooring NPM via S3, AI + Reverse Engineering

Misc

It’s also reflective of a parasocial, celeb-obsessed culture in which people are hungry for personalized attention from their favorite stars.

As A-listers like Cruise become increasingly sequestered and untouchable, doppelgangers such as Ferrante step in to provide the one-on-one connections with fans that neither the real Cruise, nor creepily realistic deepfake simulations, will.

Churn is increasingly a rare-earth element in the U.S. Per a Georgetown analysis, “It’s better to be born rich than smart … The most talented disadvantaged children have a lower chance of academic and early career success than the least talented affluent children.”

The people dealt the best cards can’t see their hands. The myth of the “self-made man” is rife among U.S. citizens who’ve never faced a draft or registered a devaluation in their currency.

Tech has raised a cohort of people who simultaneously credit their character for their success and blame a rigged market for their failures. The real cage match in tech is entitlement vs. empathy. The former is winning, and that results in a staggering accumulation of power that’s amoral, focused only on the aggregation of more power regardless of what happens to people with less.

Prof Galloway

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I’d really appreciate if you’d forward it to them 🙏



Source link