[tl;dr sec] #190 – Securely Build on AI, CISA Pen Test repo, Joining Google’s Red Team


I hope you’ve been doing well!

🏋️ Our Gym

If you’ve been wanting to improve your fitness but haven’t been sure where to start, I think I found the right gym for you:

Or their rival gym: (e)MACS Gainz 💪 

Many companies are quickly slapping together new product features leveraging AI platforms like OpenAI.

Unsurprisingly, there are a lot of potential security risks.

Fortunately, my bud Rami McCarthy has published the most detailed and thorough guide I’ve seen about how to build product features that use AI securely. It includes:

  • An overview of the risks and attack surface, with examples

  • Security controls and mitigation strategies

  • More references than you can shake a stick at

Relatedly, see below for some great tips from Hyperproof.

📣 ChatGPT: The Dos and Don’ts for Your Company’s Security 🤖

As generative AI tools like ChatGPT continue to evolve and impact various industries, compliance experts are left wondering about the potential security implications for their businesses. Join Hyperproof’s webinar as we discuss the dos and don’ts of working with ChatGPT and similar technologies.

  • 🦺 How to ensure the safe and secure implementation of AI technologies

  • 🛑 What security threats to anticipate

  • Enterprise risk management frameworks that can help manage this new risk

  • 📜 Security policies to examine before incorporating ChatGPT or similar AI tech

📜 In this newsletter…

  • AppSec: CVSS v4 calculator, the latest work from SEI, template pen test findings from CISA, Gartner recommends secure defaults

  • Web Security: Burp extension to find DNS vulns in web apps

  • Cloud Security: Tool to find potential subdomain takeovers, avoiding Cedar bugs, tracked connections persist across security group changes, tool to determine cloud creds’ permissions

  • Container Security: End of life scanner, verifying container image signatures within CRI, the challenges of checking image sigs, the security benefits of shrinking containers

  • Politics / Privacy: Lol @ Threads’ privacy policy, supporting Ukraine

  • Career: 1.5X your salary via negotiation, getting hired on Google’s red team, drop out, what does Senior mean at FAANG?

  • Machine Learning + Security: How to securely use genAI in multitenant cloud apps, how foundational models reshape cybersecurity tooling, surgically backdooring a model

  • Machine Learning: AWS Docs GPT, LlamaIndex, index then query all your data sources, Code Interpreter Plugin released

  • Misc: Some humor, tech execs have mental health challenges, Captain America the musical

AppSec

The Latest Work from the SEI: Rust, DevSecOps, AI, and Pen Testing
Vanderbilt’s Douglas Schmidt summarizes some recent publications from CMU’s Software Engineering Institute in the areas of supply chain attacks, penetration testing, model-based design for cyber-physical systems, Rust, UEFI, DevSecOps, network flow data, and artificial intelligence.

cisagov/pen-testing-findings
By CISA: A collection of Active Directory, phishing, mobile technology, system, service, web application, and wireless technology weaknesses that may be discovered during a penetration test.

The repo is intended to help assessors use standardized language and names for findings, and to save time on report generation.

Instead of having individual product teams implement security tools and practices at their own discretion, platform teams must provide “secure paved roads.” This ensures consistency and reduces the cognitive load of implementing security controls. The idea is to make the secure path the default path to production.

Manjunath Bhat


📣 AWS Security Foundations for Dummies

Data, applications, and services are all moving to the cloud. This means you have to take a new approach to protecting your business and customers against cyberattacks. Keep up with the speed of the cloud and unlock everything you need to know to protect your AWS environment.

Learn the most important principles for effective AWS security in this user-friendly book.

Web Security


Cloud Security

Cedar: Avoiding the cracks
Ian Mckay examines six potential issues in advanced evaluation scenarios when using Cedar, a new policy language by AWS for defining authorization policy: non-unique entity identifiers, invalid statements, the dangers of short-circuiting, and more. Ian highlights the potential pitfalls of each and recommends a solution.

Connection Tracking
Nick Frichette discusses an interesting side effect of security group connection tracking: tracked connections persist even after the rule that allowed them is removed. Thus if an attacker gains command execution on an EC2 instance and initiates a reverse shell before the rule change, the connection will persist despite the now-stricter rules.

AbstractClass/CloudPrivs
By Connor MacLeod: A brute-force tool that determines the privileges associated with a given set of cloud credentials. It leverages the Boto3 SDK to dynamically generate a comprehensive list of all available services and the regions each is available in. The tool also supports expanding the list of cloud providers and creating custom tests for AWS calls.

Container Security

xeol-io/xeol
By Xeol: An end-of-life (EOL) package scanner for container images, systems, and SBOMs. There’s also a GitHub Action to easily run it in CI.

Shrink to Secure: Kubernetes and Compact Containers
Giuseppe Santoro highlights the significance of reducing the size of your Kubernetes containers for improved security, explores three techniques (multi-stage Docker builds, Buildpacks, Melange and Apko), compares them, and suggests a selection of security tools to help you detect vulnerabilities in your applications.

Politics / Privacy

All the red flags in the Threads privacy policy
It seems like it’d be harder to list the things Threads doesn’t collect than the things it does 😅 An impressive array: PII about you, your employment, body, web activity, location, and more.

Fourth of July in Ukraine
Joe Sullivan spent his 4th of July week giving laptops to kids in Ukraine whose homes were destroyed and delivering medical equipment to soldiers who had returned from fighting. Amazing.

Career

How I Got Hired On Google’s Red Team
Nicely detailed post by Graham Helton on his job search process, researching roles, interviewing, negotiating, standing out, resumes, and why job postings are amorphous wishlists that you should treat as such.

Society is your enemy, it attacks you by making you need money, and if you are better than the average person at sacrificing comfort for long-term goals, you can work toward a position where you need relatively little money and have more free time.


Machine Learning + Security

Microsoft also has docs on architectural approaches for AI and ML in multitenant apps.

How foundation models reshape cybersecurity tooling
Innovation Endeavors’ Harpi Singh and Dhruv Iyer do a nice round-up of current applications of LLMs to cybersecurity (search, code writing, vulnerability explanation, incident response and threat intelligence) across a number of vendors, and discuss promising opportunities: penetration testing, security reviews, and security-as-code generation.

Beyond fake news, you could basically compromise anything the model is being used for: security decisions, medical diagnoses, etc.

As it’s (currently) not possible to “audit the brain” of LLMs for how they’d behave in every possible situation (the way you can code review normal software), it seems like model provenance (like SLSA but for models) is probably our main recourse right now. And in fact, the authors are currently working on an open-source tool to provide cryptographic proof of model provenance.
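To make the provenance idea concrete, here’s a toy sketch of the underlying primitive: publish an attestation over the model artifact’s content hash at release time, and verify that the artifact you downloaded still matches before loading it. This is an illustrative simplification using an HMAC with a shared key; real schemes (e.g. Sigstore/SLSA-style) use public-key signatures and transparency logs.

```python
# Toy model-provenance check: hash the artifact, attest the hash, verify later.
import hashlib
import hmac

def model_digest(model_bytes: bytes) -> str:
    """Content hash of the model artifact (weights file, tarball, ...)."""
    return hashlib.sha256(model_bytes).hexdigest()

def attest(model_bytes: bytes, signing_key: bytes) -> str:
    """Producer side: MAC over the digest, published alongside the model."""
    return hmac.new(signing_key, model_digest(model_bytes).encode(), "sha256").hexdigest()

def verify(model_bytes: bytes, attestation: str, signing_key: bytes) -> bool:
    """Consumer side: recompute and compare in constant time."""
    return hmac.compare_digest(attest(model_bytes, signing_key), attestation)

key = b"demo-key"  # hypothetical; in practice an asymmetric keypair
weights = b"\x00\x01fake-model-weights"
tag = attest(weights, key)

assert verify(weights, tag, key)                    # untampered model passes
assert not verify(weights + b"backdoor", tag, key)  # modified model fails
```

Note this only proves the artifact is the one the producer shipped; it says nothing about whether the training process itself was backdoored, which is why provenance schemes aim to attest the build/training pipeline too.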


Machine Learning

LlamaIndex
A simple, flexible data framework for connecting custom data sources to large language models.

danswer-ai/danswer
Ask natural language questions against internal documents and get back reliable answers backed by quotes and references from the source material. Connect common tools such as Slack, GitHub, Confluence, etc.

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I’d really appreciate if you’d forward it to them 🙏


