AI has been a major focus of the Gartner Security and Risk Management Summit in National Harbor, Maryland, this week, and the consensus has been that while large language models (LLMs) have so far overpromised and underdelivered, there are still AI threats and defensive use cases that cybersecurity pros need to be aware of.
Jeremy D’Hoinne, Gartner Research VP for AI & Cybersecurity, told conference attendees that hackers’ uses of AI so far include improved phishing and social engineering – with deepfakes a particular concern.
But D’Hoinne and Director Analyst Kevin Schmidt agreed in a joint panel that no novel attack techniques have yet arisen from AI – just improvements on existing techniques like business email compromise (BEC) and voice scams.
AI security tools likewise remain underdeveloped, with AI assistants perhaps the most promising cybersecurity application so far, potentially helping with patching, mitigations, alerts and interactive threat intelligence. D’Hoinne cautioned that the tools should be used as an adjunct to security staffers, not a replacement, so analysts don’t lose their ability to think critically.
AI Prompt Engineering for Cybersecurity: Precision Matters
Using AI assistants and LLMs for cybersecurity use cases was the focus of a separate presentation by Schmidt, who cautioned that AI prompt engineering needs to be very specific for security uses to overcome the limitations of LLMs – and even then, the answer may only get you 70% to 80% of the way toward your goal. Outputs need to be validated, and junior staff will require oversight from senior staff, who can more quickly determine the significance of the output. Schmidt also cautioned that chatbots like ChatGPT should be used only with noncritical data.
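That specificity can be enforced before a prompt ever reaches a model. Below is a minimal sketch in Python; the required fields (task, scope, time window, output format, audience) are our own illustration of the advice, not a template Schmidt presented:

```python
# Build security prompts from required fields so vague requests fail fast.
# The field names are our own illustration of the "be specific" advice.
from dataclasses import dataclass

@dataclass
class SecurityPrompt:
    task: str           # what the model should do
    scope: str          # which data or system it applies to
    time_window: str    # e.g. "past 24 hours"
    output_format: str  # e.g. "report", "tabular"
    audience: str       # who consumes the result

    def render(self) -> str:
        # The equivalent of catching a too-vague prompt before the LLM sees it.
        for name, value in vars(self).items():
            if not value.strip():
                raise ValueError(f"prompt field '{name}' must not be empty")
        return (
            f"{self.task} from the {self.scope} covering the {self.time_window}. "
            f"Provide the response in {self.output_format} format, "
            f"suitable for {self.audience}."
        )

print(SecurityPrompt(
    task="Analyze the logs and identify any unusual patterns or anomalies",
    scope="perimeter firewall",
    time_window="past 24 hours",
    output_format="report",
    audience="a security team briefing",
).render())
```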
Schmidt gave examples of good and bad AI security prompts for helping security operations teams.
One ineffective prompt began, “Create a query in my …”
He gave an example of a better way to craft a SIEM query: “Create a detection rule in …”
That prompt should produce a usable first-draft detection rule.
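Wiring a prompt like that into a model takes only a few lines. A sketch, assuming the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY environment variable; the prompt wording and model name are illustrative stand-ins rather than Schmidt's exact example:

```python
# Send a specific, well-scoped detection-rule prompt to an LLM and print the
# draft for human review. Prompt text and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

detection_prompt = (
    "Create a detection rule in my SIEM that flags five or more failed "
    "logins from a single source IP within ten minutes. Output only the "
    "rule, with a one-line comment explaining each condition."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works here
    messages=[{"role": "user", "content": detection_prompt}],
)

# Treat the result as a 70-80% draft: a senior analyst validates it before
# deployment, and no sensitive data goes into the prompt.
print(response.choices[0].message.content)
```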
Analyzing firewall logs was another example. Schmidt gave the following as an example of an ineffective prompt: “Analyze the firewall logs for any unusual patterns or anomalies.”
A better prompt would be: “Analyze the firewall logs from the past 24 hours and identify any unusual patterns or anomalies. Summarize your findings in a report format suitable for a security team briefing.”
That produced an anomaly report formatted for a security team briefing.
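Since outputs need validation, one approach is to run an independent aggregation over the same 24-hour window and compare it with the model's findings. A sketch, assuming a hypothetical CSV log with timestamp, src_ip and action columns:

```python
# Cross-check an LLM's "unusual patterns" claims by independently counting
# denied connections per source over the same 24-hour window. The CSV schema
# (timestamp, src_ip, action) is a made-up stand-in for real firewall logs.
import csv
from collections import Counter
from datetime import datetime, timedelta

cutoff = datetime.now() - timedelta(hours=24)
denied_by_source = Counter()

with open("firewall_logs.csv", newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["timestamp"])
        if ts >= cutoff and row["action"] == "DENY":
            denied_by_source[row["src_ip"]] += 1

# The top talkers should line up with the anomalies the model reported.
for src_ip, count in denied_by_source.most_common(5):
    print(f"{src_ip}\t{count} denied connections in the last 24h")
```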
Another example involved XDR tools. Instead of a weak prompt like “Summarize the top two most critical security alerts in a vendor’s XDR,” Schmidt recommended something along these lines: “Summarize the top two most critical security alerts in a vendor’s XDR, including the alert ID, description, severity and affected entities. This will be used for the monthly security review report. Provide the response in tabular form.”
That prompt produced the requested two-row table of critical alerts.
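The tabular output the prompt requests is also easy to verify or regenerate locally. A sketch with hypothetical alert records; the columns mirror what the prompt asks for, not any particular vendor's schema:

```python
# Render the two most severe alerts as the kind of table the prompt requests.
# Alert records and field names are hypothetical, not from a real XDR.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

alerts = [
    {"id": "XDR-1042", "description": "Credential dumping attempt",
     "severity": "critical", "affected": "FIN-03"},
    {"id": "XDR-1038", "description": "Beaconing to known C2 domain",
     "severity": "high", "affected": "WEB-01, WEB-02"},
    {"id": "XDR-1021", "description": "Unsigned driver loaded",
     "severity": "medium", "affected": "DEV-17"},
]

top_two = sorted(alerts, key=lambda a: SEVERITY_RANK[a["severity"]])[:2]

print("| Alert ID | Description | Severity | Affected Entities |")
print("|----------|-------------|----------|-------------------|")
for a in top_two:
    print(f"| {a['id']} | {a['description']} | {a['severity']} | {a['affected']} |")
```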
Other Examples of AI Security Prompts
Schmidt gave two more examples of good AI prompts, one on incident investigation and another on web application vulnerabilities.
For security incident investigations, an effective prompt might be “Provide a detailed explanation of incident DB2024-001. Include the timeline of events, methods used by the attacker and the impact on the organization. This information is needed for an internal investigation report. Produce the output in tabular form.”
That prompt should lead to a structured timeline ready for the investigation report.
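For report-bound output like this, it can help to ask the model for machine-readable JSON and validate the fields before they reach the document. A sketch, where raw stands in for a model response and the time/event/impact keys are our own convention:

```python
# Parse and sanity-check an LLM-generated incident timeline before it goes
# into a report. `raw` stands in for a model response to a prompt that asked
# for JSON with 'time', 'event', and 'impact' keys -- our own convention.
import json

raw = """[
  {"time": "2024-05-01T02:14Z", "event": "Phishing attachment opened", "impact": "Initial access"},
  {"time": "2024-05-01T02:40Z", "event": "Credentials reused on database host", "impact": "Lateral movement"}
]"""

timeline = json.loads(raw)
for entry in timeline:
    missing = {"time", "event", "impact"} - entry.keys()
    if missing:
        raise ValueError(f"timeline entry missing fields: {missing}")

print(f"{len(timeline)} validated timeline entries ready for the report")
```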
For web application vulnerabilities, Schmidt recommended the following approach: “Identify and list the top five vulnerabilities in our web application that could be exploited by attackers. Provide a brief description of each vulnerability and suggest mitigation steps. This will be used to prioritize our security patching efforts. Produce this in tabular format.”
That should produce a prioritized vulnerability table for the patching team.
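Once the model's findings are structured, the prioritization step itself is scriptable. In the sketch below, the entries are generic OWASP-style categories with textbook mitigations, standing in for real findings, and the scores are illustrative:

```python
# Sort model-reported vulnerabilities into patch-priority order. Categories
# and mitigations are textbook OWASP examples; the scores are illustrative.
findings = [
    ("SQL injection", "Use parameterized queries", 9.8),
    ("Broken access control", "Enforce server-side authorization", 8.1),
    ("Cross-site scripting (XSS)", "Encode output; set a CSP", 7.5),
    ("Vulnerable dependencies", "Patch and pin third-party components", 7.0),
    ("Security misconfiguration", "Harden defaults; disable unused features", 6.5),
]

print(f"{'Vulnerability':<30} {'Mitigation':<42} {'Score':>5}")
for name, mitigation, score in sorted(findings, key=lambda f: f[2], reverse=True):
    print(f"{name:<30} {mitigation:<42} {score:>5}")
```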
Tools for AI Security Assistants
Schmidt listed some of the GenAI tools that security teams might use, ranging from chatbots to SecOps AI assistants – such as CrowdStrike Charlotte AI, Microsoft Copilot for Security, SentinelOne Purple AI and Splunk AI – and startups such as AirMDR, Crogl, Dropzone and Radiant Security (see Schmidt’s slide below).