The emergence of Large Language Models (LLMs) is transforming NLP, enhancing performance across NLG, NLU, and information retrieval tasks.
They excel at text-centric tasks such as generation, summarization, translation, and reasoning.
A group of hardware security researchers (Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo Zhou, Mark Tehranipoor, and Farimah Farahmandi) from the Department of Electrical and Computer Engineering at the University of Florida, Gainesville, FL, USA recently showed that LLMs such as ChatGPT can help close security gaps in SoC designs.
LLM-like Models
The growing prevalence of system-on-chip (SoC) technology in various devices raises security concerns due to complex interactions among integrated IP cores, making SoCs vulnerable to threats like information leakage and access control violations.
The presence of third-party IPs, time-to-market pressures, and scalability issues challenge security verification for complex SoC designs. Current solutions struggle to keep up with evolving hardware threats and diverse designs.
Exploring LLMs in SoC security represents a promising opportunity to tackle complexity, diversity, and innovation.
LLMs have the potential to redefine security across domains through tailored learning, prompt engineering, and fidelity checks, with the researchers focusing on four key security tasks (a prompt-driven sketch of one such task follows the list):
- Vulnerability Insertion
- Security Assessment
- Security Verification
- Countermeasure Development
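As a minimal sketch of how one of these tasks, security verification, might be driven through an LLM, the snippet below sends a small Verilog module to the OpenAI chat API and asks for weaknesses mapped to CWEs. The model name, prompt wording, and RTL snippet are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: asking an LLM to review a small RTL snippet for security issues.
# The prompt, model name, and design are illustrative assumptions, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rtl_snippet = """
module lock_ctrl(input clk, input rst, input [7:0] key_in, output reg unlocked);
  always @(posedge clk) begin
    if (rst) unlocked <= 1'b0;
    else if (key_in == 8'hA5) unlocked <= 1'b1;   // hard-coded unlock key
  end
endmodule
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a hardware security reviewer. Report weaknesses "
                    "in the given Verilog and map them to CWE identifiers."},
        {"role": "user", "content": f"Review this design:\n{rtl_snippet}"},
    ],
    temperature=0,
)

print(response.choices[0].message.content)
```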
Complex modern SoCs are prone to hidden vulnerabilities, and addressing bugs at the RTL design stage is crucial for cost-effective security verification, the published paper notes.
The Transformer model, introducing attention mechanisms and eliminating the need for recurrent or convolutional layers, paved the way for the evolution of language models.
GPT-1, GPT-2, and GPT-3 pushed the boundaries of language modeling, while GPT-3.5 and GPT-4 further refined these capabilities, offering a range of models with varying token limits and optimizations.
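For reference, the attention mechanism at the heart of the Transformer can be summarized in a few lines. The sketch below uses NumPy with toy dimensions and random inputs purely for illustration.

```python
# Toy scaled dot-product attention, the core operation of the Transformer.
# Dimensions and random inputs are illustrative only.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over key positions
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, model dimension 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```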
From OpenAI’s ChatGPT and Google’s Bard to Anthropic’s Claude 2, along with open-source chat models such as Baize, Vicuna, and MosaicML’s MPT-Chat, recent advancements in LLMs highlight the pursuit of improved human-like text generation and extended capabilities.
Research questions
The paper poses the following research questions:
- Can GPT insert vulnerability into a hardware design based on natural language instructions?
- How can we ensure the soundness of the GPT-generated HDL designs?
- Can GPT perform security verification?
- Is GPT capable of identifying security threats?
- Can GPT identify coding weaknesses in HDL?
- Can GPT fix the security threats and generate a mitigated design?
- How should prompts be structured to perform hardware security tasks?
- Can GPT handle large open-source designs?
Because vulnerability databases are scarce in the hardware security domain, the researchers investigate GPT-3.5’s potential to embed hardware vulnerabilities and CWEs into designs.
In a study, security researchers assessed GPT-3.5 and GPT-4’s abilities to detect hardware Trojans in AES designs using different tests. GPT-3.5 showed limited knowledge and performance, while GPT-4 outperformed it with impressive accuracy.
GPT-4’s ability highlights its potential as a valuable tool for hardware security assessments, offering advantages over traditional machine learning approaches.
It addresses design dependencies and offers a more holistic analysis of hardware designs, improving Trojan detection.
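As a rough illustration of such an assessment, a Trojan-detection query might look like the sketch below. The model name, prompt, and AES-round snippet (with its rare-value trigger) are assumptions for illustration, not the researchers' exact test setup.

```python
# Hypothetical sketch: asking GPT-4 whether an AES round module contains Trojan-like logic.
# The trigger/payload shown is a textbook-style example, not taken from the study.
from openai import OpenAI

client = OpenAI()

suspect_rtl = """
module aes_round(input [127:0] state, input [127:0] round_key, output [127:0] out);
  wire trigger = (state[15:0] == 16'hDEAD);            // rare-value trigger
  assign out = trigger ? state : (state ^ round_key);  // skips key mixing when triggered
endmodule
"""

prompt = (
    "Does the following Verilog contain a hardware Trojan? "
    "Identify any trigger and payload logic and explain its effect:\n" + suspect_rtl
)

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(reply.choices[0].message.content)
```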