Linux Kernel Zero-Day SMB Vulnerability Discovered via ChatGPT

A security researcher has discovered a zero-day vulnerability (CVE-2025-37899) in the Linux kernel’s SMB server implementation using OpenAI’s o3 language model.

The vulnerability, a use-after-free bug in the SMB ‘logoff’ command handler, could potentially allow remote attackers to execute arbitrary code with kernel privileges.

This discovery marks a significant advancement in AI-assisted vulnerability research, demonstrating how large language models can effectively identify complex memory safety issues that require understanding of concurrent execution paths.

The vulnerability exists in ksmbd, “a linux kernel server which implements SMB3 protocol in kernel space for sharing files over network”.

Specifically, the flaw occurs in the session logoff handler where sess->user is freed without proper synchronization between concurrent connections that might be using the same session object.

The vulnerability exploits a race condition where one worker thread processes an SMB2 LOGOFF command and frees the user structure, while another thread on a different connection continues using that now-freed memory.

This occurs because when a second transport binds to an existing session (in SMB 3.0 or later), a worker can receive a normal request that stores a pointer to the existing session but doesn’t take any reference on sess->user.

What makes this vulnerability particularly dangerous is that the logoff handler only waits for running requests on its own connection (ksmbd_conn_wait_idle(conn)) but doesn’t wait for other connections that might be using the same session.

This allows for classic use-after-free exploitation that could lead to kernel memory corruption and potentially arbitrary code execution with kernel privileges.

AI-Powered Vulnerability Detection

The researcher tested OpenAI’s o3 model on approximately 12,000 lines of code (~100k input tokens) and ran the experiment 100 times.

While the model found a previously known vulnerability (CVE-2025-37778) in only one run, it successfully identified the new zero-day vulnerability in other outputs.

What’s remarkable is the quality of the AI-generated vulnerability report, which not only identified the issue but provided a comprehensive explanation of the exploitation path.

The researcher noted that o3’s output “feels like a human-written bug report, condensed to just present the findings”.

The AI even identified that a previous fix approach (simply setting sess->user = NULL after freeing) would be insufficient due to session binding possibilities.

This discovery represents a significant milestone in the application of large language models to security research.

The researcher concluded that LLMs have now reached a capability level where they are “far more similar to a human code auditor than they are to symbolic execution, abstract interpretation or fuzzing”.

While the false positive rate remains a challenge (with a signal-to-noise ratio of ~1:50 in this experiment), the researcher emphasized that o3’s performance marks a turning point where AI assistance in vulnerability research becomes genuinely worthwhile.

According to the report, security professionals may now need to integrate these tools into their workflows, as the AI demonstrated the ability not only to find bugs but also to propose more comprehensive fixes than human researchers in some cases.

This breakthrough suggests that collaborative human-AI approaches could significantly enhance vulnerability detection capabilities, potentially securing critical infrastructure like the Linux kernel more effectively against sophisticated attacks.
