Hackers Abuse Python eval/exec Calls to Run Malicious Code

Threat actors are increasingly abusing Python's built-in eval() and exec() functions to conceal and execute malicious payloads within innocent-looking packages on PyPI.

Security researchers warn that while static analysis libraries such as hexora can detect many obfuscation techniques, attackers continue innovating ways to slip harmful code past simple scanners.

Supply chain attacks targeting Python packages have surged, with over 100 incidents reported on PyPI in the last five years.

In a typical scenario, an adversary uploads a trojanized library that retains its advertised functionality but injects hidden malicious routines.

The simplest malicious snippet looks innocuous:

# Naïve abuse of exec and eval
exec("print('Hello from malicious code!')")
result = eval("2 + 2")

When the package is imported by users, these routines can steal credentials, spawn backdoors, or download additional malware—all without alerting end users.

Most security tools flag any direct use of Python’s evaluation or execution functions for manual review. To bypass these heuristics, attackers employ a variety of obfuscation tricks:

By replacing Latin letters with visually similar Unicode characters, attackers can make the function name unrecognizable to basic scanners.

This simple substitution defeats naïve pattern matching without altering the underlying behavior.
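
A minimal, benign sketch of the idea, assuming CPython's NFKC identifier normalization (which maps certain mathematical-alphabet lookalike letters back to their ASCII forms); the specific characters and the harmless arithmetic payload are illustrative only:

# Benign demonstration: the identifier below is spelled with MATHEMATICAL BOLD
# letters, so a literal text search for "eval(" finds nothing, yet CPython's
# NFKC identifier normalization resolves it to the real eval built-in.
result = 𝐞𝐯𝐚𝐥("2 + 2")
print(result)  # -> 4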

Importing or aliasing the builtins module obfuscates direct references. An attacker might assign the builtins namespace to a short variable name and invoke the execution function through that alias, bypassing searches for explicit module names.
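
A benign sketch of that aliasing trick (the payload here is a harmless print statement):

import builtins as _b  # short alias; later references never spell out "builtins"

_b.exec("print('payload would run here')")  # execution via the alias
print(_b.eval("40 + 2"))                    # evaluation via the alias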

Using dynamic attribute lookup routines, attackers assemble function names at runtime. They may slice, join, reverse, or concatenate string fragments to reconstruct the execution function’s name, making it invisible to static string-search heuristics.
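
For illustration, a harmless sketch in which the function name is rebuilt from a reversed fragment and resolved with getattr, so the literal string "eval" never appears in the source:

import builtins

name = "lave"[::-1]              # reversed fragment reconstructs "eval" at runtime
fn = getattr(builtins, name)     # dynamic attribute lookup, no literal "eval(" call
print(fn("6 * 7"))               # -> 42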

Rather than a straightforward import statement, threat actors often leverage Python's __import__() function or the importlib module.

They may also access the builtins namespace via the sys.modules cache or the globals() namespace mapping, preventing simple scanners from recognizing a typical import construct.
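
A benign sketch showing three of these routes to the builtins namespace, none of which uses a plain "import builtins" statement:

import importlib
import sys

b1 = __import__("builtins")                    # double-underscore import
b2 = importlib.import_module("built" + "ins")  # importlib with a split-up name
b3 = sys.modules["builtins"]                   # pulled from the module cache

b3.exec("print('reached builtins via sys.modules')")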

By compiling a code snippet into an executable code object and then instantiating it as a function, adversaries entirely sidestep direct calls to evaluation or execution functions.

This technique can be further obfuscated by hiding the compile invocation itself behind dynamic string operations.
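
A harmless sketch of that pattern, wrapping a compiled code object in types.FunctionType so neither eval() nor exec() is called directly; the payload string is illustrative only:

import types

src = "print('payload would run here')"
code_obj = compile(src, "<payload>", "exec")   # code object; nothing runs yet
fn = types.FunctionType(code_obj, globals())   # instantiate it as a callable function
fn()                                           # calling fn() executes the payload

# The compile reference itself can also be hidden behind string tricks, e.g.:
# hidden_compile = getattr(__import__("builtins"), "elipmoc"[::-1])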

Beyond function-call obfuscation, malicious actors often encode or compress their payloads (using Base64, ROT13, zlib, or Python's marshal module), then decode and execute them at runtime.

This layered concealment makes detection via simple abstract syntax tree parsing or pattern matching incomplete.
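
As a benign, self-contained sketch of such layering (in a real attack the encoded blob would ship hard-coded in the package), the harmless payload below is ROT13'd, zlib-compressed, and Base64-encoded, then unwrapped only at runtime:

import base64
import codecs
import zlib

payload = "print('decoded payload would run here')"
blob = base64.b64encode(zlib.compress(codecs.encode(payload, "rot13").encode()))

decoded = codecs.decode(zlib.decompress(base64.b64decode(blob)).decode(), "rot13")
exec(decoded)  # the scanner-visible source never contains the plain payload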

Static analysis tools like hexora evaluate string operations and flag actual execution calls with high confidence.

However, static methods alone may miss novel obfuscation techniques, while dynamic sandboxing can be resource-intensive and risk side effects.

Machine learning and large language model-based detectors offer additional coverage but suffer from false positives, false negatives, and scaling costs.

Security experts recommend a defense-in-depth approach: combine robust static analyzers, lightweight dynamic instrumentation, targeted machine learning models, and human review.

Only by layering detection techniques and maintaining proactive monitoring of PyPI can organizations stay ahead of increasingly sophisticated supply chain threats.
