AI Libraries Under Attack: How Remote Code Execution Vulnerabilities Threaten Machine Learning Models
Imagine a world where downloading a popular AI model could secretly install malware on your system. It sounds like a sci-fi nightmare, but it isn't fiction: this exact scenario is a real vulnerability lurking in several widely used AI/ML libraries.
The Hidden Danger in Open-Source Libraries
Researchers at Palo Alto Networks have uncovered a critical issue in three prominent open-source AI/ML Python libraries: NVIDIA's NeMo, Salesforce's Uni2TS, and Apple's FlexTok. These libraries, hosted on GitHub, are integral to developing and deploying advanced AI models. However, they contain a flaw that allows for remote code execution (RCE) when loading a model file with malicious metadata. This means an attacker could embed harmful code within a seemingly innocent model, which would execute automatically upon loading.
The Libraries in Question
Let's take a closer look at these libraries:
- NeMo: A PyTorch-based framework by NVIDIA, designed for creating diverse AI/ML models and complex systems. It's widely used in research and has over 700 models on HuggingFace, some with millions of downloads.
- Uni2TS: A PyTorch library by Salesforce that powers its Moirai model for time series forecasting. This library has been downloaded hundreds of thousands of times.
- FlexTok: Developed by Apple and EPFL VILAB, this Python framework enables AI models to process images efficiently. While less popular, it still poses a significant risk.
The Root of the Problem
The vulnerability stems from how these libraries handle model metadata. Each of them passes that metadata to Hydra, a third-party configuration framework, which instantiates Python classes named in the metadata, but none of them sanitizes the data first. Since the metadata travels with the model file, an attacker who controls it can name arbitrary code for the library to execute.
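To see why unsanitized metadata is so dangerous, here is a minimal, hypothetical sketch of the target-based instantiation pattern that Hydra's instantiate mechanism provides. The `instantiate` function below is a stdlib-only stand-in written for illustration, not the libraries' actual code; it simply resolves a `_target_` string to a callable and invokes it, which is the core of the pattern.

```python
import importlib

def instantiate(config: dict):
    """Resolve config["_target_"] into a Python callable and invoke it
    with the remaining keys as keyword arguments. Note the complete
    absence of any check on what the target string names."""
    module_path, _, attr = config["_target_"].rpartition(".")
    target = getattr(importlib.import_module(module_path), attr)
    kwargs = {k: v for k, v in config.items() if k != "_target_"}
    return target(**kwargs)

# Benign metadata builds the object the model author intended...
benign = {"_target_": "datetime.date", "year": 2025, "month": 4, "day": 1}
print(instantiate(benign))  # 2025-04-01

# ...but the same code path happily resolves any importable callable.
# Metadata naming "os.system" with a shell command, for example, would
# run that command the moment the model file is loaded.
hijacked = {"_target_": "os.getcwd"}  # harmless stand-in for something nastier
print(type(instantiate(hijacked)))   # <class 'str'> -- arbitrary call succeeded
```

The second call is the whole attack in miniature: nothing in the loading path distinguishes a class the model legitimately needs from any other importable function on the system.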
A Ticking Time Bomb?
As of December 2025, no malicious exploits of these vulnerabilities have been detected in the wild. However, the potential for harm is immense. Attackers could easily modify popular models, adding malicious code under the guise of improvements or optimizations. Users, trusting these models, would unknowingly execute the harmful code.
The Race to Patch
Palo Alto Networks responsibly disclosed these vulnerabilities to the affected vendors in April 2025. Here's how they responded:
- NVIDIA: Released a fix in NeMo version 2.3.2, addressing CVE-2025-23304.
- Salesforce: Deployed a patch on July 31, 2025, fixing CVE-2026-22584.
- Apple & EPFL VILAB: Updated FlexTok to use safer configuration parsing and added an allowlist for classes passed to Hydra.
The Bigger Picture
This issue highlights a broader problem in the AI/ML ecosystem. As models become more complex, the libraries supporting them must prioritize security. The shift from pickle-based formats to safer alternatives like safetensors is a step in the right direction, but it's not enough. Libraries must implement robust validation and sanitization mechanisms to prevent code injection.
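One such mechanism is the allowlist approach FlexTok adopted in its fix: refuse to instantiate any target that the library does not explicitly trust. The sketch below is a hypothetical illustration of that idea, with invented names (`ALLOWED_TARGETS`, `safe_instantiate`), not FlexTok's actual patch.

```python
import importlib

# Hypothetical allowlist: only targets the library itself ships or trusts.
ALLOWED_TARGETS = {
    "datetime.date",
    "collections.OrderedDict",
}

def safe_instantiate(config: dict):
    """Reject any _target_ that is not explicitly allowlisted before
    resolving it to a callable."""
    target_path = config["_target_"]
    if target_path not in ALLOWED_TARGETS:
        raise ValueError(f"untrusted target in model metadata: {target_path!r}")
    module_path, _, attr = target_path.rpartition(".")
    target = getattr(importlib.import_module(module_path), attr)
    kwargs = {k: v for k, v in config.items() if k != "_target_"}
    return target(**kwargs)

# Allowlisted target: instantiated as before.
print(safe_instantiate({"_target_": "datetime.date", "year": 2025, "month": 4, "day": 1}))

# Anything else is rejected before any code runs.
try:
    safe_instantiate({"_target_": "os.system", "command": "echo pwned"})
except ValueError as err:
    print(err)
```

The design trade-off is flexibility: an allowlist must be maintained as the library grows, but it turns "any importable callable" into a small, auditable set.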
What Can You Do?
As a developer or user of AI/ML models, here are some best practices:
- Verify Sources: Only download models from trusted sources.
- Stay Updated: Keep your libraries and models up to date with the latest security patches.
- Monitor for Anomalies: Use tools like Prisma AIRS to detect and mitigate potential threats.
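For the "stay updated" step, a quick sketch of how you might audit installed versions against the patched minimums. The NeMo 2.3.2 threshold comes from the disclosure above; the package name `nemo_toolkit` and the naive version parsing are assumptions for illustration.

```python
from importlib.metadata import version, PackageNotFoundError

# Patched minimum from the disclosure: NeMo 2.3.2.
# "nemo_toolkit" is assumed here as the installed package name.
PATCHED_MINIMUMS = {"nemo_toolkit": (2, 3, 2)}

def parse_version(v: str) -> tuple:
    # Naive parse, adequate for plain "X.Y.Z" release strings.
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())

def audit(package: str) -> str:
    try:
        installed = version(package)
    except PackageNotFoundError:
        return f"{package}: not installed"
    if parse_version(installed) >= PATCHED_MINIMUMS[package]:
        return f"{package} {installed}: at or above the patched version"
    return f"{package} {installed}: below the patched version, upgrade now"

for pkg in PATCHED_MINIMUMS:
    print(audit(pkg))
```

A production setup would lean on a real dependency scanner rather than hand-rolled checks, but the principle is the same: know which of your installed libraries predate the fixes.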
A Call to Action
This discovery raises important questions: Are we sacrificing security for innovation in AI? How can we balance the need for flexibility in model development with the imperative to protect users? Let's start a conversation. Share your thoughts in the comments below. Do you think the AI community is doing enough to address these risks? What additional measures should be taken to secure AI/ML models?
Stay informed, stay secure, and let's build a safer AI future together.