The New Stack · March 28, 2026

Securing AI Agents: Nvidia's NemoClaw and the Challenges of Agentic Computing

This article explores the security challenges inherent in the rapid adoption of AI agentic computing, exemplified by Nvidia's NemoClaw. It discusses the three architectural layers proposed to secure agents — policy enforcement, privacy routing, and sandboxed execution — and argues that they fall short of solving the underlying security problems. The piece emphasizes a shift toward needing experienced engineers to manage the complex risks of autonomous AI systems.


The Rise of Agentic Computing and its Security Implications

The adoption of large language model (LLM) based agents is accelerating rapidly, leading to a 10,000-fold increase in compute demand per user over two years. This explosion in autonomous AI behavior, while promising, introduces significant security and operational challenges. The article uses OpenClaw as an example of an unrestrained agent platform, prompting companies like Nvidia to develop guardrails such as NemoClaw. However, simply adding layers on top of inherently open systems may not address the fundamental security vulnerabilities of self-evolving, autonomous agents.

Nvidia's NemoClaw Security Architecture

Nvidia's NemoClaw attempts to provide security for agentic systems through three architectural components:

  • Policy Enforcement: This layer defines boundaries for agent actions, such as restricting filesystem or network access. The idea is that when an action is blocked, the agent reasons about the block and proposes a policy update for human approval. Its effectiveness decreases as agents grow more capable, however: policies are often updated only after a breach, and frequent human intervention undermines the autonomy the agent is supposed to provide.
  • Privacy Routing: This component manages where data is processed (locally or in the cloud) and which models are used, based on cost and privacy policies. It helps control expenses and protect intellectual property but doesn't prevent agents from exfiltrating sensitive data if prompted by a third party.
  • Sandboxed Execution: Essential for isolating agent processes so that a malicious or compromised agent cannot affect others. The sandbox also offers a lower-risk testing ground for long-running or complex tasks, since network traffic can be monitored in a controlled environment.
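The three layers above can be sketched as a single gatekeeping flow: a policy check on each proposed action, a routing decision based on data sensitivity, and execution only inside the permitted boundary. This is a minimal illustrative sketch — all class and function names here are hypothetical, not the actual NemoClaw API.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Policy enforcement: an allow-list of action types for one agent."""
    allowed_actions: set = field(default_factory=set)

    def permits(self, action: str) -> bool:
        return action in self.allowed_actions

def route(data_sensitivity: str) -> str:
    """Privacy routing (hypothetical rule): sensitive data is processed
    by a local model; everything else may go to a cheaper cloud model."""
    return "local-model" if data_sensitivity == "sensitive" else "cloud-model"

def run_guarded(agent_fn, policy: Policy, action: str) -> dict:
    """Sandboxed execution: only policy-permitted actions run. A blocked
    action is not executed; instead the agent surfaces a proposed policy
    update for human approval, as the article describes."""
    if not policy.permits(action):
        return {"status": "blocked", "proposed_update": f"allow:{action}"}
    return {"status": "ok", "result": agent_fn()}

if __name__ == "__main__":
    policy = Policy(allowed_actions={"read_file"})
    print(run_guarded(lambda: "file contents", policy, "read_file"))
    # A network call is outside the allow-list, so it is blocked and
    # queued for human review rather than executed.
    print(run_guarded(lambda: None, policy, "network_call"))
    print(route("sensitive"))
```

Note that this sketch also exhibits the limitation the article raises: every blocked action stalls the agent until a human approves the proposed update.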

Policy Enforcement Limitations

Policy enforcement is inherently inefficient for self-evolving agents: constantly stopping an agent to approve its actions, or relying on reactive policies written after a breach, diminishes autonomy and scales poorly as agent capabilities grow.

While these layers offer some protection, the article argues they don't solve the "real problem" — the difficulty of securely managing truly autonomous, self-evolving systems. The increasing complexity and autonomy of agents necessitate a shift from basic coding skills to experienced engineers capable of identifying pitfalls and managing complex risk profiles across the entire workflow.

AI agents · LLM security · NemoClaw · OpenClaw · agentic computing · security architecture · sandboxing · policy enforcement
