Secure the Future of AI Agents

Make your AI application more secure. Guard your system against data exfiltration and system attacks by securing your Model Context Protocol (MCP) servers.

Zero-Trust Architecture
Least-Privilege Execution
Sandboxed Environment
Real Incidents

When AI Agents Go Rogue

What happens if your agentic AI software has unrestricted privileges on your machine?

Critical
Replit Production Database Deletion
Jul 2025

An AI agent with unrestricted database access deleted the entire production database because it "panicked".

Impact: Data Loss, Service Outage
High
Local Files Deleted
Jul 2025

Copilot botched a small detail in an engineer's command, executed it in CMD, and deleted the user's entire "D:/" drive along with all their private photos and videos.

Impact: Data Loss
Critical
Gemini AI CLI Hijack
Jun 2025

A vulnerability in the Gemini AI CLI allowed attackers to silently execute malicious commands on the user's machine.

Impact: System Compromise, Supply Chain Risk

Don't let this happen to you.

GuardiAgent provides enterprise-grade sandboxing to isolate and protect your systems from rogue AI agents and compromised MCP servers.

Our Vision: Enterprise Security for AI Applications

Comprehensive protection that does not slow down your application or compromise your system.

MCP Server Sandboxing
Isolate Model Context Protocol servers in secure containers with restricted permissions and resource limits.
Zero-Trust Architecture
Every MCP server runs with minimal privileges. No implicit trust, only explicit permissions.
User-Consent First
The user decides what the software is allowed to access and do. No surprising network calls to dubious websites.
Ease-of-Access
Support engineers and maintainers of agentic AI and MCP software by automatically bootstrapping the security manifest.
Verified MCP/Agent Marketplace
Coming soon: Download trusted, verified, and certified MCP servers and AI agent applications from our curated app store.
Easy Observability
Coming soon: Know what servers are running on your machine and have a trusted and secure execution environment for your use-cases.
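To make the zero-trust, user-consent model concrete, here is a minimal, hypothetical sketch of a deny-by-default permission check. The `Manifest` class and its fields are illustrative assumptions for this example, not GuardiAgent's actual manifest format:

```python
import os
from dataclasses import dataclass, field

@dataclass
class Manifest:
    """Hypothetical security manifest: anything not listed here is denied."""
    allowed_paths: list = field(default_factory=list)
    allowed_hosts: list = field(default_factory=list)

def may_read(manifest: Manifest, path: str) -> bool:
    """Allow file access only beneath an explicitly granted directory."""
    real = os.path.realpath(path)
    return any(
        os.path.commonpath([real, os.path.realpath(root)]) == os.path.realpath(root)
        for root in manifest.allowed_paths
    )

def may_connect(manifest: Manifest, host: str) -> bool:
    """Allow network access only to explicitly granted hosts."""
    return host in manifest.allowed_hosts

manifest = Manifest(allowed_paths=["/tmp/project"],
                    allowed_hosts=["api.example.com"])
print(may_read(manifest, "/tmp/project/notes.txt"))  # True: inside granted dir
print(may_read(manifest, "/etc/passwd"))             # False: denied by default
print(may_connect(manifest, "evil.example.net"))     # False: host not granted
```

The key design choice is the default: the manifest enumerates what is permitted, and everything else is refused, so a compromised server gains nothing the user never consented to.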
For Developers

Build Secure AI Agents in Minutes

Simple APIs, compatible with current agent SDKs. Integrate the GuardiAgent security sandbox with little to no changes to your code.

Quick Start Integration
Wrap your MCP server with the GuardiAgent sandbox in seconds. Seamlessly integrates with the OpenAI Agents SDK.
import asyncio
import os

from agents import Agent, Runner
from mcp_sandbox_openai_sdk import (
  FSAccess,
  SandboxedMCPStdio
)

async def main():
  # Load your GuardiAgent security manifest here.
  manifest = ...

  async with SandboxedMCPStdio(
    manifest=manifest,
    runtime_args=[os.path.abspath("./")],
    runtime_permissions=[
      # Grant filesystem access to the current directory only;
      # everything else is denied by default.
      FSAccess(os.path.abspath("./"))
    ],
  ) as server:
    agent = Agent(
      name="MCP Sandbox Test",
      model="gpt-5-mini",
      mcp_servers=[server],
    )
    result = await Runner.run(agent, "List the files in this directory.")
    print(result.final_output)

asyncio.run(main())
Architecture Overview
The guardian that secures the smallest shared component in agentic AI: the MCP server.
Sandbox Architecture
For Researchers

Evidence-Based AI Security

Our research demonstrates the critical need for MCP server isolation and the effectiveness of our sandboxing approach. Stay tuned for our latest findings.

Key Research Findings

Access Control Policy
We introduce an access control policy mechanism for MCP servers, inspired by the Android permission model.
Effective Policy Generation
Developers confirmed that our method of automatically generating permission manifests produces highly accurate results.
Security and Efficiency
Our sandbox effectively mitigates malicious behaviour in MCP servers, such as data exfiltration and external resource attacks.
Performance Impact Analysis - Sandboxed vs Native Execution
As shown, the sandbox is on average only about 0.4 ms slower than native execution

Our evaluation shows that the GuardiAgent security sandbox introduces negligible performance overhead in practice.

The diagram compares the runtimes of the four most prevalent MCP server operations, executed with and without the sandbox, in two environments: macOS and Debian. Averaged across all operations, the sandbox adds 0.6 ms on macOS and 0.29 ms on Debian, both essentially negligible.
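The measurement methodology can be sketched with a simple micro-benchmark: time the same operation executed directly and through a wrapping layer, then report the per-call difference in milliseconds. This is an illustrative harness with stand-in operations, not our actual evaluation code:

```python
import time

def measure(fn, runs=1000):
    """Return the mean wall-clock time of fn over several runs, in ms."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1000.0

def native_op():
    # Stand-in for a native MCP server operation.
    return sum(range(1000))

def sandboxed_op():
    # Stand-in for the same operation behind a sandboxing layer:
    # one extra indirection models the per-call mediation cost.
    return native_op()

native_ms = measure(native_op)
sandboxed_ms = measure(sandboxed_op)
print(f"overhead: {sandboxed_ms - native_ms:.4f} ms per operation")
```

Averaging over many runs and using `time.perf_counter` (a monotonic, high-resolution clock) keeps the sub-millisecond differences measurable above timer noise.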

Stay Informed
Get the latest updates on AI agent security, research findings, and software releases.


By subscribing, you agree to receive emails from GuardiAgent. You can unsubscribe at any time. We respect your privacy.