A deterministic policy layer that mediates between probabilistic AI systems and deterministic Unix infrastructure. Every command — whether from an AI agent, a Claude Code session, or a human operator — passes through a policy gate before execution. Every decision is audited. Every flag is explained.
Large language models are increasingly used to generate shell commands. They are good at it. They are also probabilistic. Unix execution, by contrast, is deterministic and irreversible — a single misplaced flag or path can permanently alter system state.
AIShell-Gate exists to close that gap. It sits between an AI agent's proposed actions and the operating system, evaluating every command against declared policy before a single byte reaches the kernel. Unsafe commands are denied with a reason. Safe commands are allowed — with a confirmation level appropriate to their risk. No shell is ever invoked.
The separation between the two programs is the central security property of the system. The executor has no policy logic; the policy engine has no ability to execute. Neither component can reach across that boundary.
Every other MCP tool gives an AI access to one capability. AIShell-Gate gives it access to the operating system — the thing that contains everything else. The policy engine is what makes that access safe.
Submit a goal and list of commands to the policy engine without executing anything. Returns per-action decision, confirm level, resolved binary path, risk score, and reason. The right first step before any execution.
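For orientation, a per-action decision record might look like the following sketch. The field names are inferred from the list above and are illustrative only; the actual schema may differ.

```python
import json

# Hypothetical per-action decision record from an evaluate_plan call.
# Field names mirror the description above; the real schema may differ.
decision = {
    "decision": "allow",               # allow or deny
    "confirm_level": "plan",           # none | plan | action | typed
    "resolved_path": "/usr/bin/rsync", # resolved binary path
    "risk_score": 47,                  # 0-100 scale assumed
    "reason": "rsync --delete crosses the plan-confirmation threshold",
}
print(json.dumps(decision, indent=2))
```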
Submit a goal and list of commands for live execution. Runs a pre-flight policy evaluation first. Plans containing high-risk or confirmation-required actions are blocked and reported — the operator adjusts policy or proceeds manually.
Place aishell-mcp.json in your project root:

{"exec_binary": "./aishell-gate-exec", "policy_binary": "./aishell-gate-policy", "preset": "ops_safe"}

Then register the server in .mcp.json:

{"mcpServers": {"aishell-gate": {"command": "python3", "args": ["./aishell-gate-mcp.py"]}}}

The evaluate_plan and execute_plan tools appear automatically. The AI calls them directly — no shell scripting, no manual pipe setup.

Receives a proposed shell command, normalizes it, evaluates it against a layered policy stack, computes a risk score, and emits a structured JSON decision. It never executes anything. Its only output is the decision record: allow or deny, confirmation level, matched rule and layer, validated argument array, risk score, blast radius, and reason.
Accepts a JSON action plan from an AI agent, submits each command to the policy engine as a child process, reads the JSON decision back over a pipe, collects human confirmation where the policy requires it, and calls execve() with the validated argument vector. Contains no policy logic of its own.
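The executor's loop can be sketched as follows. The policy invocation, JSON shapes, and field names here are stand-ins for illustration, and a real run would hand control to execve() rather than return.

```python
import json
import os
import subprocess
import sys

def gate_and_exec(policy_cmd, action, dry_run=False):
    """Sketch of the executor's pattern: run the policy engine as a
    child process, read its JSON decision back over a pipe, then call
    execve() with the validated argument vector.  The command shapes
    and field names here are illustrative stand-ins."""
    proc = subprocess.run(policy_cmd, input=json.dumps(action),
                          capture_output=True, text=True, check=True)
    decision = json.loads(proc.stdout)
    if decision["decision"] != "allow":
        raise PermissionError(decision["reason"])
    argv = decision["argv"]      # validated argument vector from the engine
    if dry_run:
        return argv
    os.execv(argv[0], argv)      # replaces this process; no shell is invoked

# Stand-in "policy engine": a child that prints a canned allow decision.
allow = json.dumps({"decision": "allow", "reason": "",
                    "argv": ["/bin/ls", "-l"]})
stand_in = [sys.executable, "-c", f"print('{allow}')"]
print(gate_and_exec(stand_in, {"cmd": "ls -l"}, dry_run=True))
```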
Exposes AIShell-Gate to Claude Code, Cursor, and any MCP-compatible AI coding environment via stdio transport. Provides evaluate_plan (policy-gated inspection, never executes) and execute_plan (live execution with pre-flight check). Python 3, no additional dependencies. Configured via aishell-mcp.json.
AIShell-Gate is agent-agnostic. The same policy engine, audit chain, and confirmation model apply equally to Claude Code, Cursor, local inference models via pipeline script, and remote AI agents over SSH. The gate does not care what produced the plan — only what the plan contains.
Policy is a stack of three layers evaluated in order: base (organizational floor), project (workflow-specific rules), and user (personal preferences). A deny at any layer is final. The built-in presets — ops_safe, dev_sandbox, read_only, danger_zone — give teams a working starting posture without manually assembling policy files.
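The layering rule can be illustrated with a minimal sketch. The rule format and the default-deny fallback here are assumptions for illustration, not the real policy file format.

```python
def evaluate(layers, command):
    """Minimal sketch of the layering rule: layers are checked in order
    (base, project, user) and a deny at any layer is final.  Rule
    format and default posture are assumptions, not the real format."""
    verdict = ("deny", "no rule matched")   # assumed default posture
    for name, rules in layers:
        for prefix, decision in rules:
            if command.startswith(prefix):
                if decision == "deny":
                    return ("deny", f"denied at {name} layer")  # deny is final
                verdict = ("allow", f"allowed at {name} layer")
    return verdict

layers = [
    ("base",    [("ls", "allow"), ("rm -rf /", "deny")]),
    ("project", [("git ", "allow")]),
    ("user",    [("git push", "deny")]),  # a user-layer deny still wins
]
print(evaluate(layers, "git status"))           # allowed at project layer
print(evaluate(layers, "git push origin main")) # denied at user layer
```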
Every ALLOW decision carries a confirmation level: none (proceed immediately), plan (show the plan before running), action (explicit per-command approval), or typed (operator must type a code derived from the exact command). Risk scoring escalates levels automatically — commands scoring above 40, 70, or 90 are raised to plan, action, or typed regardless of what the matching rule says. Levels can only be raised, never lowered.
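The escalation rule is mechanical enough to sketch directly. The thresholds come from the paragraph above and the level names match the confirmation model; everything else is illustrative.

```python
LEVELS = ["none", "plan", "action", "typed"]

def escalate(rule_level, risk_score):
    """Risk-based escalation as described above: scores above 40, 70,
    and 90 force at least plan, action, and typed respectively.
    Levels are only ever raised, never lowered."""
    floor = "none"
    if risk_score > 90:
        floor = "typed"
    elif risk_score > 70:
        floor = "action"
    elif risk_score > 40:
        floor = "plan"
    # Take whichever of the rule's level and the risk floor is higher.
    return max(rule_level, floor, key=LEVELS.index)

print(escalate("none", 55))   # risk forces plan
print(escalate("typed", 10))  # never lowered: stays typed
```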
Every evaluation can be written to a tamper-evident JSON Lines audit log. Each entry carries a sequence number, session identifier, full decision context, and an SHA-256 hash linking it to the preceding entry. HMAC-SHA256 mode restricts verification to key-holders. Concurrent sessions write safely via advisory file locking.
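The hash-linking scheme can be illustrated in a few lines. Field names and the genesis value are assumptions; the real log also carries sequence numbers, session identifiers, full decision context, and optional HMAC keying.

```python
import hashlib
import json

def append_entry(log, entry, prev_hash):
    """Sketch of a tamper-evident JSON Lines chain: each record stores
    the SHA-256 of the preceding line, so any later edit to an earlier
    entry breaks verification.  Field names are illustrative."""
    record = dict(entry, prev=prev_hash)
    line = json.dumps(record, sort_keys=True)
    log.append(line)
    return hashlib.sha256(line.encode()).hexdigest()

def verify(log):
    prev = "0" * 64                  # assumed genesis value
    for line in log:
        record = json.loads(line)
        if record["prev"] != prev:   # chain link broken
            return False
        prev = hashlib.sha256(line.encode()).hexdigest()
    return True

log = []
h = append_entry(log, {"seq": 1, "decision": "allow"}, "0" * 64)
h = append_entry(log, {"seq": 2, "decision": "deny"}, h)
print(verify(log))                   # True
# Tampering with an earlier entry is detectable:
log[0] = log[0].replace("allow", "deny")
print(verify(log))                   # False
```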
The audit chain is a compliance evidence artifact. Every command an AI agent proposes — whether allowed, denied, or confirmed — is recorded with its full decision context: the matched policy rule, the layer that matched, the risk score, the confirmation level required, and the timestamp. The chain is cryptographically linked so any post-hoc modification is detectable. For organizations subject to HIPAA, PCI-DSS, CMMC, or FedRAMP audit requirements, this record demonstrates that AI-generated actions were evaluated against declared policy before execution — not after the fact.
The executor calls execve() directly because you know what shells do to arguments. The result is not a clever new thing — it is the application of well-understood primitives to a new problem. That is exactly the right kind of design.
AIShell-Gate is aligned with the NIST AI Risk Management Framework (AI RMF 1.0) and AI 600-1. The two-binary reference monitor architecture reflects the access control framework defined in NIST SP 800-162. For organizations operating under HIPAA, PCI-DSS, CMMC, or FedRAMP requirements, AIShell-Gate provides a documented, auditable execution boundary between AI agents and production infrastructure — a control that sandbox-based approaches cannot fully replicate when AI agents must operate on real systems rather than isolated environments.
Sandboxes protect by isolation. AIShell-Gate protects by policy. When an AI agent must touch a real database, a real deployment pipeline, or a real server — because that is the point of the integration — isolation is not an option. Policy, confirmation, and a tamper-evident record of every decision are what remain. That is what AIShell-Gate provides.
The following documents are included with this release. All are available in the same directory.
— ai-agent account, directory and permission setup, and operator confirmation relay configuration.
— aishell-mcp.json, .mcp.json, tool descriptions, confirmation model, and the three integration paths (MCP, direct pipeline, interactive). See §20 of the Getting Started Guide.

AIShell-Gate 1.0 beta is available by request. The policy engine, execution gateway, and operator confirmation relay are functionally complete. Documentation is available from the links above. The flag catalog covers 3,489 individual flag assessments across 346 Unix commands. The package includes the MCP server for Claude Code and Cursor integration. Both binaries support interactive human operator workflows in addition to AI agent pipelines.
The beta testing package contains both compiled binaries, all documentation, and the beta README. Approved requesters will be contacted directly.
Beta scope: the beta is intended for local and single-session use by technically experienced Unix engineers, DevOps teams, and security engineers. Testing should be performed in controlled, non-production environments.
The Beta Tester Guide walks you through the system from first binary verification to a full multi-action AI pipeline — with copy-paste commands at every step and expected output so you know if it is working. It covers the policy engine in interactive mode, single command evaluation, preset comparison, executor plans with --dry-run, audit logging, custom policy files, jail root containment, and enterprise features.
Work through it in order. Stop at any stage where something breaks and report what you saw — a tester who reaches Stage 4 and hits an error is giving us something more useful than one who skips to Stage 9. The guide includes structured feedback questions after every stage.
Once you have worked through the tester guide, submit your structured feedback using the link below. All fields are optional except your name. Your responses go directly to us and will shape the 1.0 release.
→ Submit Feedback (beta tester feedback form)

Prefer email? Send your completed tester guide responses to info@aishell.org with subject Beta Feedback — [your name]. For security findings, use security@aishell.org privately.
AIShell-Gate is available through a channel partner program for MSPs, MSSPs, VARs, and system integrators who serve clients with AI governance, compliance, or Unix security requirements.
Two partner tiers — Authorized Reseller and Certified Partner — reflect different levels of support commitment and carry different economics. All partnerships are non-exclusive. AIShell Labs retains the right to sell directly.
If you have evaluated AIShell-Gate and think it belongs in your client conversations, email partners@aishell.org. We will send you the partner program document and schedule a conversation with the founder.