A map of the ways AIShell-Gate can be used. Read this before the other guides; it tells you which one to read.
Copyright © 2026 AIShell Labs LLC Winston-Salem NC USA. All Rights Reserved. Use of this software requires a valid license. — www.aishellgate.com · www.aishell.org
AIShell-Gate is a policy-gated execution system that serves several distinct audiences: AI coding environments, operators running their own commands, CI pipelines, and remote AI agents over SSH. The same binaries serve all of them, which is the architectural virtue — one policy engine, one audit chain, one mental model — but it means a new user looking at the doc set can find themselves staring at eight configuration paths and wondering which one applies to them.
This guide sorts the paths into three levels. Pick your level, follow its pointer, skip the rest.
| Level | You are | Start here |
|---|---|---|
| Level 1 | Using Claude Code or Cursor | MCP integration |
| Level 2 | Running commands yourself, or driving from a script or local AI | Direct invocation |
| Level 3 | Deploying to a production host, sharing a host among agents, or gating a remote AI over SSH | Remote deployment |
## Level 1: MCP integration

You use Claude Code or Cursor. You want policy-gated execution without writing shell scripts. Ten minutes to a working setup.
AIShell-Gate ships with an MCP server that exposes evaluate_plan and execute_plan as tools to Claude Code, Cursor, and any MCP-compatible AI coding environment. Your editor's AI discovers these tools automatically on restart. From then on, whenever the AI wants to run commands, it builds a JSON plan and submits it through the tools. Every command passes through policy. Every decision is audited.
Setup is two short steps:

- Create an aishell-mcp.json pointing at the three binaries with absolute paths.
- Add an aishell-gate entry to your editor's .mcp.json.

No root access. No SSH configuration. No Unix accounts to create. Nothing runs as a daemon.
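As a sketch only (the server binary name and install path below are assumptions, not documented values), an editor-side entry follows the usual .mcp.json shape:

```json
{
  "mcpServers": {
    "aishell-gate": {
      "command": "/absolute/path/to/aishell-gate-mcp-server",
      "args": []
    }
  }
}
```

After a restart, the editor's AI should list evaluate_plan and execute_plan among its available tools.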
## Level 2: Direct invocation

You run commands yourself, write scripts that call the gate, pipe JSON plans to it, or drive it from a local AI model. No SSH, no forced commands, no remote anything.
Direct invocation means running the AIShell-Gate binaries yourself on your own machine. The same binaries serve several distinct use cases:
| Use case | What you run | What it does |
|---|---|---|
| Educational assessment | ./aishell-gate-policy | Interactive prompt. Type any command; see the policy decision, risk score, and documented flag reasoning. Never executes. Training and learning tool. |
| Interactive execution | ./aishell-gate | Interactive prompt that also runs approved commands through execve(). Every command you type is evaluated, confirmed if required, and logged. Disciplined workflow for human operators. |
| JSON plan submission | echo '{...}' | ./aishell-gate | Pipe a structured plan in; commands are evaluated and executed in sequence. The core programmatic interface. |
| Single-command policy check | echo "cmd" | ./aishell-gate-policy --json | Get a machine-readable policy decision for a single command. No executor involved. Integration hook for CI pipelines, pre-commit hooks, regression test suites. |
| Local AI pipeline | Local model generates JSON, pipes to aishell-gate | Your local inference model (ollama, llama.cpp, any OpenAI-compatible endpoint) writes plans; the gate evaluates and executes them. The AI pipeline addendum documents this pattern. |
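The plan schema itself is documented elsewhere; purely for illustration, and with field names that are assumptions rather than the documented schema, a minimal two-command plan might look like:

```json
{
  "plan": [
    { "cmd": "git status" },
    { "cmd": "git log --oneline -5" }
  ]
}
```

Saved as plan.json, it would be submitted as cat plan.json | ./aishell-gate, with each command evaluated in order and confirmation gates applied where policy requires them.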
Every row above uses the same policy engine, the same confirmation gates, and the same audit chain. The only thing that varies is who produces the command (a human, a script, a local model, a hand-written JSON file) and how they deliver it (interactive prompt, pipe, heredoc).
If you are not sure which use case fits, start with the interactive policy engine. It teaches you how the system evaluates commands without running anything.
aishell-gate-policy's interactive mode is, by itself, a product worth using. For a junior operator learning Unix, it is a teaching tool that explains every flag in plain English at the moment the command is typed. For a senior operator in a regulated environment, it is a disciplined workflow with policy review and audit trail on every command — the same compliance properties the AI use case provides, applied to human activity. Both audiences are real. Neither requires any AI to be involved.
## Level 3: Remote deployment

An AI agent on one machine sends plans over SSH to a gate on another machine. Or multiple AI agents share a host. Or you need a human operator to confirm sensitive actions from a different terminal than the AI's.
Remote deployment uses SSH with a forced command. An AI agent has an SSH key; that key's authorized_keys entry on the target host specifies aishell-gate as the forced command. Whatever the AI's SSH client requests, aishell-gate runs — reading the AI's JSON plan from stdin, evaluating against policy, optionally relaying confirmations through a separate operator terminal, and executing only what policy has approved.
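Concretely, the server-side entry uses the standard authorized_keys forced-command syntax. The install path below is an assumption and the key material is elided; only the command= and restriction options matter:

```
command="/usr/local/bin/aishell-gate",restrict ssh-ed25519 AAAA... ai-agent
```

With this entry in place, sshd ignores whatever command the agent's client requests and runs aishell-gate instead; the agent's plan arrives on stdin.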
This is the production deployment pattern. It is the same pattern git-shell, rsync, restic, and other Unix tools use for capability-scoped remote access. No new daemons, no new ports, no new infrastructure.
- The SSH key's forced command is aishell-gate.
- Execution happens through aishell-gate-exec. Policy governs what is permitted; the OS enforces what is reachable.

Two interfaces exist but are not part of the three-level model:
The launcher as a future broker. Today aishell-gate is a thin pre-flight wrapper. In v2 it becomes a multi-agent, multi-operator broker. Using the launcher now means your configuration will not need to change when v2 ships. See Getting Started Guide §4.
Calling the gate from within bash scripts. An existing shell script can invoke aishell-gate per command to get policy evaluation and audit trail without rewriting the script's composition logic. This works and is occasionally useful, but it is not a promoted interface — the plan model is the designed API, and bash-wrapper use is better understood as a migration path for legacy scripts than as a first-class pattern.
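As an illustration of that migration path, a legacy script can wrap each command in a single-step plan and pipe it to the gate. The plan field names here are placeholders, not the documented schema:

```shell
#!/bin/sh
# Hypothetical helper: wrap one command in a single-step JSON plan.
# NOTE: naive quoting -- a command containing double quotes would need
# real JSON escaping before this could be used safely.
make_plan() {
  printf '{"plan":[{"cmd":"%s"}]}\n' "$1"
}

# In a real script the plan would be piped to the gate, e.g.:
#   make_plan "tar czf backup.tgz /etc" | ./aishell-gate
make_plan "ls /tmp"
```

Each invocation gets policy evaluation and an audit entry without disturbing the script's existing composition logic, at the cost of one gate call per command.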