# airlock
Credential isolation for AI coding agents.
Give agents access to CLI commands without exposing credentials. Commands proxy from a container-side shim to a host-side daemon. The container never holds a secret.
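The split can be pictured with a toy model. This is an illustrative sketch only — a named pipe stands in for the mounted Unix socket, and airlock's real wire protocol is internal to the daemon:

```sh
# Toy model of the shim/daemon split (not airlock's actual implementation).
fifo=$(mktemp -u); mkfifo "$fifo"

# Host side ("daemon"): receives the command line and runs it where
# the credentials actually live.
(read -r cmd < "$fifo"; echo "daemon would run on host: $cmd") &

# Container side ("shim"): a stub that only forwards the command line.
echo "git push origin main" > "$fifo"

wait
rm -f "$fifo"
```

The container-side stub carries no state worth stealing: it forwards arguments and streams results back, nothing more.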
```sh
curl -fsSL https://raw.githubusercontent.com/calebfaruki/airlock/main/install.sh | sh
```
# "Just pass the key as an env var" docker run -e SSH_PRIVATE_KEY="$(cat ~/.ssh/id_ed25519)" agent-image # "Just inject the creds" docker run -e AWS_ACCESS_KEY_ID=AKIA... -e AWS_SECRET_ACCESS_KEY=... agent-image # "Just set the token" docker run -e GITHUB_TOKEN=$GITHUB_TOKEN agent-image
Your credentials are environment variables inside the container. The agent can read them, exfiltrate them, and use them for anything. No scoping. No audit trail. No deny rules.
Containers never hold secrets. A host-side daemon proxies CLI tools over a unix socket.
Docker Sandboxes and NemoClaw protect HTTP API keys — they intercept outbound requests and inject credentials. But many tools don't authenticate over HTTP. Git uses SSH keys. Terraform reads credential files. kubectl uses kubeconfig. These are files on your machine, not HTTP headers. No network proxy can intercept that.
Use HTTP proxies for API keys. Use airlock for everything else.
- Only enabled commands load. Others don't exist.
- Each container is scoped to specific commands and credentials.
- Per-tool flag and argument restrictions, with normalized matching and glob patterns.
- Each layer is independent. A mistake in one doesn't compromise the others.
Pre-exec and post-exec hooks are escape hatches for everything else. Restrict destinations, redact output, enforce approval workflows — whatever airlock's built-in rules can't express.
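For illustration only — the hook interface below is an assumption, not taken from airlock's docs. A pre-exec hook could be a script that inspects the proposed command line and exits nonzero to block it:

```sh
#!/bin/sh
# Hypothetical pre-exec hook: allow pushes to "origin", reject any other
# remote. The convention that the command arrives as "$@" is an assumption
# made for this sketch.
case "$*" in
  "git push origin"*) exit 0 ;;   # allowed destination
  "git push"*)        exit 1 ;;   # any other remote: reject
  *)                  exit 0 ;;   # everything else passes through
esac
```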
AI agents follow instructions from content they process. They can't reliably tell your instructions apart from malicious ones. A prompt injection in a README, a commit message, or an API response can hijack the agent and make it act with your full credentials. The only way to prevent this is to never give the agent credentials in the first place.
Credentials live on the host, never inside the container. Even a fully compromised agent — prompt-injected, jailbroken, running malicious code — cannot access keys that don't exist in its environment.
If the agent is tricked into running terraform destroy or docker run --privileged, deny rules reject it before execution. No model judgment involved. The rule either matches or it doesn't.
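The decision is purely mechanical. A sketch of glob-style deny matching — assuming nothing about airlock's internals, just showing that a pattern either matches the normalized command line or it doesn't:

```sh
# Illustrative only: deterministic deny matching, no model judgment.
is_denied() {
  cmd="$1"
  for pattern in "terraform destroy*" "docker run*--privileged*"; do
    case "$cmd" in
      $pattern) return 0 ;;   # unquoted: expansion is evaluated as a glob
    esac
  done
  return 1
}

is_denied "terraform destroy -target=aws_instance.web" && echo "denied"
is_denied "git status" || echo "allowed"
```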
An agent that only needs git shouldn't have access to terraform, aws, or ssh. Airlock enforces this at the daemon level. Tools the agent doesn't need simply don't exist.
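Using the profile format shown in the quick start below, a git-only container is one line — terraform, aws, and ssh simply aren't listed:

```toml
# ~/.config/airlock/profiles/git-only.toml
# Only git is proxied into this container; no other tool exists for it.
commands = ["git"]
```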
Maps to OWASP Gen AI Top 10 risks LLM02, LLM05, and LLM06. Airlock operates at the tool layer — model-layer risks require model-layer mitigations.
```sh
# Install
curl -fsSL https://raw.githubusercontent.com/calebfaruki/airlock/main/install.sh | sh

# Enable commands
cat > ~/.config/airlock/config.toml << 'EOF'
[commands]
enable = ["git", "terraform"]
EOF

# Create a profile
cat > ~/.config/airlock/profiles/default.toml << 'EOF'
commands = ["git", "terraform"]

[env]
set = { AWS_PROFILE = "readonly" }
EOF

# Run your container
docker run \
  -v ~/.config/airlock/sockets/default.sock:/run/docker-airlock.sock \
  your-agent-image
```
Each command is a TOML file defining deny rules, environment hardening, and execution constraints. Start with the built-ins. Need more or less security? Run airlock eject <command> to get a copy you can edit. Need a command we don't ship? Create a TOML file and add it to your enable list.
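As a starting point for a custom command, a definition might look something like this — every key name below is a guess for illustration, not airlock's documented schema; run `airlock eject` on a built-in to see the real format:

```toml
# Hypothetical command definition — key names are assumptions, not
# airlock's actual schema. Eject a built-in to see the real structure.
[command]
name = "kubectl"

[deny]
args = ["delete *", "drain *"]
```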
| Command | What the built-in blocks |
|---|---|
| git | Config injection, hook execution, credential subcommands, upload-pack |
| terraform | destroy, apply -auto-approve (sequence deny), force-unlock |
| aws | terminate-instances, delete-db-instance, and other destructive operations |
| docker | Root volume mounts, socket mounts, namespace escapes, capability grants |
| ssh | Port forwarding, agent forwarding, tunneling, config overrides |