▀▀█▀▀ ▀█▀ ░█▀▀█ ░█ ░█ ▀▀█▀▀ ░█▀▀▄ ░█▀▀▀ ░█▀▀█ ░█▀▄▀█
░█ ░█ ░█ ▄▄ ░█▀▀▀█ ░█ ░█▀▀█ ░█▀▀ ░█▀▀█ ░█░█░█
░█ ▄█▄ ░█▄▄█ ░█ ░█ ░█ ░█▄▄▀ ░█▄▄▄ ░█ ░█ ░█ ░█
LLM proxy for containerized AI agents.
Proxies LLM API calls from container to host over a unix socket. Credentials stay on the host. The agent sends messages, tightbeam attaches the API key and manages conversation history. The agent never sees the key, the model, or the provider.
curl -fsSL https://raw.githubusercontent.com/calebfaruki/tightbeam/main/install.sh | sh
# "Just pass the API key"
docker run -e ANTHROPIC_API_KEY=sk-ant-... agent-image

# "Just set the token"
docker run -e OPENAI_API_KEY=sk-... agent-image
Your API key is an environment variable inside the container. The agent can read it, exfiltrate it, and use it for anything — any model, any prompt, any volume of requests. No scoping, no audit trail, no conversation ownership.
Credentials stay on the host. A daemon proxies LLM calls over a unix socket.
Every agent framework — LangChain, CrewAI, OpenAI Agents SDK, Claude Code — gives the agent the API key directly. The agent holds the credential, manages its own conversation history, and calls the LLM over HTTPS. MCP sandboxes like NemoClaw and Docker MCP Toolkit isolate tool servers, but the agent still holds the LLM key itself. Nobody separates the agent from the API credential.
Use MCP sandboxes for tool server isolation. Use Tightbeam for LLM API isolation.
The runtime connects once. The entire tool loop — turn, tool execution, results in the next turn — runs on a single persistent socket connection until the session ends.
# Runtime sends turn
{"jsonrpc":"2.0","id":1,"method":"turn","params":{"messages":[...],"tools":[...]}}

# Daemon streams response
{"jsonrpc":"2.0","method":"output","params":{"stream":"content","data":{"type":"text","text":"I'll run ls"}}}
{"jsonrpc":"2.0","id":1,"result":{"stop_reason":"tool_use","tool_calls":[{"id":"tc-1","name":"bash","input":{"command":"ls"}}]}}

# Runtime executes tool, sends result in next turn
{"jsonrpc":"2.0","id":2,"method":"turn","params":{"messages":[{"role":"tool","tool_call_id":"tc-1","content":"main.rs\nlib.rs"}]}}

# Daemon streams continuation
{"jsonrpc":"2.0","method":"output","params":{"stream":"content","data":{"type":"text","text":"The directory contains..."}}}
{"jsonrpc":"2.0","id":2,"result":{"stop_reason":"end_turn","content":"The directory contains main.rs and lib.rs."}}
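The runtime side of this exchange can be sketched in Python. This is a sketch, not the actual runtime: it assumes newline-delimited JSON framing (not stated above), and it replays the transcript lines as canned input so the loop runs without a daemon. The `handle_turn` helper is a hypothetical name.

```python
import json

# Canned daemon output for one turn, taken from the transcript above.
# Assumption: messages are newline-delimited JSON-RPC 2.0, one per line.
daemon_lines = [
    '{"jsonrpc":"2.0","method":"output","params":{"stream":"content","data":{"type":"text","text":"I\'ll run ls"}}}',
    '{"jsonrpc":"2.0","id":1,"result":{"stop_reason":"tool_use","tool_calls":[{"id":"tc-1","name":"bash","input":{"command":"ls"}}]}}',
]

def handle_turn(lines):
    """Collect streamed `output` notifications until the final result arrives."""
    streamed = []
    for line in lines:
        msg = json.loads(line)
        if msg.get("method") == "output":
            # Notification: a chunk of streamed content (no id field).
            streamed.append(msg["params"]["data"]["text"])
        elif "result" in msg:
            # Response matching our request id: the turn is complete.
            return streamed, msg["result"]
    raise RuntimeError("stream ended without a result")

streamed, result = handle_turn(daemon_lines)
print(streamed)                          # ["I'll run ls"]
print(result["stop_reason"])             # tool_use
print(result["tool_calls"][0]["name"])   # bash
```

When `stop_reason` is `tool_use`, the runtime would execute the named tool and send the result back as the next `turn` on the same connection, as the transcript shows.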
Tightbeam and Airlock solve the same problem at different layers. Both are Rust binaries, JSON-RPC 2.0 over unix sockets, same daemon lifecycle pattern.
Use Airlock for CLI tools. Use Tightbeam for LLM APIs. Run both.
# Install
curl -fsSL https://raw.githubusercontent.com/calebfaruki/tightbeam/main/install.sh | sh

# Create a registry (LLM providers and MCP servers)
cat > ~/.config/tightbeam/registry.toml << 'EOF'
[llm.claude-sonnet]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
api_key_env = "ANTHROPIC_API_KEY"
EOF

# Create an agent profile
cat > ~/.config/tightbeam/agents/my-agent.toml << 'EOF'
[llm.claude-sonnet]
EOF

# Run your container
docker run \
  -v ~/.config/tightbeam/sockets/my-agent.sock:/run/docker-tightbeam.sock \
  your-agent-image
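Inside the container, the agent talks only to the mounted socket. A minimal agent-side client might look like the sketch below; `build_turn` and `send_turn` are hypothetical helper names, and newline-delimited JSON framing is an assumption, not a documented guarantee.

```python
import json
import socket

SOCKET_PATH = "/run/docker-tightbeam.sock"  # as mounted in the docker run above

def build_turn(request_id, messages, tools=None):
    """Frame a `turn` request as one newline-terminated JSON-RPC 2.0 line."""
    req = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "turn",
        "params": {"messages": messages, "tools": tools or []},
    }
    return (json.dumps(req) + "\n").encode()

def send_turn(messages):
    """Connect to the daemon socket, send one turn, yield parsed response lines."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCKET_PATH)
        s.sendall(build_turn(1, messages))
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                yield json.loads(line)

# Build a frame without connecting (no daemon needed to inspect the wire format).
frame = build_turn(1, [{"role": "user", "content": "hello"}])
```

Note what the frame does not contain: no API key, no model name, no provider. Those are resolved host-side from the registry and agent profile.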
AI agents follow instructions from content they process. They can't reliably tell your instructions apart from malicious ones. A prompt injection in a file, a commit message, or an API response can hijack the agent and make it act with your full API key. The only way to prevent this is to never give the agent the key in the first place.
API keys live on the host, never inside the container. Even a fully compromised agent — prompt-injected, jailbroken, running malicious code — cannot access keys that don't exist in its environment. The agent can't escalate to a different model, provider, or credential set.
Per-agent sockets, per-agent credentials, per-agent conversation logs. A compromised agent's session can't affect other agents. The blast radius of any single failure is one socket.
Every message, tool call, and LLM response is written to NDJSON logs on the host. The agent can't tamper with, suppress, or erase its own history. Forensic review doesn't depend on the agent's cooperation.
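Because the logs are NDJSON, host-side forensic review is a line-at-a-time parse. The sketch below uses invented field names (`ts`, `agent`, `event`, and so on) purely for illustration; the real log schema may differ.

```python
import json

# Hypothetical NDJSON audit lines; real field names may differ.
log_lines = [
    '{"ts":"2025-06-01T12:00:00Z","agent":"my-agent","event":"turn","role":"user"}',
    '{"ts":"2025-06-01T12:00:01Z","agent":"my-agent","event":"tool_call","name":"bash","input":{"command":"ls"}}',
    '{"ts":"2025-06-01T12:00:02Z","agent":"my-agent","event":"turn","role":"assistant"}',
]

def tool_calls(lines, agent):
    """Return every tool call a given agent made, in chronological order."""
    return [
        rec for rec in map(json.loads, lines)
        if rec["agent"] == agent and rec["event"] == "tool_call"
    ]

calls = tool_calls(log_lines, "my-agent")
print([c["name"] for c in calls])  # ['bash']
```

The point is the trust model, not the parsing: the agent wrote none of these lines and cannot rewrite them, so the query result is authoritative regardless of the agent's state.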
Maps to OWASP Agentic Top 10 risks ASI03 (credential isolation prevents identity and privilege abuse), ASI08 (per-agent isolation contains cascading failures), and ASI09 (host-side logs for audit and forensics). Tightbeam operates at the infrastructure layer — model-layer risks require model-layer mitigations.