
host()

Make your agent accessible over the network. One function call. HTTP, WebSocket, and P2P relay.

Why host()? Turn local agents into network services. HTTP API, WebSocket, P2P relay - all with one function call.

60-Second Quick Start

Create an agent and call host(agent) - that's it:

host_agent.py
from connectonion import Agent, host

agent = Agent("translator", tools=[translate])

# Make it network-accessible
host(agent)

╭─────────────────────────────────────────────────────────╮
│ Agent 'translator' is now hosted │
├─────────────────────────────────────────────────────────┤
│ │
│ Address: 0x3d4017c3e843895a92b70aa74d1b7ebc9c98... │
│ │
│ HTTP Endpoints: │
│ POST http://localhost:8000/input │
│ GET http://localhost:8000/sessions/{session_id} │
│ GET http://localhost:8000/health │
│ WS ws://localhost:8000/ws │
│ │
│ Interactive UI: │
│ http://localhost:8000/docs │
│ │
│ P2P Relay: │
│ wss://oo.openonion.ai/ws/announce │
│ │
╰─────────────────────────────────────────────────────────╯
 
Waiting for tasks...

What You Get

HTTP API → POST /input, GET /sessions, GET /health
WebSocket → Real-time streaming at /ws
Interactive UI → Test your agent at /docs
P2P Relay → Connect from anywhere via relay

Worker Isolation

Each request gets a fresh deep copy of your agent:

No shared state between concurrent requests
Stateful tools work correctly (browser, file handles)
Complete isolation - one request can't affect another (see the sketch below)
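
Here is a minimal sketch of what that isolation means in practice, using a hypothetical Notepad class as a stateful tool (the tool and how it is registered are illustrative; the per-request deep copy is what host() provides):

code
from connectonion import Agent, host

class Notepad:
    """Hypothetical stateful tool: keeps notes in memory."""

    def __init__(self):
        self.notes = []

    def add_note(self, text: str) -> str:
        """Store a note for later steps in the same task."""
        self.notes.append(text)
        return f"Stored note #{len(self.notes)}"

agent = Agent("assistant", tools=[Notepad()])

# host() hands each incoming request a deep copy of `agent` (Notepad included),
# so notes written while serving one request never leak into another.
host(agent)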

HTTP API

POST /input - Submit a Task

code
curl -X POST http://localhost:8000/input \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Translate hello to Spanish"}'

{
  "session_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "done",
  "result": "Hola",
  "duration_ms": 1250,
  "session": {...}
}

Multi-turn Conversations

Pass the session from the response to continue:

main.py
import requests

# First request
response = requests.post("http://localhost:8000/input", json={
    "prompt": "My name is John"
})
session = response.json()["session"]

# Second request - pass session back
response = requests.post("http://localhost:8000/input", json={
    "prompt": "What is my name?",
    "session": session  # Agent remembers!
})
print(response.json()["result"])  # "Your name is John"

GET /sessions/{session_id} - Fetch Results

code
curl http://localhost:8000/sessions/550e8400-e29b-41d4-a716-446655440000

{
  "session_id": "550e8400-...",
  "status": "done",
  "result": "Hola",
  "duration_ms": 1250
}

GET /sessions - List Sessions

code
curl http://localhost:8000/sessions

{
  "sessions": [
    {"session_id": "abc-123", "status": "done", "created": 1702234567},
    {"session_id": "def-456", "status": "running", "created": 1702234570}
  ]
}

GET /health - Health Check

code
curl http://localhost:8000/health

{
  "status": "healthy",
  "agent": "translator",
  "uptime": 3600
}

GET /info - Agent Info

code
curl http://localhost:8000/info

{
  "name": "translator",
  "address": "0x3d4017c3...",
  "tools": ["translate", "detect_language"],
  "trust": "careful",
  "version": "0.5.10"
}

WebSocket API

Real-time communication with streaming support:

app.js
const ws = new WebSocket("ws://localhost:8000/ws");

// Send the prompt once the connection is open
ws.onopen = () => {
  ws.send(JSON.stringify({
    type: "INPUT",
    prompt: "Translate hello to Spanish"
  }));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "OUTPUT") {
    console.log("Result:", msg.result);
  } else if (msg.type === "STREAM") {
    process.stdout.write(msg.chunk);
  }
};

INPUT  → Agent: Send prompts to the agent
OUTPUT ← Agent: Receive final results
STREAM ← Agent: Streaming chunks
ERROR  ← Agent: Error messages
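
The same exchange from Python, as a sketch built on the third-party websockets package. The INPUT, OUTPUT, and STREAM fields come from the messages above; the exact ERROR payload isn't documented here, so the sketch just prints the whole message:

code
import asyncio
import json
import websockets

async def main():
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        # INPUT → Agent
        await ws.send(json.dumps({
            "type": "INPUT",
            "prompt": "Translate hello to Spanish",
        }))
        async for raw in ws:
            msg = json.loads(raw)
            if msg["type"] == "STREAM":      # partial chunks as they arrive
                print(msg["chunk"], end="", flush=True)
            elif msg["type"] == "OUTPUT":    # final result
                print("\nResult:", msg["result"])
                break
            elif msg["type"] == "ERROR":     # payload shape not shown above
                print("Error:", msg)
                break

asyncio.run(main())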

Trust & Access Control

Control who can access your agent:

Trust Levels

main.py
host(agent, trust="open")     # Accept all (development)
host(agent, trust="careful")  # Recommend signature (default)
host(agent, trust="strict")   # Require signature (production)

Access Lists

main.py
host(agent,
    blacklist=["0xbad..."],   # Always reject
    whitelist=["0xgood..."]   # Always accept
)

Natural Language Policy

main.py
host(agent, trust="""
I trust requests that:
- Come from known contacts with good history
- Have valid signatures
- Are on my whitelist OR from local network
""")

Configuration

All Parameters

main.py
host(
    agent,
    trust="careful",        # Trust level/policy/agent
    blacklist=None,         # Addresses to reject
    whitelist=None,         # Addresses to accept
    port=8000,              # HTTP port
    workers=1,              # Worker processes
    result_ttl=86400,       # Result storage (24h)
    relay_url="wss://...",  # P2P relay
    reload=False            # Auto-reload on changes
)

Development vs Production

Development

main.py
host(agent, reload=True, trust="open")

Production

main.py
host(agent, workers=4, trust="strict")

host.yaml Configuration

Store configuration in a YAML file instead of code parameters. Generated by co init or co create.

Basic Setup

.co/host.yaml
# .co/host.yaml
summary: I translate text between 100+ languages
examples:
  - "Translate 'hello' to Spanish"
  - "What language is '你好' in?"

trust: careful
port: 8000
agent.py
from connectonion import Agent, host

def create_agent():
    return Agent("translator", tools=[translate])

host(create_agent)  # Reads .co/host.yaml automatically

Configuration Priority

Settings are loaded in order (highest priority first):

1. Code parameters - host(agent, port=9000)
2. Config file - .co/host.yaml
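
A quick illustration of that ordering, reusing the translator agent from the quick start (the file values here are assumed for the example):

main.py
# Suppose .co/host.yaml contains:
#   port: 8000
#   trust: careful

from connectonion import Agent, host

agent = Agent("translator", tools=[translate])

# The explicit keyword argument wins over the file value: the server listens
# on 9000, while trust still comes from .co/host.yaml ("careful").
host(agent, port=9000)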

Agent Metadata

Used by /info endpoint and ANNOUNCE messages for agent discovery:

code
# Natural language description
summary: I translate text between 100+ languages with cultural context

# 2-5 example prompts
examples:
  - "Translate 'hello' to Spanish"
  - "What language is '你好' in?"
  - "Translate this paragraph to French"

Trust Levels

Level     Behavior                                Use Case
open      Accept all requests                     Development
careful   Recommend signature, accept unsigned    Staging/Default
strict    Require valid signature                 Production
code
# Simple trust level
trust: careful  # "open", "careful", or "strict"

Advanced Trust Configuration

code
trust:
  # Who has access (checked in order)
  allow:
    - whitelisted      # Addresses in whitelist.txt
    - contact          # Previously promoted contacts

  # Who is blocked
  deny:
    - blocked          # Addresses in blacklist.txt

  # How strangers become contacts (onboarding)
  onboard:
    invite_code:
      - OpenOnion
      - BETA2024
    payment: 10        # Minimum credits required

  # What to do with strangers without credentials
  # Options: "allow", "deny", "ask" (ask = use LLM to evaluate)
  default: ask

Access Control Lists

code
# .co/host.yaml - Custom paths
whitelist: ./security/allowed-addresses.txt
blacklist: ./security/blocked-users.txt
.co/whitelist.txt
# .co/whitelist.txt - One address per line
# Trusted partners

0xgood123abc...     # Partner company
0xtrusted456def...
0xfriend789ghi...

Server Settings

code
# HTTP port (default: 8000)
port: 8000

# Number of worker processes (default: 1)
workers: 1

# Result storage TTL in seconds (default: 86400 = 24 hours)
result_ttl: 86400

# P2P relay for agent discovery
relay_url: wss://oo.openonion.ai/ws/announce

# Auto-reload on code changes - development only (default: false)
reload: false

# Path to .co directory for agent identity (default: ~/.co/)
co_dir: null

File Upload Limits

Control file upload sizes for /input endpoint and /ws WebSocket:

code
# Maximum file size in MB (default: 10)
# Good for screenshots, docs, images
max_file_size: 10

# Maximum number of files in one request (default: 10)
max_files_per_request: 10

Image Processing

code
max_file_size: 5
max_files_per_request: 20

Video Analysis

code
max_file_size: 500
max_files_per_request: 5

Document Processing

code
max_file_size: 10
max_files_per_request: 50

Complete Example

.co/host.yaml
# .co/host.yaml - Production configuration

summary: Production translation service with 100+ languages
examples:
  - "Translate 'hello' to Spanish"
  - "What language is '你好' in?"
  - "Translate this document to French"

trust:
  allow:
    - whitelisted
    - contact
  deny:
    - blocked
  onboard:
    invite_code: [OpenOnion, BETA2024]
    payment: 10
  default: ask

port: 8000
workers: 4
result_ttl: 3600  # 1 hour
relay_url: wss://oo.openonion.ai/ws/announce
max_file_size: 10
max_files_per_request: 10

whitelist: whitelist.txt
blacklist: blacklist.txt

Best Practices

✅ DO: Commit host.yaml to version control
❌ DON'T: Put secrets in host.yaml - use .env instead
✅ DO: Start simple, add complexity as needed
❌ DON'T: Commit whitelist.txt or blacklist.txt to git

API Reference

Parameter    Type          Default      Description
agent        Agent         required     The agent to host
trust        str | Agent   "careful"    Trust level, policy, or agent
blacklist    list          None         Addresses to always reject
whitelist    list          None         Addresses to always accept
port         int           8000         HTTP server port
workers      int           1            Number of worker processes
result_ttl   int           86400        Result storage TTL (24h default)
relay_url    str           production   P2P relay server URL
reload       bool          False        Auto-reload on code changes

Deployment

With Uvicorn/Gunicorn

myagent.py
# myagent.py
from connectonion import Agent, host

agent = Agent("translator", tools=[translate])
app = host.app(agent)  # Export ASGI app

if __name__ == "__main__":
    host(agent)
code
# Run with uvicorn
uvicorn myagent:app --workers 4

# Or gunicorn
gunicorn myagent:app -w 4 -k uvicorn.workers.UvicornWorker

Docker

Dockerfile
FROM python:3.11-slim
RUN pip install connectonion
COPY myagent.py .
CMD ["python", "myagent.py"]

Docker Compose

code
# docker-compose.yml
services:
  agent:
    build: .
    ports:
      - "8000:8000"
    environment:
      - CONNECTONION_ENV=production
      - OPENAI_API_KEY=${OPENAI_API_KEY}

systemd Service

code
# /etc/systemd/system/myagent.service
[Unit]
Description=My ConnectOnion Agent
After=network.target

[Service]
User=app
WorkingDirectory=/app
ExecStart=/usr/bin/python myagent.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
code
sudo systemctl enable myagent
sudo systemctl start myagent

Ready to Host Your Agents?

Just call host(agent) and your agent goes live!

Enjoying ConnectOnion?

⭐ Star us on GitHub = ☕ Coffee chat with our founder. We love meeting builders.