
host()

Make your agent accessible over the network. One function call. HTTP, WebSocket, and P2P relay.

Why host()? Turn local agents into network services. HTTP API, WebSocket, P2P relay - all with one function call.

60-Second Quick Start

Create an agent and call host(agent) - that's it:

host_agent.py
from connectonion import Agent, host

agent = Agent("translator", tools=[translate])

# Make it network-accessible
host(agent)
output
╭─────────────────────────────────────────────────────────╮
│ Agent 'translator' is now hosted                        │
├─────────────────────────────────────────────────────────┤
│                                                         │
│ Address: 0x3d4017c3e843895a92b70aa74d1b7ebc9c98...      │
│                                                         │
│ HTTP Endpoints:                                         │
│   POST http://localhost:8000/input                      │
│   GET  http://localhost:8000/sessions/{session_id}      │
│   GET  http://localhost:8000/health                     │
│   WS   ws://localhost:8000/ws                           │
│                                                         │
│ Interactive UI:                                         │
│   http://localhost:8000/docs                            │
│                                                         │
│ P2P Relay:                                              │
│   wss://oo.openonion.ai/ws/announce                     │
│                                                         │
╰─────────────────────────────────────────────────────────╯
 
Waiting for tasks...

What You Get

HTTP API → POST /input, GET /sessions, GET /health
WebSocket → Real-time streaming at /ws
Interactive UI → Test your agent at /docs
P2P Relay → Connect from anywhere via relay

Worker Isolation

Each request gets a fresh deep copy of your agent, as the sketch after this list illustrates:

No shared state between concurrent requests
Stateful tools work correctly (browser, file handles)
Complete isolation - one request can't affect another
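
For example, a tool that keeps per-instance state stays correct under concurrency. A minimal sketch, assuming class instances can expose methods as tools the way plain functions do elsewhere in these docs; the Browser class here is hypothetical:

main.py
from connectonion import Agent, host

class Browser:
    """Hypothetical stateful tool: remembers the page it opened."""
    def __init__(self):
        self.current_page = None

    def open_page(self, url: str) -> str:
        """Open a page and remember it on this instance."""
        self.current_page = url
        return f"Opened {url}"

browser = Browser()
agent = Agent("browser-bot", tools=[browser.open_page])

# host() serves each request from a deep copy of `agent`, so concurrent
# requests never share one Browser instance or its current_page.
host(agent)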

HTTP API

POST /input - Submit a Task

code
curl -X POST http://localhost:8000/input \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Translate hello to Spanish"}'
output
{
  "session_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "done",
  "result": "Hola",
  "duration_ms": 1250,
  "session": {...}
}

Multi-turn Conversations

Pass the session from the response to continue:

main.py
import requests

# First request
response = requests.post("http://localhost:8000/input", json={
    "prompt": "My name is John"
})
session = response.json()["session"]

# Second request - pass session back
response = requests.post("http://localhost:8000/input", json={
    "prompt": "What is my name?",
    "session": session  # Agent remembers!
})
print(response.json()["result"])  # "Your name is John"

Sending Images & Files

Both HTTP and WebSocket accept images and files alongside text prompts:

code
# Send with files
curl -X POST http://localhost:8000/input \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Summarize this document",
    "files": [
      {"name": "report.pdf", "data": "data:application/pdf;base64,JVBERi..."}
    ]
  }'

# Send with images
curl -X POST http://localhost:8000/input \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "What do you see?",
    "images": ["data:image/png;base64,iVBORw0KGgo..."]
  }'

Images are passed directly to the LLM as visual content (multimodal).

Files are decoded from base64, saved to .co/uploads/, and the agent reads them via tools like read_file.

Limits: Default 10MB per file, 10 files per request. Configure in .co/host.yaml or via host() params.
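
If your workload needs different limits, they can also be raised when calling host(). A sketch, assuming the host() parameter names mirror the .co/host.yaml keys documented below (max_file_size, max_files_per_request):

main.py
# Assumed parameter names, mirroring the host.yaml keys:
# raise the per-file cap to 50MB and allow 20 files per request.
host(agent, max_file_size=50, max_files_per_request=20)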

GET /sessions/{session_id} - Fetch Results

code
curl http://localhost:8000/sessions/550e8400-e29b-41d4-a716-446655440000
output
{
  "session_id": "550e8400-...",
  "status": "done",
  "result": "Hola",
  "duration_ms": 1250
}
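
A client can poll this endpoint until a task settles. A minimal sketch using requests, assuming a session can still be in the "running" state when fetched, as the session list below shows; the 0.5s interval is arbitrary:

main.py
import time
import requests

BASE = "http://localhost:8000"

# Submit a task, then poll its session until it leaves "running".
session_id = requests.post(
    f"{BASE}/input", json={"prompt": "Translate hello to Spanish"}
).json()["session_id"]

while True:
    state = requests.get(f"{BASE}/sessions/{session_id}").json()
    if state["status"] != "running":  # e.g. "done"
        break
    time.sleep(0.5)

print(state["result"])  # "Hola"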

GET /sessions - List Sessions

code
curl http://localhost:8000/sessions
output
{
  "sessions": [
    {"session_id": "abc-123", "status": "done", "created": 1702234567},
    {"session_id": "def-456", "status": "running", "created": 1702234570}
  ]
}

GET /health - Health Check

code
curl http://localhost:8000/health
output
{
  "status": "healthy",
  "agent": "translator",
  "uptime": 3600
}

GET /info - Agent Info

code
curl http://localhost:8000/info
output
{
  "name": "translator",
  "address": "0x3d4017c3...",
  "tools": ["translate", "detect_language"],
  "trust": "careful",
  "version": "0.5.10",
  "accepted_inputs": {
    "text": true,
    "images": true,
    "files": {
      "max_file_size_mb": 10,
      "max_files_per_request": 10
    }
  }
}

The accepted_inputs field tells clients which input types the agent supports and what the file limits are.
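
A client can use it to validate uploads before sending them. A small sketch against the /info response shown above; the 2.4MB figure is just an example:

main.py
import requests

info = requests.get("http://localhost:8000/info").json()
limits = info["accepted_inputs"]["files"]

size_mb = 2.4  # size of the file you want to attach (example value)
if size_mb <= limits["max_file_size_mb"]:
    print("OK to upload")  # stay under max_files_per_request too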

WebSocket API

Real-time communication with streaming support:

app.js
const ws = new WebSocket("ws://localhost:8000/ws");

ws.onopen = () => {
  // Step 1: CONNECT — authenticate + find/create session
  ws.send(JSON.stringify({
    type: "CONNECT",
    payload: { to: "0xAgent...", timestamp: Date.now() / 1000 },
    from: "0xYourKey",
    signature: "0x...",
    session_id: savedId  // optional — omit for new session
  }));
  // → { type: "CONNECTED", session_id: "...", status: "new" }
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "CONNECTED") {
    console.log("Session:", msg.session_id);
    // Step 2: INPUT — send prompts (after CONNECT)
    ws.send(JSON.stringify({ type: "INPUT", prompt: "Translate hello to Spanish" }));
  } else if (msg.type === "OUTPUT") {
    console.log("Result:", msg.result);
  } else if (msg.type === "PING") {
    ws.send(JSON.stringify({ type: "PONG" }));
  }
};

CONNECT → Server: Authenticate + find/create session (one message for new and resume)
INPUT → Agent: Send prompts (after CONNECT)
OUTPUT ← Agent: Receive final results
STREAM ← Agent: Streaming chunks
ERROR ← Agent: Error messages
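
The same flow works from Python. A minimal sketch using the third-party websockets package (pip install websockets); the keys and signature are placeholders, and the STREAM chunk field name is an assumption:

main.py
import asyncio
import json
import time

import websockets  # pip install websockets

async def main():
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        # Step 1: CONNECT
        await ws.send(json.dumps({
            "type": "CONNECT",
            "payload": {"to": "0xAgent...", "timestamp": time.time()},
            "from": "0xYourKey",
            "signature": "0x...",
        }))
        async for raw in ws:
            msg = json.loads(raw)
            if msg["type"] == "CONNECTED":
                # Step 2: INPUT
                await ws.send(json.dumps(
                    {"type": "INPUT", "prompt": "Translate hello to Spanish"}))
            elif msg["type"] == "STREAM":
                print(msg.get("chunk", ""), end="")  # field name assumed
            elif msg["type"] == "OUTPUT":
                print("Result:", msg["result"])
                break
            elif msg["type"] == "PING":
                await ws.send(json.dumps({"type": "PONG"}))

asyncio.run(main())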

Trust & Access Control

Control who can access your agent:

Trust Levels

main.py
host(agent, trust="open")     # Accept all (development)
host(agent, trust="careful")  # Recommend signature (default)
host(agent, trust="strict")   # Require signature (production)

Access Lists

main.py
host(agent,
    blacklist=["0xbad..."],   # Always reject
    whitelist=["0xgood..."]   # Always accept
)

Natural Language Policy

main.py
host(agent, trust="""
    I trust requests that:
    - Come from known contacts with good history
    - Have valid signatures
    - Are on my whitelist OR from local network
""")

Configuration

All Parameters

main.py
host(
    agent,
    trust="careful",        # Trust level/policy/agent
    blacklist=None,         # Addresses to reject
    whitelist=None,         # Addresses to accept
    port=8000,              # HTTP port
    workers=1,              # Worker processes
    result_ttl=86400,       # Result storage (24h)
    relay_url="wss://...",  # P2P relay
    reload=False            # Auto-reload on changes
)

Development vs Production

Development

main.py
host(agent, reload=True, trust="open")

Production

main.py
host(agent, workers=4, trust="strict")

host.yaml Configuration

Store configuration in a YAML file instead of code parameters. Generated by co init or co create.

Basic Setup

.co/host.yaml
# .co/host.yaml
summary: I translate text between 100+ languages

examples:
  - "Translate 'hello' to Spanish"
  - "What language is '你好' in?"

trust: careful
port: 8000
agent.py
from connectonion import Agent, host

def create_agent():
    return Agent("translator", tools=[translate])

host(create_agent)  # Reads .co/host.yaml automatically

Configuration Priority

Settings are loaded in order (highest priority first):

1. Code parameters - host(agent, port=9000)
2. Config file - .co/host.yaml
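
For example, with port: 8000 in .co/host.yaml, an explicit code parameter still wins:

main.py
# .co/host.yaml sets port: 8000, but the code parameter has higher
# priority, so the server listens on 9000.
host(agent, port=9000)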

Agent Metadata

Used by /info endpoint and ANNOUNCE messages for agent discovery:

code
# Natural language description
summary: I translate text between 100+ languages with cultural context

# 2-5 example prompts
examples:
  - "Translate 'hello' to Spanish"
  - "What language is '你好' in?"
  - "Translate this paragraph to French"

Trust Levels

Level     Behavior                               Use Case
open      Accept all requests                    Development
careful   Recommend signature, accept unsigned   Staging/Default
strict    Require valid signature                Production
code
# Simple trust level
trust: careful  # "open", "careful", or "strict"

Advanced Trust Configuration

code
trust:
  # Who has access (checked in order)
  allow:
    - whitelisted   # Addresses in whitelist.txt
    - contact       # Previously promoted contacts

  # Who is blocked
  deny:
    - blocked       # Addresses in blacklist.txt

  # How strangers become contacts (onboarding)
  onboard:
    invite_code:
      - OpenOnion
      - BETA2024
    payment: 10     # Minimum credits required

  # What to do with strangers without credentials
  # Options: "allow", "deny", "ask" (ask = use LLM to evaluate)
  default: ask

Access Control Lists

code
# .co/host.yaml - Custom paths
whitelist: ./security/allowed-addresses.txt
blacklist: ./security/blocked-users.txt
.co/whitelist.txt
# .co/whitelist.txt - One address per line

# Trusted partners
0xgood123abc...     # Partner company
0xtrusted456def...
0xfriend789ghi...

Server Settings

code
# HTTP port (default: 8000)
port: 8000

# Number of worker processes (default: 1)
workers: 1

# Result storage TTL in seconds (default: 86400 = 24 hours)
result_ttl: 86400

# P2P relay for agent discovery
relay_url: wss://oo.openonion.ai/ws/announce

# Auto-reload on code changes - development only (default: false)
reload: false

# Path to .co directory for agent identity (default: ~/.co/)
co_dir: null

File Upload Limits

Control file upload sizes for /input endpoint and /ws WebSocket:

code
# Maximum file size in MB (default: 10)
# Good for screenshots, docs, images
max_file_size: 10

# Maximum number of files in one request (default: 10)
max_files_per_request: 10

Image Processing

code
max_file_size: 5
max_files_per_request: 20

Video Analysis

code
max_file_size: 500
max_files_per_request: 5

Document Processing

code
max_file_size: 10
max_files_per_request: 50

Complete Example

.co/host.yaml
# .co/host.yaml - Production configuration
summary: Production translation service with 100+ languages

examples:
  - "Translate 'hello' to Spanish"
  - "What language is '你好' in?"
  - "Translate this document to French"

trust:
  allow:
    - whitelisted
    - contact
  deny:
    - blocked
  onboard:
    invite_code: [OpenOnion, BETA2024]
    payment: 10
  default: ask

port: 8000
workers: 4
result_ttl: 3600  # 1 hour
relay_url: wss://oo.openonion.ai/ws/announce

max_file_size: 10
max_files_per_request: 10

whitelist: whitelist.txt
blacklist: blacklist.txt

Best Practices

✓ DO: Commit host.yaml to version control
✗ DON'T: Put secrets in host.yaml — use .env instead
✓ DO: Start simple, add complexity as needed
✗ DON'T: Commit whitelist.txt or blacklist.txt to git (see the .gitignore sketch below)
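
One way to enforce the last two rules is a .gitignore entry (a sketch, assuming the default .co/ file locations):

.gitignore
# Keep access lists and secrets out of version control
.co/whitelist.txt
.co/blacklist.txt
.env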

API Reference

Parameter    Type          Default      Description
agent        Agent         required     The agent to host
trust        str | Agent   "careful"    Trust level, policy, or agent
blacklist    list          None         Addresses to always reject
whitelist    list          None         Addresses to always accept
port         int           8000         HTTP server port
workers      int           1            Number of worker processes
result_ttl   int           86400        Result storage TTL (24h default)
relay_url    str           production   P2P relay server URL
reload       bool          False        Auto-reload on code changes

Deployment

With Uvicorn/Gunicorn

main.py
# myagent.py
from connectonion import Agent, host

agent = Agent("translator", tools=[translate])
app = host.app(agent)  # Export ASGI app

if __name__ == "__main__":
    host(agent)
code
# Run with uvicorn
uvicorn myagent:app --workers 4

# Or gunicorn
gunicorn myagent:app -w 4 -k uvicorn.workers.UvicornWorker

Docker

code
FROM python:3.11-slim

RUN pip install connectonion
COPY myagent.py .

CMD ["python", "myagent.py"]
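
Build and run with standard Docker commands (the image name is arbitrary; port 8000 matches the default above):

code
docker build -t myagent .
docker run -p 8000:8000 -e OPENAI_API_KEY=$OPENAI_API_KEY myagent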

Docker Compose

code
# docker-compose.yml
services:
  agent:
    build: .
    ports:
      - "8000:8000"
    environment:
      - CONNECTONION_ENV=production
      - OPENAI_API_KEY=${OPENAI_API_KEY}

systemd Service

code
# /etc/systemd/system/myagent.service
[Unit]
Description=My ConnectOnion Agent
After=network.target

[Service]
User=app
WorkingDirectory=/app
ExecStart=/usr/bin/python myagent.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
code
sudo systemctl enable myagent
sudo systemctl start myagent

Ready to Host Your Agents?

Just call host(agent) and your agent goes live.

Star us on GitHub

If ConnectOnion saves you time, a ⭐ goes a long way — and earns you a coffee chat with our founder.