LLM Function

One-shot LLM Calls

Make direct LLM calls with optional structured output. Supports OpenAI, Google Gemini, and Anthropic models through a unified interface.

Quick Start

main.py
from connectonion import llm_do

# OpenAI (default)
answer = llm_do("What's 2+2?")
print(answer)

# Google Gemini
answer = llm_do("What's 2+2?", model="gemini-2.5-flash")

# Anthropic Claude
answer = llm_do("What's 2+2?", model="claude-haiku-4-5")
output
>>> answer = llm_do("What's 2+2?")
>>> print(answer)
4

That's it! One function for any LLM task across multiple providers.

With Structured Output

main.py
from pydantic import BaseModel
from connectonion import llm_do

class Analysis(BaseModel):
    sentiment: str
    confidence: float
    keywords: list[str]

result = llm_do(
    "I absolutely love this product! Best purchase ever!",
    output=Analysis
)

print(result.sentiment)
print(result.confidence)
print(result.keywords)
output
>>> print(result.sentiment)
'positive'
>>> print(result.confidence)
0.98
>>> print(result.keywords)
['love', 'best', 'ever']
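Structured output works by constraining the model with the JSON schema that Pydantic derives from your class (the exact wire format varies by provider). You can inspect that schema directly:

```python
from pydantic import BaseModel

class Analysis(BaseModel):
    sentiment: str
    confidence: float
    keywords: list[str]

# Pydantic turns the class into a JSON schema; providers use a schema
# like this to constrain the model's reply to the declared fields.
schema = Analysis.model_json_schema()
print(sorted(schema["properties"]))  # ['confidence', 'keywords', 'sentiment']
```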

Real Examples

Extract Data from Text

main.py
from pydantic import BaseModel
from connectonion import llm_do

class Invoice(BaseModel):
    invoice_number: str
    total_amount: float
    due_date: str

invoice_text = """
Invoice #INV-2024-001
Total: $1,234.56
Due: January 15, 2024
"""

invoice = llm_do(invoice_text, output=Invoice)
print(invoice.total_amount)
output
>>> print(invoice.total_amount)
1234.56

Use Custom Prompts

main.py
from connectonion import llm_do

# With a prompt file
summary = llm_do(
    long_article,
    system_prompt="prompts/summarizer.md"  # Loads from file
)

# With an inline prompt
translation = llm_do(
    "Hello world",
    system_prompt="You are a translator. Translate to Spanish only."
)
print(translation)
output
>>> print(translation)
Hola mundo

Quick Analysis Tool

main.py
from pydantic import BaseModel
from connectonion import llm_do

def analyze_feedback(text: str) -> str:
    """Analyze customer feedback with structured output."""

    class FeedbackAnalysis(BaseModel):
        category: str        # bug, feature, praise, complaint
        priority: str        # high, medium, low
        summary: str
        action_required: bool

    analysis = llm_do(text, output=FeedbackAnalysis)

    if analysis.action_required:
        return f"🚨 {analysis.priority.upper()}: {analysis.summary}"
    return f"📝 {analysis.category}: {analysis.summary}"

# Use in an agent
from connectonion import Agent
agent = Agent("support", tools=[analyze_feedback])
output
>>> result = analyze_feedback("The app crashes when I try to upload files!")
>>> print(result)
🚨 HIGH: Application crashes during file upload process

Supported Models

main.py
from connectonion import llm_do

# OpenAI models
llm_do("Hello", model="gpt-5")
llm_do("Hello", model="gpt-5-mini")
llm_do("Hello", model="gpt-5-nano")

# Google Gemini models
llm_do("Hello", model="gemini-2.5-pro")
llm_do("Hello", model="gemini-2.5-flash")

# Anthropic Claude models
llm_do("Hello", model="claude-sonnet-4-5")
llm_do("Hello", model="claude-haiku-4-5")
llm_do("Hello", model="claude-opus-4-5")
output
>>> llm_do("Hello", model="gpt-5")
'Hello! How can I assist you today?'
 
>>> llm_do("Hello", model="gemini-2.5-flash")
'Hello there! How can I help you?'
 
>>> llm_do("Hello", model="claude-haiku-4-5")
'Hello! How may I assist you today?'

Structured Output Compatibility

When using output= with Pydantic models, note these compatibility differences:

| Provider | Structured Output Support |
|---|---|
| OpenAI | All models |
| Google Gemini | All models |
| Anthropic Claude | Only 4.5/4.1 series (claude-sonnet-4-5, claude-opus-4-5, claude-opus-4-1, claude-haiku-4-5) |

Note: Legacy Claude models (claude-sonnet-4, claude-opus-4) do NOT support structured outputs. Use Claude 4.5 or 4.1 series for structured output tasks.
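If you route models dynamically, a small guard can enforce the table above before making a call. A sketch (this helper is hypothetical, not part of the connectonion API):

```python
# Hypothetical guard based on the compatibility table above
# (not part of connectonion).
CLAUDE_STRUCTURED = {
    "claude-sonnet-4-5",
    "claude-opus-4-5",
    "claude-opus-4-1",
    "claude-haiku-4-5",
}

def supports_structured_output(model: str) -> bool:
    """Return True if the model supports output= per the table above."""
    name = model.removeprefix("co/")
    if name.startswith("claude"):
        return name in CLAUDE_STRUCTURED
    # OpenAI and Gemini models all support structured output
    return True
```

Check `supports_structured_output(model)` before passing `output=` and fall back to plain-text parsing for legacy Claude models.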

main.py
from pydantic import BaseModel
from connectonion import llm_do

class Answer(BaseModel):
    result: int

# Works with all providers
llm_do("What is 2+2?", output=Answer, model="co/gpt-4o-mini")       # ✅
llm_do("What is 2+2?", output=Answer, model="co/gemini-2.5-flash")  # ✅
llm_do("What is 2+2?", output=Answer, model="co/claude-sonnet-4-5") # ✅

# Legacy Claude models do NOT support structured output
# llm_do("What is 2+2?", output=Answer, model="co/claude-sonnet-4")  # ❌
output
>>> llm_do("What is 2+2?", output=Answer, model="co/claude-sonnet-4-5")
Answer(result=4)

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| input | str | required | The input text/question |
| output | BaseModel | None | Pydantic model for structured output |
| system_prompt | str \| Path | None | System prompt (string or file path) |
| model | str | "co/gemini-2.5-flash" | Model to use (OpenAI, Gemini, or Claude) |
| temperature | float | 0.1 | Randomness (0 = deterministic, 2 = creative) |

What You Get

One-shot execution - Single LLM round, no loops
Type safety - Full IDE autocomplete with Pydantic
Flexible prompts - Inline strings or external files
Smart defaults - Fast model, low temperature
Clean errors - Clear messages when things go wrong

Common Patterns

Data Extraction

main.py
from pydantic import BaseModel
from connectonion import llm_do

class Person(BaseModel):
    name: str
    age: int
    occupation: str

person = llm_do("John Doe, 30, software engineer", output=Person)

print(f"Name: {person.name}")
print(f"Age: {person.age}")
print(f"Job: {person.occupation}")
output
>>> print(f"Name: {person.name}")
Name: John Doe
>>> print(f"Age: {person.age}")
Age: 30
>>> print(f"Job: {person.occupation}")
Job: software engineer

Quick Decisions

main.py
from connectonion import llm_do

is_urgent = llm_do("Customer says: My server is down!")
if "urgent" in is_urgent.lower():
    escalate()  # your escalation handler
output
>>> is_urgent = llm_do("Customer says: My server is down!")
>>> print(is_urgent)
This appears to be an urgent issue that requires immediate attention.

Format Conversion

main.py
from pydantic import BaseModel
from connectonion import llm_do

class JSONData(BaseModel):
    data: dict

json_result = llm_do("Convert to JSON: name=John age=30", output=JSONData)
print(json_result.data)
output
>>> print(json_result.data)
{'name': 'John', 'age': 30}

Validation

main.py
from connectonion import llm_do

def validate_input(user_text: str) -> bool:
    result = llm_do(
        f"Is this valid SQL? Reply yes/no only: {user_text}",
        temperature=0  # Maximum consistency
    )
    return result.strip().lower() == "yes"
output
>>> validate_input("SELECT * FROM users WHERE id = 1")
True
>>> validate_input("DROP TABLE; DELETE everything")
False
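Even at temperature=0, replies can include punctuation or trailing words, so comparing the raw string to "yes" is brittle. A defensive parser (a hypothetical helper, shown as a sketch):

```python
def parse_yes_no(reply: str) -> bool:
    """Interpret a yes/no LLM reply defensively: look only at the
    first word, ignoring case and trailing punctuation."""
    words = reply.strip().lower().split()
    return bool(words) and words[0].strip(".,!:") == "yes"
```

Returning `parse_yes_no(result)` instead of the strict equality check makes the validator tolerant of replies like "Yes." or "yes, it is".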

Tips

  1. Use low temperature (0-0.3) for consistent results
  2. Provide examples in your prompt for better accuracy
  3. Use Pydantic models for anything structured
  4. Cache prompts in files for reusability
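Tip 4 in practice: keeping system prompts in files lets you version and reuse them, and llm_do() accepts a file path directly. Loading one manually is equally simple (a sketch; the temporary directory stands in for a real prompts/ folder):

```python
from pathlib import Path
import tempfile

# Illustration only: a real project would keep prompts/ in the repo.
prompts = Path(tempfile.mkdtemp())
(prompts / "summarizer.md").write_text(
    "You are a summarizer. Reply in three bullet points."
)

# Equivalent to passing system_prompt="prompts/summarizer.md":
system_prompt = (prompts / "summarizer.md").read_text()
```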

Comparison with Agent

| Feature | llm_do() | Agent() |
|---|---|---|
| Purpose | One-shot calls | Multi-step workflows |
| Tools | No | Yes |
| Iterations | Always 1 | Up to max_iterations |
| State | Stateless | Maintains history |
| Best for | Quick tasks | Complex automation |
main.py
from connectonion import llm_do, Agent

# Use llm_do() for simple tasks
answer = llm_do("What's the capital of France?")

# Use Agent for multi-step workflows
agent = Agent("assistant", tools=[search, calculate])
result = agent.input("Find the population and calculate density")
output
>>> answer = llm_do("What's the capital of France?")
>>> print(answer)
The capital of France is Paris.
 
>>> result = agent.input("Find the population and calculate density")
>>> print(result)
I'll help you find the population and calculate the density. Let me search for the current data...

Error Handling

main.py
from connectonion import llm_do
from pydantic import ValidationError

try:
    result = llm_do("Analyze this", output=ComplexModel)
except ValidationError as e:
    print(f"Output didn't match model: {e}")
except Exception as e:
    print(f"LLM call failed: {e}")
output
>>> try:
... result = llm_do("Analyze this", output=ComplexModel)
... except ValidationError as e:
... print(f"Output didn't match model: {e}")
Output didn't match model: 2 validation errors for ComplexModel...
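Transient provider errors (rate limits, timeouts) are often worth retrying. A generic retry wrapper you could put around any one-shot call (a hypothetical helper, not part of connectonion):

```python
import time

def with_retries(call, attempts: int = 3, delay: float = 0.5):
    """Run `call` up to `attempts` times, sleeping `delay` seconds
    between tries; re-raise the last error if all attempts fail."""
    last_error = None
    for _ in range(attempts):
        try:
            return call()
        except Exception as e:  # deliberately broad: retry anything transient
            last_error = e
            time.sleep(delay)
    raise last_error
```

Usage: `result = with_retries(lambda: llm_do("Analyze this", output=ComplexModel))`.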

Next Steps

Star us on GitHub

If ConnectOnion saves you time, a ⭐ goes a long way — and earns you a coffee chat with our founder.