# Tracing
In addition to automatic instrumentation, tmam provides two tools for adding custom spans to your own code: the `@trace` decorator and the `start_trace()` context manager.
Use these to trace agent reasoning loops, custom tools, business logic, or any function that isn't covered by auto-instrumentation.
## The @trace Decorator

Wrap any function with `@trace` to create a span automatically each time it is called.
### Basic usage
```python
from tmam import trace

@trace
def my_function(input_text: str) -> str:
    # ... your logic
    return "result"
```
### With a role

The `role` parameter classifies the span type in the dashboard:
```python
from tmam import trace

@trace(role="agent")
def run_agent(task: str) -> str:
    ...

@trace(role="tool")
def search_web(query: str) -> list:
    ...

@trace(role="llm")
def call_llm(prompt: str) -> str:
    ...
```
Available roles: `"agent"`, `"tool"`, `"llm"`, `"memory"`, `"embedding"`, `"vectordb"`, `"framework"`, or any custom string.
### Async functions

The decorator supports both sync and async functions:
```python
import aiohttp
from tmam import trace

@trace(role="tool")
async def fetch_data(url: str) -> dict:
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()
```
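Dual sync/async support is typically achieved by checking whether the wrapped function is a coroutine function and picking the matching wrapper. Here is a minimal sketch of that pattern in plain Python (illustrative only, not tmam's actual code; `traced` is a made-up name):

```python
import asyncio
import functools
import inspect

def traced(func):
    """Sketch: pick a sync or async wrapper depending on the wrapped function."""
    calls = []  # stand-in for span creation: just log call names

    if inspect.iscoroutinefunction(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            calls.append(func.__name__)
            return await func(*args, **kwargs)
    else:
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            calls.append(func.__name__)
            return func(*args, **kwargs)

    wrapper.calls = calls  # expose the log for inspection
    return wrapper

@traced
def add(a, b):
    return a + b

@traced
async def fetch():
    await asyncio.sleep(0)
    return "data"

print(add(1, 2))             # 3
print(asyncio.run(fetch()))  # data
```

Because the coroutine check happens once at decoration time, the decorated async function still returns an awaitable, so `await fetch_data(...)` behaves exactly as it would without the decorator.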
### What gets captured automatically
- Function name and the `role` you provided
- Input arguments (`function.args`, `function.kwargs`)
- Return value (`gen_ai.content.completion` / `ai.result`)
- Exceptions with full stack traces
- `service.name` (your `application_name`) and `deployment.environment`
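The capture behavior can be illustrated with a small stand-in: a decorator that records arguments, the return value, and any raised exception on a dict-based "span" (a sketch, not tmam's internals; `capture` is a made-up name):

```python
import functools

def capture(func):
    """Sketch: record inputs, output, and exceptions, like an auto-captured span."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        span = {
            "name": func.__name__,
            "function.args": args,
            "function.kwargs": kwargs,
        }
        wrapper.last_span = span  # expose for inspection
        try:
            result = func(*args, **kwargs)
            span["ai.result"] = result
            return result
        except Exception as exc:
            span["exception"] = repr(exc)  # a real span records the full stack trace
            raise  # the exception still propagates to the caller
    wrapper.last_span = None
    return wrapper

@capture
def summarize(text: str) -> str:
    return text[:10]

summarize("quantum computing is hard")
print(summarize.last_span["ai.result"])  # quantum co
```

Note that the exception is re-raised after being recorded: tracing observes failures but never swallows them.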
## The start_trace() Context Manager
For more control — setting custom attributes mid-execution, tracing a block of code, or building agent loops — use start_trace():
```python
from tmam import start_trace

with start_trace("my-operation", role="agent") as span:
    # ... your logic
    # Optionally set attributes on the span
    span.set_attribute("custom.key", "value")
    span.set_attribute("items.processed", 42)
```
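The mechanics of a context-manager span can be sketched with `contextlib` (an illustrative stand-in, not tmam's implementation; `SketchSpan` and `start_span` are made-up names):

```python
from contextlib import contextmanager

class SketchSpan:
    """Minimal stand-in for the span yielded inside the with-block."""
    def __init__(self, name, role=None):
        self.name = name
        self.role = role
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

@contextmanager
def start_span(name, role=None):
    """Sketch only: yield a span for the with-block, then finish it."""
    span = SketchSpan(name, role)
    try:
        yield span
    finally:
        pass  # a real implementation ends and exports the span here

with start_span("my-operation", role="agent") as span:
    span.set_attribute("items.processed", 42)

print(span.attributes)  # {'items.processed': 42}
```

The `try/finally` is what guarantees the span is ended even when the block raises, which is why the context-manager form is the safer choice for tracing arbitrary blocks of code.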
### Building a multi-step agent trace
```python
from tmam import start_trace
from openai import OpenAI

client = OpenAI()

with start_trace("research-agent", role="agent") as agent_span:
    agent_span.set_attribute("agent.task", "research quantum computing")

    with start_trace("fetch-context", role="tool") as tool_span:
        # fetch_relevant_docs is your own retrieval helper, defined elsewhere
        context = fetch_relevant_docs("quantum computing")
        tool_span.set_attribute("docs.retrieved", len(context))

    # LLM call is auto-instrumented inside the agent span
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a research assistant."},
            {"role": "user", "content": f"Context: {context}\n\nSummarize quantum computing."},
        ],
    )
```
The resulting trace in the dashboard shows the nested hierarchy: `research-agent` → `fetch-context` → gpt-4o call.
## The TracedSpan Object

The span yielded by `start_trace()` is a `TracedSpan` that wraps an OpenTelemetry span:
```python
with start_trace("my-op") as span:
    if span:  # span can be None if tracing is not recording
        span.set_attribute("key", "value")
        span.add_event("checkpoint", {"step": 1})
        span.set_status("ok")      # or "error"
        span.record_exception(e)   # attach a caught exception object
```
### Available methods
| Method | Description |
|---|---|
| `span.set_attribute(key, value)` | Set a custom attribute on this span |
| `span.add_event(name, attributes)` | Add a point-in-time event |
| `span.record_exception(exception)` | Attach an exception to the span |
| `span.set_status("ok" / "error")` | Mark the span outcome |
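To make these semantics concrete, here is a minimal stand-in implementing the same four methods, plus a typical error-handling pattern (a sketch of the interface only, not tmam's `TracedSpan`; `MiniSpan` is a made-up name):

```python
class MiniSpan:
    """Sketch of the four-method span interface from the table above."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}
        self.events = []
        self.exceptions = []
        self.status = "unset"

    def set_attribute(self, key, value):
        self.attributes[key] = value

    def add_event(self, name, attributes=None):
        # events are point-in-time records, distinct from attributes
        self.events.append((name, attributes or {}))

    def record_exception(self, exception):
        self.exceptions.append(exception)

    def set_status(self, status):
        if status not in ("ok", "error"):
            raise ValueError("status must be 'ok' or 'error'")
        self.status = status

# Typical pattern: record the failure and mark the span before re-raising,
# so the dashboard shows which span failed and why.
span = MiniSpan("checkout")
try:
    raise ValueError("payment failed")
except ValueError as e:
    span.record_exception(e)
    span.set_status("error")

print(span.status)  # error
```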