Prerequisites
Before writing code, make sure you have the right project-level credentials.
Project API key
Generate from the Sepurux app workspace settings.
Project ID
Use a project-scoped UUID for all /v1 requests.
Campaign ID
Create one campaign first, then reuse it in SDK and CI.
Install
Install the recorder for your language. Each SDK auto-uploads traces on exit.
pip install sepurux
Environment Setup
Define these variables once in your local shell, .env file, or CI secrets.
SEPURUX_API_BASE_URL=https://app.sepurux.dev/api/backend
SEPURUX_API_KEY=<project_api_key>
SEPURUX_PROJECT_ID=<project_uuid>
SEPURUX_CAMPAIGN_ID=<campaign_uuid>
Record Your First Trace
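Before instrumenting anything, it can help to fail fast if any of the variables above are missing. A minimal stdlib-only sketch (a hypothetical helper, not part of the Sepurux SDK):

```python
import os

# The four variables every Sepurux SDK call expects (from the setup above)
REQUIRED_VARS = (
    "SEPURUX_API_BASE_URL",
    "SEPURUX_API_KEY",
    "SEPURUX_PROJECT_ID",
    "SEPURUX_CAMPAIGN_ID",
)

def missing_sepurux_vars(environ=os.environ):
    """Return the required Sepurux variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]
```

Call missing_sepurux_vars() at process start and abort if it returns anything. It reads os.environ by default but accepts any mapping, which keeps it easy to test.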
Instrument one workflow in code and confirm trace capture in Sepurux.
from sepurux import SepuruxClient
client = SepuruxClient.from_env()
with client.trace("customer_refund_flow", {"ticket_id": "t-101"}) as trace:
    trace.model_step("classify_ticket", {"ticket_id": "t-101"})
    trace.tool_call("payments.refund", {"payment_id": "pay_123", "amount": 4200})
    trace.tool_result("payments.refund", {"refund_id": "rf_123", "status": "queued"})

print(trace.trace_id)
OpenAI Integration
instrument_openai wraps any openai.OpenAI client and records every chat.completions.create call automatically — no manual trace.model_step() needed.
import os

import openai
from sepurux import SepuruxClient
from sepurux.integrations.openai import instrument_openai
client = SepuruxClient.from_env()
openai_client = openai.OpenAI()
with client.trace(
    "customer_refund_flow",
    {"ticket_id": "t-101"},
    campaign_id=os.environ["SEPURUX_CAMPAIGN_ID"],
) as trace:
    ai = instrument_openai(openai_client, recorder=trace)
    response = ai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Classify this support ticket"}],
    )

print(trace.trace_id)
print(trace.run_id)
Requires pip install openai. Records an llm_call event with model, messages, output, and latency. Tool calls returned by the model are automatically recorded as tool_call events.
LangChain Integration
SepuruxCallbackHandler records LLM calls, tool calls, and agent outputs from any LangChain chain or agent automatically — no manual instrumentation.
import os

from sepurux.integrations.langchain import SepuruxCallbackHandler

handler = SepuruxCallbackHandler(
    "customer_refund_flow",
    {"ticket_id": "t-101"},
    campaign_id=os.environ["SEPURUX_CAMPAIGN_ID"],
)
# Pass to any LangChain chain, agent, or LLM
result = agent_executor.invoke(
    {"input": "Process refund for ticket t-101"},
    config={"callbacks": [handler]},
)
# Upload trace and start reliability run
handler.finish()
print(handler.trace_id)
print(handler.run_id)
# Or use as a context manager — finish() is called automatically
with SepuruxCallbackHandler("refund_flow", campaign_id="...") as handler:
    result = chain.invoke(inputs, config={"callbacks": [handler]})

Requires pip install langchain-core. Records LLM calls with latency, tool call/result pairs, and the final agent output.
Create a Mutation Pack
Use the Basic Builder in the app to generate valid mutation packs without hand-writing JSON.
# In the app UI:
# 1) Open Mutation Packs -> Create Mutation Pack
# 2) Select Basic Builder and choose a preset + intensity
# 3) Optional: paste trace_id and click "Load Tools"
# 4) Save, then copy mutation_pack_id from the packs table
Run Checks from Recorded Traces
Pass campaign_id and mutation_pack when recording to auto-enqueue a reliability run, then promote the same run pattern to CI.
import os
from sepurux import SepuruxClient
client = SepuruxClient.from_env()
with client.trace(
    "customer_refund_flow",
    {"ticket_id": "t-101"},
    campaign_id=os.environ["SEPURUX_CAMPAIGN_ID"],
    mutation_pack="sepurux.core.reliability",
) as trace:
    trace.model_step("classify_ticket", {"ticket_id": "t-101"})
    trace.tool_call("payments.refund", {"payment_id": "pay_123", "amount": 4200})
    trace.tool_result("payments.refund", {"refund_id": "rf_123", "status": "queued"})

print(trace.trace_id)
print(trace.run_id)
Next step: Gate in CI
Once this run is stable, add the same trace + campaign path to GitHub workflows so release gating uses production-like scenarios.
Open CI/CD guide
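One possible shape for that workflow, assuming the recording code above lives in a record_trace.py script. The job name, secret names, and script path are illustrative, not a supported configuration; see the CI/CD guide for the real setup.

```yaml
# Illustrative GitHub Actions job fragment; adapt names and secrets to your repo
jobs:
  reliability-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install sepurux
      - name: Record trace and enqueue reliability run
        env:
          SEPURUX_API_BASE_URL: https://app.sepurux.dev/api/backend
          SEPURUX_API_KEY: ${{ secrets.SEPURUX_API_KEY }}
          SEPURUX_PROJECT_ID: ${{ secrets.SEPURUX_PROJECT_ID }}
          SEPURUX_CAMPAIGN_ID: ${{ secrets.SEPURUX_CAMPAIGN_ID }}
        run: python record_trace.py
```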