# Lota Studio
Lota Studio is a self-contained developer tool for testing and debugging Lota SDK runtimes. It provides a web interface for chatting with agents, browsing tools, inspecting runtime state, and managing workstreams -- all without needing a full host application.
Studio is built on Hono (server) and Vite/React (client), running on port 4100.
## Getting Started

### Prerequisites
Studio requires the same infrastructure as any Lota SDK runtime:
- SurrealDB instance
- Redis instance
- Bifrost AI gateway
- Firecrawl API key
Start the infrastructure stack from the repository root:

```bash
cd infrastructure
docker compose up -d
```

### Running Studio
```bash
cd studio
bun install
bun dev  # Starts on http://localhost:4100
```

The dev server starts the Hono backend on port 4100. The Vite client dev server proxies API requests to it.
### Environment Variables
Studio reads the standard SDK environment variables:
| Variable | Description |
|---|---|
| `SURREALDB_URL` | SurrealDB WebSocket URL |
| `SURREALDB_NAMESPACE` | SurrealDB namespace |
| `SURREALDB_USER` | SurrealDB username |
| `SURREALDB_PASSWORD` | SurrealDB password |
| `REDIS_URL` | Redis connection URL |
| `AI_GATEWAY_URL` | Bifrost gateway URL |
| `AI_GATEWAY_KEY` | Bifrost virtual key |
| `FIRECRAWL_API_KEY` | Firecrawl API key |
| `S3_ENDPOINT` | S3 endpoint (optional for Studio) |
| `S3_BUCKET` | S3 bucket (defaults to `studio`) |
| `S3_ACCESS_KEY_ID` | S3 access key (optional for Studio) |
| `S3_SECRET_ACCESS_KEY` | S3 secret key (optional for Studio) |
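For local development these can be exported in the shell before starting Studio. The values below are placeholders for illustration, not the infrastructure stack's actual defaults -- substitute your own credentials and ports:

```bash
# Placeholder values -- replace with the credentials from your
# infrastructure stack before running `bun dev`.
export SURREALDB_URL="ws://localhost:8000/rpc"
export SURREALDB_NAMESPACE="lota"
export SURREALDB_USER="root"
export SURREALDB_PASSWORD="root"
export REDIS_URL="redis://localhost:6379"
export AI_GATEWAY_URL="http://localhost:8080"
export AI_GATEWAY_KEY="your-virtual-key"
export FIRECRAWL_API_KEY="fc-your-key"
```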
## Features

### Direct Chat
Chat directly with an AI model without going through the workstream turn lifecycle. Supports model selection, system prompt customization, reasoning effort control, and tool selection. Useful for testing prompts in isolation.
### Runtime-Aware Chat
Chat through the full SDK runtime pipeline -- context assembly, memory retrieval, agent resolution, and workstream message persistence. This mirrors how a production host would process a turn.
### Tool Browser
Browse all available tools with their schemas, and run any tool directly with custom input. Useful for verifying tool behavior independently of agent execution.
### Agent Inspector
List all configured system agents with their descriptions and prompt excerpts. View full prompts and run individual agents with custom tasks.
### Workstream Management
Create, list, rename, delete, and reset workstreams. Browse message history for any workstream. Studio automatically bootstraps direct workstreams for each agent in the roster.
### Debug Interface
- List and search organization memories
- Execute raw SurrealDB queries (SELECT and INFO only)
- List all workstreams (direct and group) with full state
- Reset all messages and memories
### Model Browser
List all available models from the Bifrost gateway, organized by provider.
## API Endpoints

### Chat
| Method | Path | Description |
|---|---|---|
| `POST` | `/api/chat` | Direct chat with a model. Accepts `messages`, `model`, optional `systemPrompt`, `tools`, and `reasoningEffort`. Returns a streaming UI message response. |
| `POST` | `/api/chat/runtime` | Runtime-aware chat. Accepts `messages` and `workstreamId`. Processes the turn through the full SDK pipeline. |
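As a quick smoke test, the direct-chat endpoint can be exercised from the command line. The field names (`messages`, `model`, `systemPrompt`) come from the table above, but the message object shape and the model id are assumptions for illustration -- pick a real model id from `GET /api/models`:

```bash
# Request body for POST /api/chat. The message shape and model id are
# illustrative placeholders, not confirmed defaults.
BODY='{
  "messages": [{"role": "user", "content": "Say hello in one sentence."}],
  "model": "openai/gpt-4o-mini",
  "systemPrompt": "You are a terse assistant."
}'
echo "$BODY"

# Against a running Studio instance (streams a UI message response):
#   curl -N -X POST http://localhost:4100/api/chat \
#     -H 'Content-Type: application/json' -d "$BODY"
```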
### Tools
| Method | Path | Description |
|---|---|---|
| `GET` | `/api/tools` | List all available tools with their names and schemas. |
| `POST` | `/api/tools/run` | Execute a tool directly. Accepts `tool` (name) and optional `input`. Returns the tool output. |
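A sketch of running a tool directly. The `tool` and `input` field names come from the table above; the tool name and input shape used here are placeholders -- list the real names and schemas via `GET /api/tools` first:

```bash
# The tool name (web_search) and its input shape are illustrative
# placeholders -- substitute a tool and schema from GET /api/tools.
BODY='{"tool": "web_search", "input": {"query": "lota sdk"}}'
echo "$BODY"

# Against a running Studio instance:
#   curl -X POST http://localhost:4100/api/tools/run \
#     -H 'Content-Type: application/json' -d "$BODY"
```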
### Models
| Method | Path | Description |
|---|---|---|
| `GET` | `/api/models` | List available models from the gateway, grouped by provider. |
### Runtime
| Method | Path | Description |
|---|---|---|
| `GET` | `/api/runtime` | Check runtime connection status. Returns `{ connected: boolean }`. |
| `POST` | `/api/runtime/connect` | Connect the Studio runtime to the database and Redis. |
| `POST` | `/api/runtime/disconnect` | Disconnect the Studio runtime. |
### Workstreams
| Method | Path | Description |
|---|---|---|
| `GET` | `/api/workstreams` | List group workstreams. |
| `GET` | `/api/workstreams/direct` | List direct (1:1 agent) workstreams. |
| `POST` | `/api/workstreams` | Create a new group workstream. Accepts optional `title`. |
| `GET` | `/api/workstreams/:id` | Get a single workstream by ID. |
| `PATCH` | `/api/workstreams/:id` | Rename a workstream. Accepts `title`. |
| `DELETE` | `/api/workstreams/:id` | Delete a workstream. |
| `GET` | `/api/workstreams/:id/messages` | List messages for a workstream. |
| `POST` | `/api/workstreams/:id/reset` | Clear all messages for a workstream. |
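A typical create-inspect-rename-delete cycle can be sketched with curl. The `title` field comes from the table above; `<id>` stands for the workstream id returned by the create call and is not filled in here:

```bash
BASE=http://localhost:4100/api/workstreams

# Body for creating a group workstream (title is optional).
CREATE_BODY='{"title": "Scratch workstream"}'
echo "$CREATE_BODY"

# Lifecycle against a running Studio instance; <id> is the id returned
# by the create call:
#   curl -X POST   "$BASE" -H 'Content-Type: application/json' -d "$CREATE_BODY"
#   curl           "$BASE/<id>/messages"    # browse message history
#   curl -X PATCH  "$BASE/<id>" -H 'Content-Type: application/json' \
#        -d '{"title": "Renamed"}'
#   curl -X POST   "$BASE/<id>/reset"       # clear messages
#   curl -X DELETE "$BASE/<id>"
```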
### Agents
| Method | Path | Description |
|---|---|---|
| `GET` | `/api/agents` | List all system agents with descriptions and prompt excerpts. |
| `GET` | `/api/agents/:id` | Get a single agent with its full prompt. |
| `POST` | `/api/agents/:id/run` | Run an agent with a custom task. Accepts `task` and optional `model`. Returns a streaming text response. |
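Running a single agent with a custom task might look like the following. The `task` and `model` fields come from the table above; the agent id in the path is assumed to match a roster name from the Configuration section, and the model id is a placeholder:

```bash
# The task text and model id are illustrative; the agent id in the
# path (chief) is assumed to match the roster -- confirm real ids
# via GET /api/agents.
BODY='{"task": "Summarize the current product strategy in three bullets.", "model": "openai/gpt-4o-mini"}'
echo "$BODY"

# Streams a plain text response:
#   curl -N -X POST http://localhost:4100/api/agents/chief/run \
#     -H 'Content-Type: application/json' -d "$BODY"
```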
### Debug
| Method | Path | Description |
|---|---|---|
| `GET` | `/api/debug/memories` | List organization memories. Optional `agent` query param for agent-scoped results. |
| `GET` | `/api/debug/memories/search` | Search memories by query. Requires `q` param, optional `agent` param. |
| `GET` | `/api/debug/workstreams` | List all workstreams (direct + group) with full state. |
| `POST` | `/api/debug/query` | Execute a raw SurrealDB query. Only `SELECT` and `INFO` statements are allowed. |
| `POST` | `/api/debug/reset-all` | Delete all messages, memories, memory relations, and memory history. |
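The raw-query endpoint can be sketched as follows. The request field name (`query`) and the table name (`memory`) are assumptions for illustration -- only `SELECT` and `INFO` statements are accepted, so anything else is rejected:

```bash
# Read-only SurrealDB query through the debug endpoint. The field
# name (query) and table name (memory) are assumptions; inspect the
# schema with an INFO statement if unsure.
BODY='{"query": "SELECT * FROM memory LIMIT 5"}'
echo "$BODY"

# Against a running Studio instance:
#   curl -X POST http://localhost:4100/api/debug/query \
#     -H 'Content-Type: application/json' -d "$BODY"
```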
## Configuration
Studio uses `createLotaRuntime` internally with a minimal configuration. It registers a built-in set of agents (`chief`, `ceo`, `cto`, `cpo`, `cmo`, `cfo`, `mentor`) and configures workstream bootstrap to provision direct workstreams for each.
Studio creates a fixed identity on startup:
- User: `user:lord` (name: "Lord", email: `lord@lota.studio`)
- Organization: `organization:studio_org` (name: "Lota Studio")
This identity is used for all workstream operations. Studio also starts a title generation worker so workstream titles are automatically generated from the first user message.
## Use Cases
- **Testing agent prompts and tools** -- Use direct chat to iterate on system prompts without going through the full runtime pipeline. Select different models and reasoning effort levels to compare behavior.
- **Debugging memory retrieval** -- Use the debug endpoints to list and search memories, verifying that the memory extraction and retrieval pipeline produces the expected results.
- **Verifying tool behavior** -- Run tools directly via the tool browser with custom input. Check that tool schemas, validation, and execution work correctly before wiring them into agent configurations.
- **Inspecting workstream state** -- Browse workstream message history, verify that turns are persisted correctly, and inspect the full workstream lifecycle from creation to archival.
- **End-to-end runtime testing** -- Use runtime-aware chat to exercise the complete turn pipeline: context assembly, memory search, agent resolution, tool execution, and message persistence.
- **Comparing models** -- Switch between models in direct chat to evaluate response quality, reasoning capabilities, and tool-use behavior across providers.