Lota Studio

Lota Studio is a self-contained developer tool for testing and debugging Lota SDK runtimes. It provides a web interface for chatting with agents, browsing tools, inspecting runtime state, and managing workstreams -- all without needing a full host application.

Studio is built on Hono (server) and Vite/React (client), running on port 4100.

Getting Started

Prerequisites

Studio requires the same infrastructure as any Lota SDK runtime:

  • SurrealDB instance
  • Redis instance
  • Bifrost AI gateway
  • Firecrawl API key

Start the infrastructure stack from the repository root:

bash
cd infrastructure
docker compose up -d

Running Studio

bash
cd studio
bun install
bun dev  # Starts on http://localhost:4100

The dev server starts the Hono backend on port 4100. The Vite client dev server proxies API requests to it.

Environment Variables

Studio reads the standard SDK environment variables:

| Variable | Description |
| --- | --- |
| SURREALDB_URL | SurrealDB WebSocket URL |
| SURREALDB_NAMESPACE | SurrealDB namespace |
| SURREALDB_USER | SurrealDB username |
| SURREALDB_PASSWORD | SurrealDB password |
| REDIS_URL | Redis connection URL |
| AI_GATEWAY_URL | Bifrost gateway URL |
| AI_GATEWAY_KEY | Bifrost virtual key |
| FIRECRAWL_API_KEY | Firecrawl API key |
| S3_ENDPOINT | S3 endpoint (optional for Studio) |
| S3_BUCKET | S3 bucket (defaults to studio) |
| S3_ACCESS_KEY_ID | S3 access key (optional for Studio) |
| S3_SECRET_ACCESS_KEY | S3 secret key (optional for Studio) |
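For local development against the docker compose stack, a .env file might look like the following. Every value here is an illustrative placeholder, not a documented default -- adjust each one to match your own infrastructure:

```shell
# Illustrative .env for local Studio development.
# All values are placeholders; substitute your actual credentials and hosts.
SURREALDB_URL=ws://localhost:8000/rpc
SURREALDB_NAMESPACE=lota
SURREALDB_USER=root
SURREALDB_PASSWORD=root
REDIS_URL=redis://localhost:6379
AI_GATEWAY_URL=http://localhost:8080
AI_GATEWAY_KEY=vk-local-dev
FIRECRAWL_API_KEY=fc-your-key
# S3 settings are optional for Studio; S3_BUCKET defaults to "studio".
```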

Features

Direct Chat

Chat directly with an AI model without going through the workstream turn lifecycle. Supports model selection, system prompt customization, reasoning effort control, and tool selection. Useful for testing prompts in isolation.

Runtime-Aware Chat

Chat through the full SDK runtime pipeline -- context assembly, memory retrieval, agent resolution, and workstream message persistence. This mirrors how a production host would process a turn.

Tool Browser

Browse all available tools with their schemas, and run any tool directly with custom input. Useful for verifying tool behavior independently of agent execution.

Agent Inspector

List all configured system agents with their descriptions and prompt excerpts. View full prompts and run individual agents with custom tasks.

Workstream Management

Create, list, rename, delete, and reset workstreams. Browse message history for any workstream. Studio automatically bootstraps direct workstreams for each agent in the roster.

Debug Interface

  • List and search organization memories
  • Execute raw SurrealDB queries (SELECT and INFO only)
  • List all workstreams (direct and group) with full state
  • Reset all messages and memories

Model Browser

List all available models from the Bifrost gateway, organized by provider.

API Endpoints

Chat

| Method | Path | Description |
| --- | --- | --- |
| POST | /api/chat | Direct chat with a model. Accepts messages, model, optional systemPrompt, tools, and reasoningEffort. Returns a streaming UI message response. |
| POST | /api/chat/runtime | Runtime-aware chat. Accepts messages and workstreamId. Processes the turn through the full SDK pipeline. |
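As a rough sketch, a direct-chat request could be composed as below. The field names come from the endpoint description above; the message shape (a plain role/content pair) and the model id are assumptions, not confirmed details of the Studio API:

```typescript
// Sketch of a POST /api/chat request body. Field names follow the table
// above; the message shape and model id are assumptions for illustration.
type DirectChatRequest = {
  messages: { role: "user" | "assistant"; content: string }[];
  model: string;
  systemPrompt?: string;
  tools?: string[];
  reasoningEffort?: "low" | "medium" | "high";
};

const request: DirectChatRequest = {
  messages: [{ role: "user", content: "Draft a short launch announcement." }],
  model: "anthropic/claude-sonnet-4", // placeholder model id
  systemPrompt: "Keep answers under 100 words.",
  reasoningEffort: "low",
};

// Sends the request when the Studio server is running on port 4100.
async function sendDirectChat(body: DirectChatRequest): Promise<Response> {
  return fetch("http://localhost:4100/api/chat", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  });
}
```

The response is a stream, so a caller would read it incrementally rather than awaiting a single JSON body.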

Tools

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/tools | List all available tools with their names and schemas. |
| POST | /api/tools/run | Execute a tool directly. Accepts tool (name) and optional input. Returns the tool output. |
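A small helper for running a tool directly might look like this. The tool and input field names come from the table above; the tool name in the usage comment is hypothetical:

```typescript
// Builds the POST /api/tools/run request body; separated out so it can be
// inspected without a running server.
function buildToolRunBody(tool: string, input?: unknown): string {
  return JSON.stringify({ tool, input });
}

// Executes a tool by name against a running Studio server (sketch only).
async function runTool(tool: string, input?: unknown): Promise<unknown> {
  const res = await fetch("http://localhost:4100/api/tools/run", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: buildToolRunBody(tool, input),
  });
  if (!res.ok) throw new Error(`tool run failed: ${res.status}`);
  return res.json();
}

// Example call (hypothetical tool name; requires a running Studio server):
//   const output = await runTool("web_search", { query: "latest Bun release" });
```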

Models

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/models | List available models from the gateway, grouped by provider. |

Runtime

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/runtime | Check runtime connection status. Returns { connected: boolean }. |
| POST | /api/runtime/connect | Connect the Studio runtime to the database and Redis. |
| POST | /api/runtime/disconnect | Disconnect the Studio runtime. |

Workstreams

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/workstreams | List group workstreams. |
| GET | /api/workstreams/direct | List direct (1:1 agent) workstreams. |
| POST | /api/workstreams | Create a new group workstream. Accepts optional title. |
| GET | /api/workstreams/:id | Get a single workstream by ID. |
| PATCH | /api/workstreams/:id | Rename a workstream. Accepts title. |
| DELETE | /api/workstreams/:id | Delete a workstream. |
| GET | /api/workstreams/:id/messages | List messages for a workstream. |
| POST | /api/workstreams/:id/reset | Clear all messages for a workstream. |

Agents

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/agents | List all system agents with descriptions and prompt excerpts. |
| GET | /api/agents/:id | Get a single agent with its full prompt. |
| POST | /api/agents/:id/run | Run an agent with a custom task. Accepts task and optional model. Returns a streaming text response. |

Debug

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/debug/memories | List organization memories. Optional agent query param for agent-scoped results. |
| GET | /api/debug/memories/search | Search memories by query. Requires q param, optional agent param. |
| GET | /api/debug/workstreams | List all workstreams (direct + group) with full state. |
| POST | /api/debug/query | Execute a raw SurrealDB query. Only SELECT and INFO statements are allowed. |
| POST | /api/debug/reset-all | Delete all messages, memories, memory relations, and memory history. |
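The SELECT/INFO restriction on /api/debug/query amounts to a read-only statement guard. A minimal sketch of such a check -- not Studio's actual implementation -- might look like this:

```typescript
// Hypothetical guard mirroring the documented restriction: only SELECT and
// INFO statements may pass through /api/debug/query. This is a sketch of the
// idea, not Studio's actual implementation.
function isReadOnlyQuery(query: string): boolean {
  // Split on semicolons so each statement in a multi-statement query is
  // checked individually; reject empty queries outright.
  const stmts = query
    .split(";")
    .map((stmt) => stmt.trim())
    .filter((stmt) => stmt.length > 0);
  return stmts.length > 0 && stmts.every((stmt) => /^(select|info)\b/i.test(stmt));
}
```

Checking every statement matters: a query like "SELECT 1; DELETE message" would otherwise smuggle a write past a check that only inspected the first keyword.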

Configuration

Studio uses createLotaRuntime internally with a minimal configuration. It registers a built-in set of agents (chief, ceo, cto, cpo, cmo, cfo, mentor) and configures workstream bootstrap to provision direct workstreams for each.

Studio creates a fixed identity on startup:

  • User: user:lord (name: "Lord", email: lord@lota.studio)
  • Organization: organization:studio_org (name: "Lota Studio")

This identity is used for all workstream operations. Studio also starts a title generation worker so workstream titles are automatically generated from the first user message.

Use Cases

Testing agent prompts and tools -- Use direct chat to iterate on system prompts without going through the full runtime pipeline. Select different models and reasoning effort levels to compare behavior.

Debugging memory retrieval -- Use the debug endpoints to list and search memories, verifying that the memory extraction and retrieval pipeline produces the expected results.

Verifying tool behavior -- Run tools directly via the tool browser with custom input. Check that tool schemas, validation, and execution work correctly before wiring them into agent configurations.

Inspecting workstream state -- Browse workstream message history, verify that turns are persisted correctly, and inspect the full workstream lifecycle from creation to archival.

End-to-end runtime testing -- Use runtime-aware chat to exercise the complete turn pipeline: context assembly, memory search, agent resolution, tool execution, and message persistence.

Comparing models -- Switch between models in direct chat to evaluate response quality, reasoning capabilities, and tool-use behavior across providers.