# UI Package
`@lota-sdk/ui` provides headless React hooks and utilities for building chat UIs on top of the Lota SDK runtime. It contains no product styling -- only unstyled helpers for chat transport, message management, tool rendering, and session lifecycle.
## Installation
```bash
bun add @lota-sdk/ui
```

Peer dependencies: `react`, `@ai-sdk/react`, `ai`, `@tanstack/react-query`, and `@lota-sdk/shared`.
## Package Structure
```
@lota-sdk/ui
  chat/     # Transport, message utilities, attachment handling
  runtime/  # React hooks for threads, sessions, and chat state
  tools/    # Tool registry and execution plan view models
```

All exports are re-exported from the package root, but you can also import from subpaths:
```ts
import { createChatTransport } from '@lota-sdk/ui/chat'
import { useThreadChat } from '@lota-sdk/ui/runtime'
import { createToolRegistry } from '@lota-sdk/ui/tools'
```

## Chat Transport
Set up the chat connection using `createChatTransport`. This wraps the Vercel AI SDK `DefaultChatTransport`:
```ts
import { createChatTransport } from '@lota-sdk/ui/chat'

const transport = createChatTransport({
  api: '/api/chat',
  headers: () => ({ Authorization: `Bearer ${token}` }),
})
```

The transport is passed to chat hooks and handles HTTP streaming between the client and server.
### Options
`createChatTransport` accepts the same options as the AI SDK `HttpChatTransportInitOptions`:

| Field | Type | Description |
|---|---|---|
| `api` | `string` | Chat API endpoint URL |
| `headers` | `HeadersInit \| () => HeadersInit` | Request headers, typically used for auth tokens |
| `body` | `Record<string, unknown>` | Extra fields merged into every request body |
| `credentials` | `RequestCredentials` | Fetch credentials mode |
| `fetch` | `typeof fetch` | Custom fetch implementation |
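Because `headers` also accepts a function, it can compute fresh values each time it is invoked, which is convenient for rotating auth tokens. A minimal sketch, assuming a hypothetical `getAccessToken` helper (not part of the package):

```typescript
// Illustrative headers factory. getAccessToken is a hypothetical helper
// supplied by your app; the function form of `headers` lets the transport
// pick up a refreshed token on later requests.
function makeAuthHeaders(getAccessToken: () => string): () => Record<string, string> {
  return () => ({ Authorization: `Bearer ${getAccessToken()}` })
}
```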
## Chat Hooks

### useThreadChat / useLotaChat
The primary chat hook wrapping the Vercel AI SDK `useChat`. Manages message sending, streaming, optimistic updates, stop/regenerate, and tool approval responses.

`useLotaChat` is an alias for `useThreadChat` -- they are identical.
```ts
import { useThreadChat } from '@lota-sdk/ui/runtime'

const chat = useThreadChat({
  threadId: 'workstream:abc123',
  initialMessages: [],
  transport,
  onMessagesSettled: () => refetchMessages(),
  setThreadRunningState: (isRunning) => setRunning(isRunning),
  notifyError: (msg, err) => toast.error(msg),
  stopRemoteRun: (threadId) => api.stopRun(threadId),
  experimentalThrottle: 50,
})
```

Options:
| Field | Type | Description |
|---|---|---|
| `threadId` | `string \| null` | Active workstream ID. `null` disables the chat. |
| `initialMessages` | `UIMessage[]` | Seed messages loaded from the server. |
| `transport` | `ChatTransport` | Transport created by `createChatTransport`. |
| `messageMetadataSchema` | `Schema` | Optional Zod schema for message metadata validation. |
| `dataPartSchemas` | `Schema` | Optional schemas for custom data parts. |
| `onMessagesSettled` | `() => void` | Called when streaming finishes or errors. |
| `setThreadRunningState` | `(isRunning: boolean) => void` | Callback to track whether the thread is actively streaming. |
| `notifyError` | `(message: string, error: Error) => void` | Error notification callback. |
| `stopRemoteRun` | `(threadId: string) => Promise<unknown>` | Server-side stop handler. Called alongside the client-side stream abort. |
| `experimentalThrottle` | `number` | Throttle interval in ms for streaming updates. Defaults to 50. |
| `sendAutomaticallyWhen` | `function` | Condition for automatic message sending (e.g., tool approval responses). Defaults to `lastAssistantMessageIsCompleteWithApprovalResponses`. |
| `resume` | `boolean` | Whether to resume an in-progress stream on mount. |
Return value:
| Field | Type | Description |
|---|---|---|
| `messages` | `UIMessage[]` | Current messages, including streamed content. |
| `status` | `ChatStatus` | `'ready'`, `'streaming'`, `'submitted'`, or `'error'`. |
| `error` | `Error \| undefined` | Latest error, if any. |
| `clearError` | `() => void` | Dismiss the current error. |
| `sendMessage` | `function` | Send a new user message. |
| `regenerate` | `function` | Regenerate the last assistant response. |
| `stop` | `() => Promise<void>` | Stop streaming (both client and server side). |
| `setMessages` | `function` | Imperatively replace the message list. |
| `addToolApprovalResponse` | `function` | Approve or reject a tool invocation. |
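The `status` field can drive UI state directly. A minimal sketch (not part of the package) that derives composer availability from the documented status values:

```typescript
// Illustrative helper: decide whether the composer should accept input,
// based on the ChatStatus values documented above. Not exported by
// @lota-sdk/ui.
type ChatStatus = 'ready' | 'streaming' | 'submitted' | 'error'

function canSendMessage(status: ChatStatus, threadId: string | null): boolean {
  // A null threadId disables the chat entirely; sending is also blocked
  // while a response is submitted or streaming.
  if (threadId === null) return false
  return status === 'ready' || status === 'error'
}
```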
### useThreadManagement
Manages workstream CRUD operations: listing, creating, archiving, unarchiving, deleting, and renaming threads. Handles auto-selection of the current thread and pagination.
```ts
import { useThreadManagement } from '@lota-sdk/ui/runtime'

const management = useThreadManagement({
  threads,
  coreThreads,
  archivedThreads,
  currentThreadId,
  selectedAgentId,
  isLoadingThreads,
  isRefetchingThreads,
  hasMoreThreads,
  isFetchingNextThreadsPage,
  fetchNextThreadsPageRaw: () => fetchNextPage(),
  createThread: () => api.createWorkstream(),
  renameThread: ({ threadId, title }) => api.rename(threadId, title),
  archiveThread: (id) => api.archive(id),
  unarchiveThread: (id) => api.unarchive(id),
  deleteThread: (id) => api.delete(id),
  setCurrentThreadId,
  setSelectedAgentId,
  defaultThreadTitle: 'New Chat',
  primaryDirectAgentId: 'chief',
})
```

Key behaviors:
- Automatically selects the first thread (or the primary direct agent thread) when no thread is selected.
- Reuses an existing empty group thread instead of creating duplicates.
- On archive/delete of the current thread, switches to the next available thread.
- Resolves placeholder thread IDs (e.g., `__placeholder_chief`) to real direct workstream IDs.
- Supports infinite scroll pagination via `fetchNextThreadsPage`.
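The placeholder-resolution logic is internal to the hook, but its shape can be sketched. The following is illustrative only; it assumes the `__placeholder_<agentId>` format shown above and a hypothetical map from agent ID to direct workstream ID:

```typescript
// Illustrative sketch of placeholder thread ID resolution (the real logic
// lives inside useThreadManagement). directThreadsByAgent is a hypothetical
// lookup from agent ID to the agent's direct workstream ID.
const PLACEHOLDER_PREFIX = '__placeholder_'

function resolveThreadId(
  threadId: string,
  directThreadsByAgent: Record<string, string>,
): string {
  if (!threadId.startsWith(PLACEHOLDER_PREFIX)) return threadId
  const agentId = threadId.slice(PLACEHOLDER_PREFIX.length)
  // Fall back to the placeholder itself when no direct thread exists yet.
  return directThreadsByAgent[agentId] ?? threadId
}
```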
Return value:
| Field | Description |
|---|---|
| `handleCreateNewThread` | Creates a new group thread or reuses an existing draft. |
| `handleSwitchThread` | Switches to a given thread, resolving placeholders. |
| `handleRenameThread` | Renames a mutable thread. |
| `handleArchiveThread` | Archives a thread and switches away if it was active. |
| `handleUnarchiveThread` | Restores an archived thread. |
| `handleDeleteThread` | Deletes a thread and switches away if it was active. |
| `fetchNextThreadsPage` | Loads the next page of threads. |
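The "switches away if it was active" behavior can be sketched as pure selection logic. This is illustrative, not the package's implementation:

```typescript
// Illustrative sketch: after archiving or deleting a thread, pick the next
// thread to select. Only switches when the removed thread was active;
// returns null when no threads remain.
interface ThreadRef { id: string }

function nextThreadAfterRemoval(
  threads: ThreadRef[],
  removedId: string,
  currentId: string | null,
): string | null {
  if (currentId !== removedId) return currentId
  const remaining = threads.filter((t) => t.id !== removedId)
  return remaining[0]?.id ?? null
}
```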
### useActiveThreadSession
Composes `useThreadChat` and `useSessionMessages` into a single session manager. Handles message synchronization between streamed content and persisted server state, file uploads, validation, and loading states.
```ts
import { useActiveThreadSession } from '@lota-sdk/ui/runtime'

const session = useActiveThreadSession({
  threadId,
  thread,
  sessionMessagesQueryKey: ['messages', threadId],
  fetchLatestPage: (id) => api.getMessages(id),
  fetchOlderPage: (id, cursor) => api.getMessages(id, cursor),
  pollIntervalMs: 3000,
  pollEnabled: true,
  threadChat: { transport, messageMetadataSchema },
  buildUserMessage: ({ textContent, fileParts }) => ({ /* ... */ }),
  uploadFiles: ({ threadId, files }) => api.upload(threadId, files),
  notifyError: (msg, err) => toast.error(msg),
})
```

Return value:
| Field | Type | Description |
|---|---|---|
| `messages` | `UIMessage[]` | Merged stream + persisted messages. |
| `status` | `ChatStatus` | Current chat status. |
| `sendMessage` | `function` | Validates, uploads files, and sends a message. |
| `regenerate` | `function` | Regenerates the last assistant response. |
| `onStop` | `function` | Stops the active stream. |
| `threadLoadingState` | `RuntimeLoadingState` | Granular loading state (idle, loading, ready, running). |
| `hasOlderMessages` | `boolean` | Whether more history pages are available. |
| `loadOlderMessages` | `function` | Fetches the next page of older messages. |
| `addToolApprovalResponse` | `function` | Approve or reject a pending tool call. |
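The merged `messages` value combines the live stream with persisted server state. The sketch below shows the general idea, similar in spirit to `mergeChatMessages` (deduplicate by ID, sort by timestamp); the types and tie-breaking rule here are assumptions, not the hook's actual implementation:

```typescript
// Illustrative merge of streamed and persisted messages: deduplicate by ID
// (streamed copies win over stale persisted copies) and order by createdAt.
interface Msg { id: string; createdAt: number; text: string }

function mergeStreamAndPersisted(streamed: Msg[], persisted: Msg[]): Msg[] {
  const byId = new Map<string, Msg>()
  // Insert persisted first so streamed entries overwrite same-ID copies.
  for (const m of persisted) byId.set(m.id, m)
  for (const m of streamed) byId.set(m.id, m)
  return [...byId.values()].sort((a, b) => a.createdAt - b.createdAt)
}
```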
### useSessionMessages
Paginated message history backed by `@tanstack/react-query` infinite queries. Supports polling, cursor-based pagination, and deduplication.
```ts
import { useSessionMessages } from '@lota-sdk/ui/runtime'

const { messages, isLoading, hasOlderMessages, loadOlderMessages, refreshLatestPage } =
  useSessionMessages({
    sessionId: threadId,
    queryKey: ['messages', threadId],
    fetchLatestPage: (id) => api.getLatestMessages(id),
    fetchOlderPage: (id, cursor) => api.getOlderMessages(id, cursor),
    pollIntervalMs: 3000,
    pollEnabled: true,
  })
```

The `fetchLatestPage` and `fetchOlderPage` functions must return a `SessionMessagesPageResponse`:
```ts
interface SessionMessagesPageResponse {
  messages: unknown[]
  hasMore: boolean
  prevCursor: string | null
}
```

## Message Utilities
### messages.ts
Helpers for normalizing and merging message arrays:
```ts
import { normalizeChatMessages, mergeChatMessages } from '@lota-sdk/ui/chat'

// Sort messages by createdAt timestamp, ensuring consistent ordering
const sorted = normalizeChatMessages(rawMessages)

// Merge multiple message lists, deduplicating by ID and sorting
const merged = mergeChatMessages(streamedMessages, persistedMessages)
```

### message-parts.ts
Helpers for iterating and grouping AI SDK message parts:
```ts
import {
  readToolPart,
  consumeReasoningGroup,
  consumeTextGroup,
} from '@lota-sdk/ui/chat'

// Extract a tool part from a message part (returns null if not a tool part)
const toolPart = readToolPart(part)

// Group consecutive reasoning parts into a single block
const reasoning = consumeReasoningGroup(entries, startIndex, isStreaming)
// Returns: { nextEntryIndex, stablePartIndex, text, isStreaming }

// Group consecutive text parts into a single block
const textGroup = consumeTextGroup(entries, startIndex, { sanitizeText })
// Returns: { nextEntryIndex, stablePartIndex, text }
```

These are useful when rendering message content where consecutive text or reasoning parts should be visually grouped.
## Tool Registry
Register tool UI components for rendering tool invocations in chat:
```ts
import { createToolRegistry } from '@lota-sdk/ui/tools'

const registry = createToolRegistry({
  userQuestions: { component: UserQuestionsView, renderMode: 'standalone' },
  consultTeam: { component: ConsultTeamView, renderMode: 'framed' },
  createExecutionPlan: { component: PlanView, renderMode: 'standalone' },
})
```

### Render Modes
| Mode | Description |
|---|---|
| `framed` | The tool UI is rendered inside a container frame provided by the host. This is the default when `renderMode` is not specified. Suitable for compact tool views that fit within the chat message flow. |
| `standalone` | The tool UI manages its own layout and takes full width. Use for complex tools like execution plans or multi-step forms that need more space. |
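The registry's behavior can be sketched as a thin lookup with a `'framed'` default. This is not the package's implementation (component types are stubbed as strings here; the real registry stores React components):

```typescript
// Illustrative registry sketch: entries map tool names to a component and
// an optional render mode, and resolution falls back to 'framed'.
interface ToolEntry { component: string; renderMode?: 'framed' | 'standalone' }

function makeRegistry(entries: Record<string, ToolEntry>) {
  return {
    isRegisteredTool: (name: string) => name in entries,
    getToolComponent: (name: string) => entries[name]?.component ?? null,
    // renderMode defaults to 'framed' when unspecified.
    resolveToolRenderMode: (name: string) => entries[name]?.renderMode ?? 'framed',
  }
}
```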
### Registry API
```ts
// Check if a tool has a registered UI component
registry.isRegisteredTool('userQuestions') // true

// Get the component for a tool
const Component = registry.getToolComponent('userQuestions')

// Resolve the render mode for a tool (defaults to 'framed')
const mode = registry.resolveToolRenderMode('consultTeam') // 'framed'
```

## Execution Plan View Models
Helpers for rendering execution plan tool results:
```ts
import {
  buildExecutionPlanToolViewModel,
  getExecutionPlanStatusLabel,
  getExecutionPlanOwnerLabel,
  getExecutionPlanActionLabel,
  getLatestExecutionPlanResult,
} from '@lota-sdk/ui/tools'
```

### buildExecutionPlanToolViewModel
Builds a complete view model from raw tool input/output:
```ts
const vm = buildExecutionPlanToolViewModel({
  input: toolInvocation.args,
  output: toolInvocation.result,
  isRunning: status === 'streaming',
})

// vm.title           — Plan title (from result or input fallback)
// vm.actionLabel     — e.g. "Execution plan created", "Updating execution plan"
// vm.statusLabel     — e.g. "Draft", "Executing", "Completed"
// vm.progressSummary — e.g. "3/7 complete"
// vm.hasPlan         — Whether a plan exists in the result
// vm.plan            — The full plan object (or null)
// vm.latestEvent     — Most recent event message
```

### Status Labels
| Status | Label |
|---|---|
| `draft` | Draft |
| `executing` | Executing |
| `blocked` | Blocked |
| `completed` | Completed |
| `aborted` | Aborted |
| (other) | Idle |
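The mapping in the table above can be sketched directly. The package exports this as `getExecutionPlanStatusLabel`; the signature below is an assumption:

```typescript
// Sketch of the status-to-label mapping documented above; any status not
// in the table falls back to "Idle".
function statusLabel(status: string): string {
  const labels: Record<string, string> = {
    draft: 'Draft',
    executing: 'Executing',
    blocked: 'Blocked',
    completed: 'Completed',
    aborted: 'Aborted',
  }
  return labels[status] ?? 'Idle'
}
```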
### Owner Labels
`getExecutionPlanOwnerLabel` resolves task ownership to a display name:
```ts
const label = getExecutionPlanOwnerLabel(task, { ceo: 'CEO', cto: 'CTO' })
// Returns the display name for agent owners, "User" for user owners,
// or the raw ownerRef as fallback
```

## Attachment Handling
`uploadComposerFiles` handles the full attachment upload pipeline -- reading file data, validating size and type, and uploading to the server:
```ts
import { uploadComposerFiles } from '@lota-sdk/ui/chat'

const fileParts = await uploadComposerFiles({
  files: composerFiles,
  uploadFile: async (file) => {
    const response = await api.uploadAttachment(file)
    return { attachment: response }
  },
  maxFileSizeBytes: 10 * 1024 * 1024, // Optional, has sensible defaults
  maxTotalSizeBytes: 50 * 1024 * 1024, // Optional, has sensible defaults
})
```

The function:
- Converts `FileUIPart` objects to `File` instances.
- Validates each file against the supported attachment types.
- Checks individual file size and total batch size limits.
- Uploads each file and returns `FileUIPart` objects with storage metadata in `providerMetadata.lota`.
Returned file parts include `attachmentStorageKey` and `attachmentSizeBytes` in their provider metadata for downstream reference.
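The size checks described above can be sketched in isolation. This is illustrative only; `uploadComposerFiles` also validates attachment types, and the package's default limits may differ from the example values used in its options:

```typescript
// Illustrative batch size validation: reject any file over the per-file
// limit, then reject the batch if the combined size exceeds the total limit.
function validateBatchSizes(
  sizes: number[],
  maxFileSizeBytes: number,
  maxTotalSizeBytes: number,
): { ok: boolean; reason?: string } {
  let total = 0
  for (const size of sizes) {
    if (size > maxFileSizeBytes) return { ok: false, reason: 'file too large' }
    total += size
  }
  if (total > maxTotalSizeBytes) return { ok: false, reason: 'batch too large' }
  return { ok: true }
}
```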