UI Package

@lota-sdk/ui provides headless, React-based hooks and utilities for building chat UIs on top of the Lota SDK runtime. It contains no product styling -- only helpers for chat transport, message management, tool rendering, and session lifecycle.

Installation

```bash
bun add @lota-sdk/ui
```

Peer dependencies: react, @ai-sdk/react, ai, @tanstack/react-query, @lota-sdk/shared.

Package Structure

```
@lota-sdk/ui
  chat/          # Transport, message utilities, attachment handling
  runtime/       # React hooks for threads, sessions, and chat state
  tools/         # Tool registry and execution plan view models
```

All exports are re-exported from the package root, but you can also import from subpaths:

```ts
import { createChatTransport } from '@lota-sdk/ui/chat'
import { useThreadChat } from '@lota-sdk/ui/runtime'
import { createToolRegistry } from '@lota-sdk/ui/tools'
```

Chat Transport

Set up the chat connection using createChatTransport. This wraps the Vercel AI SDK DefaultChatTransport:

```ts
import { createChatTransport } from '@lota-sdk/ui/chat'

const transport = createChatTransport({
  api: '/api/chat',
  headers: () => ({ Authorization: `Bearer ${token}` }),
})
```

The transport is passed to chat hooks and handles HTTP streaming between client and server.

Options

createChatTransport accepts the same options as the AI SDK HttpChatTransportInitOptions:

| Field | Type | Description |
| --- | --- | --- |
| api | string | Chat API endpoint URL |
| headers | HeadersInit \| () => HeadersInit | Request headers, typically used for auth tokens |
| body | Record<string, unknown> | Extra fields merged into every request body |
| credentials | RequestCredentials | Fetch credentials mode |
| fetch | typeof fetch | Custom fetch implementation |
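Passing headers as a function is useful when auth tokens rotate, since the factory is re-invoked per request. A minimal sketch (the token store here is hypothetical, not part of the SDK):

```ts
// Hypothetical in-memory token store; a real app might read from an auth client.
let currentToken = 'initial-token'

function setToken(next: string): void {
  currentToken = next
}

// A headers *factory* is re-evaluated on every request, so a rotated token
// is picked up without rebuilding the transport.
function authHeaders(): Record<string, string> {
  return { Authorization: `Bearer ${currentToken}` }
}
```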

Chat Hooks

useThreadChat / useLotaChat

The primary chat hook. It wraps the Vercel AI SDK useChat and manages message sending, streaming, optimistic updates, stop/regenerate, and tool approval responses.

useLotaChat is an alias for useThreadChat -- they are identical.

```ts
import { useThreadChat } from '@lota-sdk/ui/runtime'

const chat = useThreadChat({
  threadId: 'workstream:abc123',
  initialMessages: [],
  transport,
  onMessagesSettled: () => refetchMessages(),
  setThreadRunningState: (isRunning) => setRunning(isRunning),
  notifyError: (msg, err) => toast.error(msg),
  stopRemoteRun: (threadId) => api.stopRun(threadId),
  experimentalThrottle: 50,
})
```

Options:

| Field | Type | Description |
| --- | --- | --- |
| threadId | string \| null | Active workstream ID. Null disables the chat. |
| initialMessages | UIMessage[] | Seed messages loaded from the server. |
| transport | ChatTransport | Transport created by createChatTransport. |
| messageMetadataSchema | Schema | Optional Zod schema for message metadata validation. |
| dataPartSchemas | Schema | Optional schemas for custom data parts. |
| onMessagesSettled | () => void | Called when streaming finishes or errors. |
| setThreadRunningState | (isRunning: boolean) => void | Callback to track whether the thread is actively streaming. |
| notifyError | (message: string, error: Error) => void | Error notification callback. |
| stopRemoteRun | (threadId: string) => Promise<unknown> | Server-side stop handler. Called alongside client-side stream abort. |
| experimentalThrottle | number | Throttle interval in ms for streaming updates. Defaults to 50. |
| sendAutomaticallyWhen | function | Condition for automatic message sending (e.g., tool approval responses). Defaults to lastAssistantMessageIsCompleteWithApprovalResponses. |
| resume | boolean | Whether to resume an in-progress stream on mount. |

Return value:

| Field | Type | Description |
| --- | --- | --- |
| messages | UIMessage[] | Current messages including streamed content. |
| status | ChatStatus | 'ready', 'streaming', 'submitted', or 'error'. |
| error | Error \| undefined | Latest error, if any. |
| clearError | () => void | Dismiss the current error. |
| sendMessage | function | Send a new user message. |
| regenerate | function | Regenerate the last assistant response. |
| stop | () => Promise<void> | Stop streaming (both client and server side). |
| setMessages | function | Imperatively replace the message list. |
| addToolApprovalResponse | function | Approve or reject a tool invocation. |

useThreadManagement

Manages workstream CRUD operations: listing, creating, archiving, unarchiving, deleting, and renaming threads. Handles auto-selection of the current thread and pagination.

```ts
import { useThreadManagement } from '@lota-sdk/ui/runtime'

const management = useThreadManagement({
  threads,
  coreThreads,
  archivedThreads,
  currentThreadId,
  selectedAgentId,
  isLoadingThreads,
  isRefetchingThreads,
  hasMoreThreads,
  isFetchingNextThreadsPage,
  fetchNextThreadsPageRaw: () => fetchNextPage(),
  createThread: () => api.createWorkstream(),
  renameThread: ({ threadId, title }) => api.rename(threadId, title),
  archiveThread: (id) => api.archive(id),
  unarchiveThread: (id) => api.unarchive(id),
  deleteThread: (id) => api.delete(id),
  setCurrentThreadId,
  setSelectedAgentId,
  defaultThreadTitle: 'New Chat',
  primaryDirectAgentId: 'chief',
})
```

Key behaviors:

  • Automatically selects the first thread (or the primary direct agent thread) when no thread is selected.
  • Reuses an existing empty group thread instead of creating duplicates.
  • On archive/delete of the current thread, switches to the next available thread.
  • Resolves placeholder thread IDs (e.g., __placeholder_chief) to real direct workstream IDs.
  • Supports infinite scroll pagination via fetchNextThreadsPage.
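The placeholder-resolution behavior can be sketched in isolation (the `__placeholder_` prefix matches the example above; the lookup map and function name are illustrative, not the hook's internals):

```ts
const PLACEHOLDER_PREFIX = '__placeholder_'

// Illustrative sketch: map a placeholder thread ID like '__placeholder_chief'
// to the real direct workstream ID for that agent, when one is known.
// Unknown placeholders and ordinary IDs pass through unchanged.
function resolveThreadId(
  threadId: string,
  directThreadIdByAgent: Record<string, string>,
): string {
  if (!threadId.startsWith(PLACEHOLDER_PREFIX)) return threadId
  const agentId = threadId.slice(PLACEHOLDER_PREFIX.length)
  return directThreadIdByAgent[agentId] ?? threadId
}
```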

Return value:

| Field | Description |
| --- | --- |
| handleCreateNewThread | Creates a new group thread or reuses an existing draft. |
| handleSwitchThread | Switches to a given thread, resolving placeholders. |
| handleRenameThread | Renames a mutable thread. |
| handleArchiveThread | Archives a thread and switches away if it was active. |
| handleUnarchiveThread | Restores an archived thread. |
| handleDeleteThread | Deletes a thread and switches away if it was active. |
| fetchNextThreadsPage | Loads the next page of threads. |

useActiveThreadSession

Composes useThreadChat and useSessionMessages into a single session manager. Handles message synchronization between streamed content and persisted server state, file uploads, validation, and loading states.

```ts
import { useActiveThreadSession } from '@lota-sdk/ui/runtime'

const session = useActiveThreadSession({
  threadId,
  thread,
  sessionMessagesQueryKey: ['messages', threadId],
  fetchLatestPage: (id) => api.getMessages(id),
  fetchOlderPage: (id, cursor) => api.getMessages(id, cursor),
  pollIntervalMs: 3000,
  pollEnabled: true,
  threadChat: { transport, messageMetadataSchema },
  buildUserMessage: ({ textContent, fileParts }) => ({ /* ... */ }),
  uploadFiles: ({ threadId, files }) => api.upload(threadId, files),
  notifyError: (msg, err) => toast.error(msg),
})
```

Return value:

| Field | Type | Description |
| --- | --- | --- |
| messages | UIMessage[] | Merged stream + persisted messages. |
| status | ChatStatus | Current chat status. |
| sendMessage | function | Validates, uploads files, and sends a message. |
| regenerate | function | Regenerates the last assistant response. |
| onStop | function | Stops the active stream. |
| threadLoadingState | RuntimeLoadingState | Granular loading state (idle, loading, ready, running). |
| hasOlderMessages | boolean | Whether more history pages are available. |
| loadOlderMessages | function | Fetches the next page of older messages. |
| addToolApprovalResponse | function | Approve or reject a pending tool call. |

useSessionMessages

Paginated message history backed by @tanstack/react-query infinite queries. Supports polling, cursor-based pagination, and deduplication.

```ts
import { useSessionMessages } from '@lota-sdk/ui/runtime'

const { messages, isLoading, hasOlderMessages, loadOlderMessages, refreshLatestPage } =
  useSessionMessages({
    sessionId: threadId,
    queryKey: ['messages', threadId],
    fetchLatestPage: (id) => api.getLatestMessages(id),
    fetchOlderPage: (id, cursor) => api.getOlderMessages(id, cursor),
    pollIntervalMs: 3000,
    pollEnabled: true,
  })
```

The fetchLatestPage and fetchOlderPage functions must return a SessionMessagesPageResponse:

```ts
interface SessionMessagesPageResponse {
  messages: unknown[]
  hasMore: boolean
  prevCursor: string | null
}
```
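As a self-contained sketch, here is an in-memory pair of page fetchers satisfying that shape, with the cursor interpreted as the index of the oldest message already returned (the interface is restated so the snippet stands alone; the store and page size are illustrative):

```ts
interface SessionMessagesPageResponse {
  messages: unknown[]
  hasMore: boolean
  prevCursor: string | null
}

const PAGE_SIZE = 2
// Illustrative message store, oldest first.
const history = ['m1', 'm2', 'm3', 'm4', 'm5']

// Latest page: the newest PAGE_SIZE messages.
async function fetchLatestPage(_sessionId: string): Promise<SessionMessagesPageResponse> {
  const start = Math.max(0, history.length - PAGE_SIZE)
  return {
    messages: history.slice(start),
    hasMore: start > 0,
    prevCursor: start > 0 ? String(start) : null,
  }
}

// Older page: the PAGE_SIZE messages before the cursor.
async function fetchOlderPage(_sessionId: string, cursor: string): Promise<SessionMessagesPageResponse> {
  const end = Number(cursor)
  const start = Math.max(0, end - PAGE_SIZE)
  return {
    messages: history.slice(start, end),
    hasMore: start > 0,
    prevCursor: start > 0 ? String(start) : null,
  }
}
```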

Message Utilities

messages.ts

Helpers for normalizing and merging message arrays:

```ts
import { normalizeChatMessages, mergeChatMessages } from '@lota-sdk/ui/chat'

// Sort messages by createdAt timestamp, ensuring consistent ordering
const sorted = normalizeChatMessages(rawMessages)

// Merge multiple message lists, deduplicating by ID and sorting
const merged = mergeChatMessages(streamedMessages, persistedMessages)
```
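The merge semantics can be illustrated with a standalone sketch. This is not the library implementation; it assumes later lists win on duplicate IDs and that ordering is by createdAt, per the comments above:

```ts
interface SimpleMessage {
  id: string
  createdAt: number
  text: string
}

// Sketch of the documented semantics: deduplicate by ID (later lists
// override earlier ones -- an assumption), then sort by createdAt.
function mergeById(...lists: SimpleMessage[][]): SimpleMessage[] {
  const byId = new Map<string, SimpleMessage>()
  for (const list of lists) {
    for (const message of list) byId.set(message.id, message)
  }
  return [...byId.values()].sort((a, b) => a.createdAt - b.createdAt)
}
```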

message-parts.ts

Helpers for iterating and grouping AI SDK message parts:

```ts
import {
  readToolPart,
  consumeReasoningGroup,
  consumeTextGroup,
} from '@lota-sdk/ui/chat'

// Extract a tool part from a message part (returns null if not a tool part)
const toolPart = readToolPart(part)

// Group consecutive reasoning parts into a single block
const reasoning = consumeReasoningGroup(entries, startIndex, isStreaming)
// Returns: { nextEntryIndex, stablePartIndex, text, isStreaming }

// Group consecutive text parts into a single block
const textGroup = consumeTextGroup(entries, startIndex, { sanitizeText })
// Returns: { nextEntryIndex, stablePartIndex, text }
```

These are useful when rendering message content where consecutive text or reasoning parts should be visually grouped.
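The grouping idea behind these helpers can be sketched independently: scan forward from a start index and collapse consecutive parts of one type into a single block, reporting where the scan stopped. The shapes and names below are illustrative, not the library's:

```ts
interface Part {
  type: 'text' | 'reasoning' | 'tool'
  content: string
}

// Collapse consecutive parts of the given type, starting at startIndex,
// into one text block; nextIndex is where the caller should resume.
function consumeGroup(
  parts: Part[],
  startIndex: number,
  type: Part['type'],
): { nextIndex: number; text: string } {
  let i = startIndex
  const chunks: string[] = []
  while (i < parts.length && parts[i].type === type) {
    chunks.push(parts[i].content)
    i++
  }
  return { nextIndex: i, text: chunks.join('') }
}
```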

Tool Registry

Register tool UI components for rendering tool invocations in chat:

```ts
import { createToolRegistry } from '@lota-sdk/ui/tools'

const registry = createToolRegistry({
  userQuestions: { component: UserQuestionsView, renderMode: 'standalone' },
  consultTeam: { component: ConsultTeamView, renderMode: 'framed' },
  createExecutionPlan: { component: PlanView, renderMode: 'standalone' },
})
```

Render Modes

| Mode | Description |
| --- | --- |
| framed | The tool UI is rendered inside a container frame provided by the host. This is the default when renderMode is not specified. Suitable for compact tool views that fit within the chat message flow. |
| standalone | The tool UI manages its own layout and takes full width. Use for complex tools like execution plans or multi-step forms that need more space. |

Registry API

```ts
// Check if a tool has a registered UI component
registry.isRegisteredTool('userQuestions') // true

// Get the component for a tool
const Component = registry.getToolComponent('userQuestions')

// Resolve the render mode for a tool (defaults to 'framed')
const mode = registry.resolveToolRenderMode('consultTeam') // 'framed'
```
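When rendering a tool part, a host typically branches on the resolved mode. A sketch against a registry-shaped stub (the stub stands in for the object createToolRegistry returns; the wrapper names are illustrative):

```ts
type RenderMode = 'framed' | 'standalone'

// Stand-in for the registry surface used here.
interface RegistryLike {
  isRegisteredTool(name: string): boolean
  resolveToolRenderMode(name: string): RenderMode
}

// Decide how the host wraps a tool's UI: 'framed' tools get the host's
// container frame, 'standalone' tools render unwrapped, and unregistered
// tools fall back to a generic view.
function wrapperFor(registry: RegistryLike, toolName: string): 'frame' | 'none' | 'fallback' {
  if (!registry.isRegisteredTool(toolName)) return 'fallback'
  return registry.resolveToolRenderMode(toolName) === 'framed' ? 'frame' : 'none'
}
```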

Execution Plan View Models

Helpers for rendering execution plan tool results:

```ts
import {
  buildExecutionPlanToolViewModel,
  getExecutionPlanStatusLabel,
  getExecutionPlanOwnerLabel,
  getExecutionPlanActionLabel,
  getLatestExecutionPlanResult,
} from '@lota-sdk/ui/tools'
```

buildExecutionPlanToolViewModel

Builds a complete view model from raw tool input/output:

```ts
const vm = buildExecutionPlanToolViewModel({
  input: toolInvocation.args,
  output: toolInvocation.result,
  isRunning: status === 'streaming',
})

// vm.title         — Plan title (from result or input fallback)
// vm.actionLabel   — e.g. "Execution plan created", "Updating execution plan"
// vm.statusLabel   — e.g. "Draft", "Executing", "Completed"
// vm.progressSummary — e.g. "3/7 complete"
// vm.hasPlan       — Whether a plan exists in the result
// vm.plan          — The full plan object (or null)
// vm.latestEvent   — Most recent event message
```

Status Labels

| Status | Label |
| --- | --- |
| draft | Draft |
| executing | Executing |
| blocked | Blocked |
| completed | Completed |
| aborted | Aborted |
| (other) | Idle |
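The mapping above can be sketched as a plain function (a local stand-in mirroring the table, not the library's getExecutionPlanStatusLabel):

```ts
// Local stand-in mirroring the status-to-label table above;
// any unrecognized status falls back to 'Idle'.
function statusLabel(status: string): string {
  switch (status) {
    case 'draft': return 'Draft'
    case 'executing': return 'Executing'
    case 'blocked': return 'Blocked'
    case 'completed': return 'Completed'
    case 'aborted': return 'Aborted'
    default: return 'Idle'
  }
}
```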

Owner Labels

getExecutionPlanOwnerLabel resolves task ownership to a display name:

```ts
const label = getExecutionPlanOwnerLabel(task, { ceo: 'CEO', cto: 'CTO' })
// Returns the display name for agent owners, "User" for user owners,
// or the raw ownerRef as fallback
```

Attachment Handling

uploadComposerFiles handles the full attachment upload pipeline -- reading file data, validating size and type, and uploading to the server:

```ts
import { uploadComposerFiles } from '@lota-sdk/ui/chat'

const fileParts = await uploadComposerFiles({
  files: composerFiles,
  uploadFile: async (file) => {
    const response = await api.uploadAttachment(file)
    return { attachment: response }
  },
  maxFileSizeBytes: 10 * 1024 * 1024,  // Optional, has sensible defaults
  maxTotalSizeBytes: 50 * 1024 * 1024,  // Optional, has sensible defaults
})
```

The function:

  1. Converts FileUIPart objects to File instances.
  2. Validates each file against the supported attachment types.
  3. Checks individual file size and total batch size limits.
  4. Uploads each file and returns FileUIPart objects with storage metadata in providerMetadata.lota.

Returned file parts include attachmentStorageKey and attachmentSizeBytes in their provider metadata for downstream reference.
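Downstream code can read that storage metadata back off each returned part. A sketch, with the part shape abbreviated to just the fields documented above:

```ts
// Abbreviated shape of a returned file part; only the documented
// providerMetadata.lota fields are modeled here.
interface UploadedFilePart {
  filename: string
  providerMetadata?: {
    lota?: {
      attachmentStorageKey?: string
      attachmentSizeBytes?: number
    }
  }
}

// Pull the storage key for a persisted attachment, if present.
function storageKeyOf(part: UploadedFilePart): string | null {
  return part.providerMetadata?.lota?.attachmentStorageKey ?? null
}
```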