Inline UI Components

Agents can render structured UI elements during chat conversations using the show_ui_component tool. When your agent needs to display an interactive component — a connection prompt, a confirmation dialog, or a memory import flow — it calls this tool with a component type and props. Your frontend intercepts the tool call and renders the appropriate component inline in the conversation.

How it works

The show_ui_component tool is defined in UI_TOOLS. Import it and spread it alongside CONFIGURE_TOOLS when your agent has a frontend.
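Assembling the tool list is a single array spread. The sketch below uses local stand-in values so it runs on its own; in real code, CONFIGURE_TOOLS and UI_TOOLS come from the SDK import, and the tool names shown here are illustrative:

```typescript
// Illustrative stand-ins for the SDK exports — in practice these come from
// `import { CONFIGURE_TOOLS, UI_TOOLS } from 'configure'`. Names are made up.
const CONFIGURE_TOOLS = [{ name: 'search_memory' }, { name: 'store_memory' }];
const UI_TOOLS = [{ name: 'show_ui_component' }];

// Spread UI tools alongside the backend tools when the agent has a frontend;
// headless agents pass CONFIGURE_TOOLS alone.
const tools = [...CONFIGURE_TOOLS, ...UI_TOOLS];

console.log(tools.map((t) => t.name).includes('show_ui_component')); // → true
```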

Direct path (you call Claude): your agent receives show_ui_component as a tool call. You detect it client-side, render the component, and return a tool result. The call never reaches the Configure backend. The flow:
  1. Your agent receives show_ui_component in its tool list via [...CONFIGURE_TOOLS, ...UI_TOOLS]
  2. During a conversation, the LLM decides to call show_ui_component with a component_type and optional props
  3. Your frontend detects the tool call, parses it, and renders the corresponding component
  4. You return a tool result to the LLM so the conversation continues

Available components

The component_type parameter accepts the following values:

| Component | Description | Props |
| --- | --- | --- |
| connection_list | Buttons to connect integrations (Gmail, Calendar, etc.) | { connectors: ["gmail", "calendar", "notion"] } |
| single_connector | A single integration connection button | { tool: "gmail", message: "Connect your Gmail to continue" } |
| memory_card | Summary card of the user's profile and preferences | {} |
| confirmation | Confirmation dialog for a pending action | { message: "Are you sure?", confirm_label: "Yes", cancel_label: "No" } |
| memory_import | Memory import flow with export buttons for AI providers | { providers: "chatgpt,claude,gemini,grok" } |
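For concreteness, a tool_use content block carrying one of these components might look like this (the shape follows the Anthropic Messages API; the id and prop values are made up):

```typescript
// A tool_use block as the LLM might emit it, using the confirmation
// component. The id and message text are illustrative.
const toolUse = {
  type: 'tool_use',
  id: 'toolu_example',
  name: 'show_ui_component',
  input: {
    component_type: 'confirmation',
    props: {
      message: 'Cancel the meeting with Alex?',
      confirm_label: 'Yes, cancel it',
      cancel_label: 'Keep it',
    },
  },
};

console.log(toolUse.input.component_type); // → confirmation
```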

The agent is instructed to only show connection_list when the user explicitly asks to connect tools, or when a specific tool is needed to answer a question and that tool is not connected. It will not show connection prompts on greetings or casual messages.

Detecting UI tool calls

The SDK exports two utility functions for identifying and parsing UI tool calls. Use these in your tool-call handling loop to separate UI tools (which you render locally) from backend tools (which you dispatch to Configure).

isUITool(toolName) returns true if the tool name matches any tool in UI_TOOLS. Currently this is only show_ui_component, but using the function protects your code if additional UI tools are added later.

parseUIToolCall(toolName, toolInput) returns a normalized object with component (the component type string) and props (the props object), or null if the tool name is not show_ui_component.
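Their behavior can be sketched as follows — these are illustrative stand-ins matching the documented contract, not the SDK source:

```typescript
// Stand-in implementations for illustration only; import the real ones
// from 'configure'.
const UI_TOOL_NAMES = new Set(['show_ui_component']);

// True when the tool name belongs to UI_TOOLS.
function isUITool(toolName: string): boolean {
  return UI_TOOL_NAMES.has(toolName);
}

// Normalizes a show_ui_component call; null for any other tool.
function parseUIToolCall(
  toolName: string,
  toolInput: { component_type?: string; props?: Record<string, unknown> },
): { component: string; props: Record<string, unknown> } | null {
  if (toolName !== 'show_ui_component' || !toolInput.component_type) return null;
  return { component: toolInput.component_type, props: toolInput.props ?? {} };
}

console.log(isUITool('show_ui_component')); // → true
console.log(parseUIToolCall('other_tool', {})); // → null
```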

```typescript
import { isUITool, parseUIToolCall } from 'configure';

for (const block of response.content) {
  if (block.type !== 'tool_use') continue;

  if (isUITool(block.name)) {
    // Client-side — render a component, don't call the backend
    const parsed = parseUIToolCall(block.name, block.input);
    // parsed = { component: 'connection_list', props: { connectors: ['gmail', 'calendar'] } }
    if (parsed) {
      renderComponent(parsed.component, parsed.props);
    }
  } else {
    // Backend tool — dispatch to Configure
    const result = await configure.tools.execute(block.name, block.input);
    toolResults.push({ tool_use_id: block.id, content: result });
  }
}
```

Rendering components

How you render components depends on your frontend framework. The parsed result gives you a component string and a props object that you map to your UI.

```tsx
function ChatUIComponent({
  component,
  props,
}: {
  component: string;
  props: Record<string, unknown>;
}) {
  switch (component) {
    case 'connection_list':
      return <ConnectionList connectors={props.connectors as string[]} />;
    case 'single_connector':
      return (
        <SingleConnector
          tool={props.tool as string}
          message={props.message as string}
        />
      );
    case 'memory_card':
      return <MemoryCard />;
    case 'confirmation':
      return <ConfirmationDialog message={props.message as string} />;
    case 'memory_import':
      return <MemoryImport providers={props.providers as string} />;
    default:
      return null;
  }
}
```

After rendering the component and receiving user interaction (e.g., the user connects Gmail or confirms an action), return a tool result to the LLM so the conversation can continue naturally:

```typescript
if (isUITool(block.name)) {
  const parsed = parseUIToolCall(block.name, block.input);
  if (parsed) {
    renderComponent(parsed.component, parsed.props);

    toolResults.push({
      type: 'tool_result',
      tool_use_id: block.id,
      content: `Displayed ${parsed.component} component to user`,
    });
  }
}
```

The LLM will then follow up with appropriate text, such as "I've shown you the connection options above. Would you like to connect your Gmail?"
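To let the model produce that follow-up, append the assistant's tool_use content and your toolResults to the message history before the next API call. The helper below builds that history in the Messages API shape (the actual messages.create call is elided); the function name and sample content are illustrative:

```typescript
type ChatMessage = { role: 'user' | 'assistant'; content: unknown };

// Builds the follow-up history: the assistant turn that contained the
// tool_use blocks, then a user turn carrying the tool_result blocks.
// Hypothetical helper for illustration.
function withToolResults(
  history: ChatMessage[],
  assistantContent: unknown,
  toolResults: unknown[],
): ChatMessage[] {
  return [
    ...history,
    { role: 'assistant', content: assistantContent },
    { role: 'user', content: toolResults },
  ];
}

const next = withToolResults(
  [{ role: 'user', content: 'Connect my email' }],
  [{ type: 'tool_use', id: 'toolu_1', name: 'show_ui_component', input: {} }],
  [
    {
      type: 'tool_result',
      tool_use_id: 'toolu_1',
      content: 'Displayed connection_list component to user',
    },
  ],
);

console.log(next.length); // → 3
```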

Headless mode

If your agent runs in a headless environment (CLI tool, background worker, API-only service) where there is no frontend to render components, simply use CONFIGURE_TOOLS without spreading UI_TOOLS:

```typescript
import { CONFIGURE_TOOLS } from 'configure';

// CONFIGURE_TOOLS does not include show_ui_component.
// Only spread UI_TOOLS when your agent has a frontend.
const tools = CONFIGURE_TOOLS;
```

CONFIGURE_TOOLS does not include show_ui_component by default. To add UI tools for agents with a frontend, spread them alongside: [...CONFIGURE_TOOLS, ...UI_TOOLS].

Web Component tag mapping

If you render with the configure Web Components package, each component_type maps to a custom element tag:

| component_type | Web Component Tag |
| --- | --- |
| connection_list | <configure-connection-list> |
| single_connector | <configure-single-connector> |
| memory_card | <configure-memory-card> |
| confirmation | <configure-confirmation> |
| memory_import | <configure-memory-import> |
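Since the mapping is mechanical (underscores become hyphens under a configure- prefix), the tag can be derived rather than hard-coded — a sketch, assuming the Web Components bundle registers these elements:

```typescript
// connection_list → configure-connection-list, and so on for each
// component_type in the table above.
function tagFor(componentType: string): string {
  return `configure-${componentType.replace(/_/g, '-')}`;
}

console.log(tagFor('memory_import')); // → configure-memory-import
```

In the browser you would then mount the element with document.createElement(tagFor(parsed.component)) and set the parsed props as attributes.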
