RPC Mode

RPC mode enables headless operation of the coding agent via a JSON protocol over stdin/stdout. This is useful for embedding the agent in other applications, IDEs, or custom UIs.

Note for Node.js/TypeScript users: If you're building a Node.js application, consider using AgentSession directly from @mariozechner/pi-coding-agent instead of spawning a subprocess. See src/core/agent-session.ts for the API. For a subprocess-based TypeScript client, see src/modes/rpc/rpc-client.ts.

Starting RPC Mode

pi --mode rpc [options]

Common options:

  • --provider <name>: Set the LLM provider (anthropic, openai, google, etc.)
  • --model <pattern>: Model pattern or ID (supports provider/id and optional :<thinking>)
  • --no-session: Disable session persistence
  • --session-dir <path>: Custom session storage directory

Protocol Overview

  • Commands: JSON objects sent to stdin, one per line
  • Responses: JSON objects with type: "response" indicating command success/failure
  • Events: Agent events streamed to stdout as JSON lines

All commands support an optional id field for request/response correlation. If provided, the corresponding response will include the same id.
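For instance, a client can assign ids and buffer any events that arrive before the matching response. A minimal sketch (the make_command and wait_for_response helpers are hypothetical, not part of the protocol):

```python
import itertools
import json

_ids = itertools.count(1)

def make_command(cmd_type, **fields):
    """Build a command dict with a unique id for response correlation."""
    return {"id": f"req-{next(_ids)}", "type": cmd_type, **fields}

def wait_for_response(lines, request_id):
    """Scan stdout lines, buffering streamed events until the response
    carrying the matching id arrives."""
    events = []
    for line in lines:
        msg = json.loads(line)
        if msg.get("type") == "response" and msg.get("id") == request_id:
            return msg, events
        events.append(msg)
    raise EOFError("stream ended before the response arrived")
```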

Commands

Prompting

prompt

Send a user prompt to the agent. Returns immediately; events stream asynchronously.

{"id": "req-1", "type": "prompt", "message": "Hello, world!"}

With images:

{"type": "prompt", "message": "What's in this image?", "images": [{"type": "image", "data": "base64-encoded-data", "mimeType": "image/png"}]}

During streaming: If the agent is already streaming, you must specify streamingBehavior to queue the message:

{"type": "prompt", "message": "New instruction", "streamingBehavior": "steer"}
  • "steer": Interrupt the agent mid-run. Message is delivered after current tool execution, remaining tools are skipped.
  • "followUp": Wait until the agent finishes. Message is delivered only when agent stops.

If the agent is streaming and no streamingBehavior is specified, the command returns an error.

Extension commands: If the message is an extension command (e.g., /mycommand), it executes immediately even during streaming. Extension commands manage their own LLM interaction via pi.sendMessage().

Input expansion: Skill commands (/skill:name) and prompt templates (/template) are expanded before sending/queueing.

Response:

{"id": "req-1", "type": "response", "command": "prompt", "success": true}

The images field is optional. Each image uses ImageContent format: {"type": "image", "data": "base64-encoded-data", "mimeType": "image/png"}.
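Building the image payload from file bytes is straightforward; a sketch (the image_prompt helper is illustrative, not part of the protocol):

```python
import base64
import json

def image_prompt(message, image_bytes, mime_type="image/png"):
    """Build a prompt command line with one base64-encoded ImageContent."""
    return json.dumps({
        "type": "prompt",
        "message": message,
        "images": [{
            "type": "image",
            "data": base64.b64encode(image_bytes).decode("ascii"),
            "mimeType": mime_type,
        }],
    })
```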

steer

Queue a steering message to interrupt the agent mid-run. The message is delivered after the current tool execution; remaining tools are skipped. Skill commands and prompt templates are expanded. Extension commands are not allowed (use prompt instead).

{"type": "steer", "message": "Stop and do this instead"}

With images:

{"type": "steer", "message": "Look at this instead", "images": [{"type": "image", "data": "base64-encoded-data", "mimeType": "image/png"}]}

The images field is optional. Each image uses ImageContent format (same as prompt).

Response:

{"type": "response", "command": "steer", "success": true}

See set_steering_mode for controlling how steering messages are processed.

follow_up

Queue a follow-up message to be processed after the agent finishes. The message is delivered only when the agent has no more tool calls or steering messages to process. Skill commands and prompt templates are expanded. Extension commands are not allowed (use prompt instead).

{"type": "follow_up", "message": "After you're done, also do this"}

With images:

{"type": "follow_up", "message": "Also check this image", "images": [{"type": "image", "data": "base64-encoded-data", "mimeType": "image/png"}]}

The images field is optional. Each image uses ImageContent format (same as prompt).

Response:

{"type": "response", "command": "follow_up", "success": true}

See set_follow_up_mode for controlling how follow-up messages are processed.

abort

Abort the current agent operation.

{"type": "abort"}

Response:

{"type": "response", "command": "abort", "success": true}

new_session

Start a fresh session. Can be cancelled by a session_before_switch extension event handler.

{"type": "new_session"}

With optional parent session tracking:

{"type": "new_session", "parentSession": "/path/to/parent-session.jsonl"}

Response:

{"type": "response", "command": "new_session", "success": true, "data": {"cancelled": false}}

If an extension cancelled:

{"type": "response", "command": "new_session", "success": true, "data": {"cancelled": true}}

State

get_state

Get current session state.

{"type": "get_state"}

Response:

{
  "type": "response",
  "command": "get_state",
  "success": true,
  "data": {
    "model": {...},
    "thinkingLevel": "medium",
    "isStreaming": false,
    "isCompacting": false,
    "steeringMode": "all",
    "followUpMode": "one-at-a-time",
    "sessionFile": "/path/to/session.jsonl",
    "sessionId": "abc123",
    "sessionName": "my-feature-work",
    "autoCompactionEnabled": true,
    "messageCount": 5,
    "pendingMessageCount": 0
  }
}

The model field is a full Model object or null. The sessionName field is the display name set via set_session_name, or omitted if not set.

get_messages

Get all messages in the conversation.

{"type": "get_messages"}

Response:

{
  "type": "response",
  "command": "get_messages",
  "success": true,
  "data": {"messages": [...]}
}

Messages are AgentMessage objects (see Message Types).

Model

set_model

Switch to a specific model.

{"type": "set_model", "provider": "anthropic", "modelId": "claude-sonnet-4-20250514"}

Response contains the full Model object:

{
  "type": "response",
  "command": "set_model",
  "success": true,
  "data": {...}
}

cycle_model

Cycle to the next available model. Returns null data if only one model is available.

{"type": "cycle_model"}

Response:

{
  "type": "response",
  "command": "cycle_model",
  "success": true,
  "data": {
    "model": {...},
    "thinkingLevel": "medium",
    "isScoped": false
  }
}

The model field is a full Model object.

get_available_models

List all configured models.

{"type": "get_available_models"}

Response contains an array of full Model objects:

{
  "type": "response",
  "command": "get_available_models",
  "success": true,
  "data": {
    "models": [...]
  }
}

Thinking

set_thinking_level

Set the reasoning/thinking level for models that support it.

{"type": "set_thinking_level", "level": "high"}

Levels: "off", "minimal", "low", "medium", "high", "xhigh"

Note: "xhigh" is only supported by OpenAI codex-max models.

Response:

{"type": "response", "command": "set_thinking_level", "success": true}

cycle_thinking_level

Cycle through available thinking levels. Returns null data if the model doesn't support thinking.

{"type": "cycle_thinking_level"}

Response:

{
  "type": "response",
  "command": "cycle_thinking_level",
  "success": true,
  "data": {"level": "high"}
}

Queue Modes

set_steering_mode

Control how steering messages (from steer) are delivered.

{"type": "set_steering_mode", "mode": "one-at-a-time"}

Modes:

  • "all": Deliver all steering messages at the next interruption point
  • "one-at-a-time": Deliver one steering message per interruption (default)

Response:

{"type": "response", "command": "set_steering_mode", "success": true}

set_follow_up_mode

Control how follow-up messages (from follow_up) are delivered.

{"type": "set_follow_up_mode", "mode": "one-at-a-time"}

Modes:

  • "all": Deliver all follow-up messages when agent finishes
  • "one-at-a-time": Deliver one follow-up message per agent completion (default)

Response:

{"type": "response", "command": "set_follow_up_mode", "success": true}

Compaction

compact

Manually compact conversation context to reduce token usage.

{"type": "compact"}

With custom instructions:

{"type": "compact", "customInstructions": "Focus on code changes"}

Response:

{
  "type": "response",
  "command": "compact",
  "success": true,
  "data": {
    "summary": "Summary of conversation...",
    "firstKeptEntryId": "abc123",
    "tokensBefore": 150000,
    "details": {}
  }
}

set_auto_compaction

Enable or disable automatic compaction when context is nearly full.

{"type": "set_auto_compaction", "enabled": true}

Response:

{"type": "response", "command": "set_auto_compaction", "success": true}

Retry

set_auto_retry

Enable or disable automatic retry on transient errors (overloaded, rate limit, 5xx).

{"type": "set_auto_retry", "enabled": true}

Response:

{"type": "response", "command": "set_auto_retry", "success": true}

abort_retry

Abort an in-progress retry (cancel the delay and stop retrying).

{"type": "abort_retry"}

Response:

{"type": "response", "command": "abort_retry", "success": true}

Bash

bash

Execute a shell command and add output to conversation context.

{"type": "bash", "command": "ls -la"}

Response:

{
  "type": "response",
  "command": "bash",
  "success": true,
  "data": {
    "output": "total 48\ndrwxr-xr-x ...",
    "exitCode": 0,
    "cancelled": false,
    "truncated": false
  }
}

If output was truncated, includes fullOutputPath:

{
  "type": "response",
  "command": "bash",
  "success": true,
  "data": {
    "output": "truncated output...",
    "exitCode": 0,
    "cancelled": false,
    "truncated": true,
    "fullOutputPath": "/tmp/pi-bash-abc123.log"
  }
}

How bash results reach the LLM:

The bash command executes immediately and returns a BashResult. Internally, a BashExecutionMessage is created and stored in the agent's message state. This message does NOT emit an event.

When the next prompt command is sent, all messages (including BashExecutionMessage) are transformed before being sent to the LLM. The BashExecutionMessage is converted to a UserMessage with this format:

Ran `ls -la`
```
total 48
drwxr-xr-x ...
```

This means:

  1. Bash output is included in the LLM context on the next prompt, not immediately
  2. Multiple bash commands can be executed before a prompt; all outputs will be included
  3. No event is emitted for the BashExecutionMessage itself
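The flow above can be sketched as a command sequence: each bash command executes immediately, but a single prompt then carries all of their outputs into the LLM context at once (bash_then_prompt is a hypothetical helper):

```python
import json

def bash_then_prompt(commands, message):
    """Yield RPC command lines: each bash command executes immediately,
    but its output enters the LLM context only with the final prompt."""
    for cmd in commands:
        yield json.dumps({"type": "bash", "command": cmd})
    yield json.dumps({"type": "prompt", "message": message})
```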

abort_bash

Abort a running bash command.

{"type": "abort_bash"}

Response:

{"type": "response", "command": "abort_bash", "success": true}

Session

get_session_stats

Get token usage and cost statistics.

{"type": "get_session_stats"}

Response:

{
  "type": "response",
  "command": "get_session_stats",
  "success": true,
  "data": {
    "sessionFile": "/path/to/session.jsonl",
    "sessionId": "abc123",
    "userMessages": 5,
    "assistantMessages": 5,
    "toolCalls": 12,
    "toolResults": 12,
    "totalMessages": 22,
    "tokens": {
      "input": 50000,
      "output": 10000,
      "cacheRead": 40000,
      "cacheWrite": 5000,
      "total": 105000
    },
    "cost": 0.45
  }
}

export_html

Export session to an HTML file.

{"type": "export_html"}

With custom path:

{"type": "export_html", "outputPath": "/tmp/session.html"}

Response:

{
  "type": "response",
  "command": "export_html",
  "success": true,
  "data": {"path": "/tmp/session.html"}
}

switch_session

Load a different session file. Can be cancelled by a session_before_switch extension event handler.

{"type": "switch_session", "sessionPath": "/path/to/session.jsonl"}

Response:

{"type": "response", "command": "switch_session", "success": true, "data": {"cancelled": false}}

If an extension cancelled the switch:

{"type": "response", "command": "switch_session", "success": true, "data": {"cancelled": true}}

fork

Create a new fork from a previous user message. Can be cancelled by a session_before_fork extension event handler. Returns the text of the message being forked from.

{"type": "fork", "entryId": "abc123"}

Response:

{
  "type": "response",
  "command": "fork",
  "success": true,
  "data": {"text": "The original prompt text...", "cancelled": false}
}

If an extension cancelled the fork:

{
  "type": "response",
  "command": "fork",
  "success": true,
  "data": {"text": "The original prompt text...", "cancelled": true}
}

get_fork_messages

Get user messages available for forking.

{"type": "get_fork_messages"}

Response:

{
  "type": "response",
  "command": "get_fork_messages",
  "success": true,
  "data": {
    "messages": [
      {"entryId": "abc123", "text": "First prompt..."},
      {"entryId": "def456", "text": "Second prompt..."}
    ]
  }
}

get_last_assistant_text

Get the text content of the last assistant message.

{"type": "get_last_assistant_text"}

Response:

{
  "type": "response",
  "command": "get_last_assistant_text",
  "success": true,
  "data": {"text": "The assistant's response..."}
}

Returns {"text": null} if no assistant messages exist.

set_session_name

Set a display name for the current session. The name appears in session listings and helps identify sessions.

{"type": "set_session_name", "name": "my-feature-work"}

Response:

{
  "type": "response",
  "command": "set_session_name",
  "success": true
}

The current session name is available via get_state in the sessionName field.

Commands

get_commands

Get available commands (extension commands, prompt templates, and skills). These can be invoked via the prompt command by prefixing with /.

{"type": "get_commands"}

Response:

{
  "type": "response",
  "command": "get_commands",
  "success": true,
  "data": {
    "commands": [
      {"name": "session-name", "description": "Set or clear session name", "source": "extension", "path": "/home/user/.pi/agent/extensions/session.ts"},
      {"name": "fix-tests", "description": "Fix failing tests", "source": "prompt", "location": "project", "path": "/home/user/myproject/.pi/agent/prompts/fix-tests.md"},
      {"name": "skill:brave-search", "description": "Web search via Brave API", "source": "skill", "location": "user", "path": "/home/user/.pi/agent/skills/brave-search/SKILL.md"}
    ]
  }
}

Each command has:

  • name: Command name (invoke with /name)
  • description: Human-readable description (optional for extension commands)
  • source: What kind of command:
    • "extension": Registered via pi.registerCommand() in an extension
    • "prompt": Loaded from a prompt template .md file
    • "skill": Loaded from a skill directory (name is prefixed with skill:)
  • location: Where it was loaded from (optional, not present for extensions):
    • "user": User-level (~/.pi/agent/)
    • "project": Project-level (./.pi/agent/)
    • "path": Explicit path via CLI or settings
  • path: Absolute file path to the command source (optional)

Note: Built-in TUI commands (/settings, /hotkeys, etc.) are not included. They are handled only in interactive mode and would not execute if sent via prompt.

Events

Events are streamed to stdout as JSON lines during agent operation. Events do NOT include an id field (only responses do).
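A client therefore needs to route each stdout line by its type field; roughly (demux is a hypothetical helper):

```python
import json

def demux(lines, on_event, on_response):
    """Route stdout lines: type "response" objects answer commands (and may
    carry an id); every other object is a streamed agent event."""
    for line in lines:
        msg = json.loads(line)
        if msg.get("type") == "response":
            on_response(msg)
        else:
            on_event(msg)
```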

Event Types

  • agent_start: Agent begins processing
  • agent_end: Agent completes (includes all generated messages)
  • turn_start: New turn begins
  • turn_end: Turn completes (includes assistant message and tool results)
  • message_start: Message begins
  • message_update: Streaming update (text/thinking/toolcall deltas)
  • message_end: Message completes
  • tool_execution_start: Tool begins execution
  • tool_execution_update: Tool execution progress (streaming output)
  • tool_execution_end: Tool completes
  • auto_compaction_start: Auto-compaction begins
  • auto_compaction_end: Auto-compaction completes
  • auto_retry_start: Auto-retry begins (after transient error)
  • auto_retry_end: Auto-retry completes (success or final failure)
  • extension_error: Extension threw an error

agent_start

Emitted when the agent begins processing a prompt.

{"type": "agent_start"}

agent_end

Emitted when the agent completes. Contains all messages generated during this run.

{
  "type": "agent_end",
  "messages": [...]
}

turn_start / turn_end

A turn consists of one assistant response plus any resulting tool calls and results.

{"type": "turn_start"}
{
  "type": "turn_end",
  "message": {...},
  "toolResults": [...]
}

message_start / message_end

Emitted when a message begins and completes. The message field contains an AgentMessage.

{"type": "message_start", "message": {...}}
{"type": "message_end", "message": {...}}

message_update (Streaming)

Emitted during streaming of assistant messages. Contains both the partial message and a streaming delta event.

{
  "type": "message_update",
  "message": {...},
  "assistantMessageEvent": {
    "type": "text_delta",
    "contentIndex": 0,
    "delta": "Hello ",
    "partial": {...}
  }
}

The assistantMessageEvent field contains one of these delta types:

  • start: Message generation started
  • text_start: Text content block started
  • text_delta: Text content chunk
  • text_end: Text content block ended
  • thinking_start: Thinking block started
  • thinking_delta: Thinking content chunk
  • thinking_end: Thinking block ended
  • toolcall_start: Tool call started
  • toolcall_delta: Tool call arguments chunk
  • toolcall_end: Tool call ended (includes full toolCall object)
  • done: Message complete (reason: "stop", "length", "toolUse")
  • error: Error occurred (reason: "aborted", "error")

Example streaming a text response:

{"type":"message_update","message":{...},"assistantMessageEvent":{"type":"text_start","contentIndex":0,"partial":{...}}}
{"type":"message_update","message":{...},"assistantMessageEvent":{"type":"text_delta","contentIndex":0,"delta":"Hello","partial":{...}}}
{"type":"message_update","message":{...},"assistantMessageEvent":{"type":"text_delta","contentIndex":0,"delta":" world","partial":{...}}}
{"type":"message_update","message":{...},"assistantMessageEvent":{"type":"text_end","contentIndex":0,"content":"Hello world","partial":{...}}}

tool_execution_start / tool_execution_update / tool_execution_end

Emitted when a tool begins, streams progress, and completes execution.

{
  "type": "tool_execution_start",
  "toolCallId": "call_abc123",
  "toolName": "bash",
  "args": {"command": "ls -la"}
}

During execution, tool_execution_update events stream partial results (e.g., bash output as it arrives):

{
  "type": "tool_execution_update",
  "toolCallId": "call_abc123",
  "toolName": "bash",
  "args": {"command": "ls -la"},
  "partialResult": {
    "content": [{"type": "text", "text": "partial output so far..."}],
    "details": {"truncation": null, "fullOutputPath": null}
  }
}

When complete:

{
  "type": "tool_execution_end",
  "toolCallId": "call_abc123",
  "toolName": "bash",
  "result": {
    "content": [{"type": "text", "text": "total 48\n..."}],
    "details": {...}
  },
  "isError": false
}

Use toolCallId to correlate events. The partialResult in tool_execution_update contains the accumulated output so far (not just the delta), allowing clients to simply replace their display on each update.
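A client can therefore keep one display string per toolCallId and overwrite it on every update; a sketch (track_tool_output is hypothetical):

```python
def track_tool_output(events):
    """Keep the latest output per toolCallId. partialResult is cumulative,
    so each update replaces the previous text rather than appending."""
    display = {}
    for ev in events:
        if ev["type"] == "tool_execution_update":
            content = ev["partialResult"]["content"]
        elif ev["type"] == "tool_execution_end":
            content = ev["result"]["content"]
        else:
            continue
        display[ev["toolCallId"]] = "".join(
            c["text"] for c in content if c.get("type") == "text"
        )
    return display
```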

auto_compaction_start / auto_compaction_end

Emitted when automatic compaction runs (when context is nearly full).

{"type": "auto_compaction_start", "reason": "threshold"}

The reason field is "threshold" (context getting large) or "overflow" (context exceeded limit).

{
  "type": "auto_compaction_end",
  "result": {
    "summary": "Summary of conversation...",
    "firstKeptEntryId": "abc123",
    "tokensBefore": 150000,
    "details": {}
  },
  "aborted": false,
  "willRetry": false
}

If reason was "overflow" and compaction succeeds, willRetry is true and the agent will automatically retry the prompt.

If compaction was aborted, result is null and aborted is true.

If compaction failed (e.g., API quota exceeded), result is null, aborted is false, and errorMessage contains the error description.

auto_retry_start / auto_retry_end

Emitted when automatic retry is triggered after a transient error (overloaded, rate limit, 5xx).

{
  "type": "auto_retry_start",
  "attempt": 1,
  "maxAttempts": 3,
  "delayMs": 2000,
  "errorMessage": "529 {\"type\":\"error\",\"error\":{\"type\":\"overloaded_error\",\"message\":\"Overloaded\"}}"
}
{
  "type": "auto_retry_end",
  "success": true,
  "attempt": 2
}

On final failure (max retries exceeded):

{
  "type": "auto_retry_end",
  "success": false,
  "attempt": 3,
  "finalError": "529 overloaded_error: Overloaded"
}

extension_error

Emitted when an extension throws an error.

{
  "type": "extension_error",
  "extensionPath": "/path/to/extension.ts",
  "event": "tool_call",
  "error": "Error message..."
}

Extension UI Protocol

Extensions can request user interaction via ctx.ui.select(), ctx.ui.confirm(), etc. In RPC mode, these are translated into a request/response sub-protocol on top of the base command/event flow.

There are two categories of extension UI methods:

  • Dialog methods (select, confirm, input, editor): emit an extension_ui_request on stdout and block until the client sends back an extension_ui_response on stdin with the matching id.
  • Fire-and-forget methods (notify, setStatus, setWidget, setTitle, set_editor_text): emit an extension_ui_request on stdout but do not expect a response. The client can display the information or ignore it.

If a dialog method includes a timeout field, the agent side auto-resolves with a default value when the timeout expires. The client does not need to track timeouts.

Some ExtensionUIContext methods are not supported or degraded in RPC mode because they require direct TUI access:

  • custom() returns undefined
  • setWorkingMessage(), setFooter(), setHeader(), setEditorComponent(), setToolsExpanded() are no-ops
  • getEditorText() returns ""
  • getToolsExpanded() returns false
  • pasteToEditor() delegates to setEditorText() (no paste/collapse handling)
  • getAllThemes() returns []
  • getTheme() returns undefined
  • setTheme() returns { success: false, error: "..." }

Note: ctx.hasUI is true in RPC mode because the dialog and fire-and-forget methods are functional via the extension UI sub-protocol.

Extension UI Requests (stdout)

All requests have type: "extension_ui_request", a unique id, and a method field.

select

Prompt the user to choose from a list. Dialog methods with a timeout field include the timeout in milliseconds; the agent auto-resolves with undefined if the client doesn't respond in time.

{
  "type": "extension_ui_request",
  "id": "uuid-1",
  "method": "select",
  "title": "Allow dangerous command?",
  "options": ["Allow", "Block"],
  "timeout": 10000
}

Expected response: extension_ui_response with value (the selected option string) or cancelled: true.

confirm

Prompt the user for yes/no confirmation.

{
  "type": "extension_ui_request",
  "id": "uuid-2",
  "method": "confirm",
  "title": "Clear session?",
  "message": "All messages will be lost.",
  "timeout": 5000
}

Expected response: extension_ui_response with confirmed: true/false or cancelled: true.

input

Prompt the user for free-form text.

{
  "type": "extension_ui_request",
  "id": "uuid-3",
  "method": "input",
  "title": "Enter a value",
  "placeholder": "type something..."
}

Expected response: extension_ui_response with value (the entered text) or cancelled: true.

editor

Open a multi-line text editor with optional prefilled content.

{
  "type": "extension_ui_request",
  "id": "uuid-4",
  "method": "editor",
  "title": "Edit some text",
  "prefill": "Line 1\nLine 2\nLine 3"
}

Expected response: extension_ui_response with value (the edited text) or cancelled: true.

notify

Display a notification. Fire-and-forget, no response expected.

{
  "type": "extension_ui_request",
  "id": "uuid-5",
  "method": "notify",
  "message": "Command blocked by user",
  "notifyType": "warning"
}

The notifyType field is "info", "warning", or "error". Defaults to "info" if omitted.

setStatus

Set or clear a status entry in the footer/status bar. Fire-and-forget.

{
  "type": "extension_ui_request",
  "id": "uuid-6",
  "method": "setStatus",
  "statusKey": "my-ext",
  "statusText": "Turn 3 running..."
}

Send statusText: undefined (or omit it) to clear the status entry for that key.

setWidget

Set or clear a widget (block of text lines) displayed above or below the editor. Fire-and-forget.

{
  "type": "extension_ui_request",
  "id": "uuid-7",
  "method": "setWidget",
  "widgetKey": "my-ext",
  "widgetLines": ["--- My Widget ---", "Line 1", "Line 2"],
  "widgetPlacement": "aboveEditor"
}

Send widgetLines: undefined (or omit it) to clear the widget. The widgetPlacement field is "aboveEditor" (default) or "belowEditor". Only string arrays are supported in RPC mode; component factories are ignored.

setTitle

Set the terminal window/tab title. Fire-and-forget.

{
  "type": "extension_ui_request",
  "id": "uuid-8",
  "method": "setTitle",
  "title": "pi - my project"
}

set_editor_text

Set the text in the input editor. Fire-and-forget.

{
  "type": "extension_ui_request",
  "id": "uuid-9",
  "method": "set_editor_text",
  "text": "prefilled text for the user"
}

Extension UI Responses (stdin)

Responses are sent for dialog methods only (select, confirm, input, editor). The id must match the request.

Value response (select, input, editor)

{"type": "extension_ui_response", "id": "uuid-1", "value": "Allow"}

Confirmation response (confirm)

{"type": "extension_ui_response", "id": "uuid-2", "confirmed": true}

Cancellation response (any dialog)

Dismiss any dialog method. The extension receives undefined (for select/input/editor) or false (for confirm).

{"type": "extension_ui_response", "id": "uuid-3", "cancelled": true}

Error Handling

Failed commands return a response with success: false:

{
  "type": "response",
  "command": "set_model",
  "success": false,
  "error": "Model not found: invalid/model"
}

Parse errors:

{
  "type": "response",
  "command": "parse",
  "success": false,
  "error": "Failed to parse command: Unexpected token..."
}
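A thin wrapper can turn failed responses into exceptions (RPCError and check_response are illustrative helpers):

```python
import json

class RPCError(Exception):
    """Raised when a response reports success: false."""

def check_response(line):
    """Parse a response line and raise on failure."""
    resp = json.loads(line)
    if resp.get("type") == "response" and not resp.get("success", False):
        raise RPCError(f"{resp.get('command')}: {resp.get('error')}")
    return resp
```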

Types

Source files:

  • packages/ai/src/types.ts - Model, UserMessage, AssistantMessage, ToolResultMessage
  • packages/agent/src/types.ts - AgentMessage, AgentEvent
  • src/core/messages.ts - BashExecutionMessage
  • src/modes/rpc/rpc-types.ts - RPC command/response types, extension UI request/response types

Model

{
  "id": "claude-sonnet-4-20250514",
  "name": "Claude Sonnet 4",
  "api": "anthropic-messages",
  "provider": "anthropic",
  "baseUrl": "https://api.anthropic.com",
  "reasoning": true,
  "input": ["text", "image"],
  "contextWindow": 200000,
  "maxTokens": 16384,
  "cost": {
    "input": 3.0,
    "output": 15.0,
    "cacheRead": 0.3,
    "cacheWrite": 3.75
  }
}

UserMessage

{
  "role": "user",
  "content": "Hello!",
  "timestamp": 1733234567890,
  "attachments": []
}

The content field can be a string or an array of TextContent/ImageContent blocks.

AssistantMessage

{
  "role": "assistant",
  "content": [
    {"type": "text", "text": "Hello! How can I help?"},
    {"type": "thinking", "thinking": "User is greeting me..."},
    {"type": "toolCall", "id": "call_123", "name": "bash", "arguments": {"command": "ls"}}
  ],
  "api": "anthropic-messages",
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "usage": {
    "input": 100,
    "output": 50,
    "cacheRead": 0,
    "cacheWrite": 0,
    "cost": {"input": 0.0003, "output": 0.00075, "cacheRead": 0, "cacheWrite": 0, "total": 0.00105}
  },
  "stopReason": "stop",
  "timestamp": 1733234567890
}

Stop reasons: "stop", "length", "toolUse", "error", "aborted"

ToolResultMessage

{
  "role": "toolResult",
  "toolCallId": "call_123",
  "toolName": "bash",
  "content": [{"type": "text", "text": "total 48\ndrwxr-xr-x ..."}],
  "isError": false,
  "timestamp": 1733234567890
}

BashExecutionMessage

Created by the bash RPC command (not by LLM tool calls):

{
  "role": "bashExecution",
  "command": "ls -la",
  "output": "total 48\ndrwxr-xr-x ...",
  "exitCode": 0,
  "cancelled": false,
  "truncated": false,
  "fullOutputPath": null,
  "timestamp": 1733234567890
}

Attachment

{
  "id": "img1",
  "type": "image",
  "fileName": "photo.jpg",
  "mimeType": "image/jpeg",
  "size": 102400,
  "content": "base64-encoded-data...",
  "extractedText": null,
  "preview": null
}

Example: Basic Client (Python)

import subprocess
import json

proc = subprocess.Popen(
    ["pi", "--mode", "rpc", "--no-session"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def send(cmd):
    proc.stdin.write(json.dumps(cmd) + "\n")
    proc.stdin.flush()

def read_events():
    for line in proc.stdout:
        yield json.loads(line)

# Send prompt
send({"type": "prompt", "message": "Hello!"})

# Process events
for event in read_events():
    if event.get("type") == "message_update":
        delta = event.get("assistantMessageEvent", {})
        if delta.get("type") == "text_delta":
            print(delta["delta"], end="", flush=True)

    if event.get("type") == "agent_end":
        print()
        break

Example: Interactive Client (Node.js)

See test/rpc-example.ts for a complete interactive example, or src/modes/rpc/rpc-client.ts for a typed client implementation.

For a complete example of handling the extension UI protocol, see ../examples/rpc-extension-ui.ts:
/**
 * RPC Extension UI Example (TUI)
 *
 * A lightweight TUI chat client that spawns the agent in RPC mode.
 * Demonstrates how to build a custom UI on top of the RPC protocol,
 * including handling extension UI requests (select, confirm, input, editor).
 *
 * Usage: npx tsx examples/rpc-extension-ui.ts
 *
 * Slash commands:
 *   /select  - demo select dialog
 *   /confirm - demo confirm dialog
 *   /input   - demo input dialog
 *   /editor  - demo editor dialog
 */

import { spawn } from "node:child_process";
import { dirname, join } from "node:path";
import * as readline from "node:readline";
import { fileURLToPath } from "node:url";
import { type Component, Container, Input, matchesKey, ProcessTerminal, SelectList, TUI } from "@mariozechner/pi-tui";

const __dirname = dirname(fileURLToPath(import.meta.url));

// ============================================================================
// ANSI helpers
// ============================================================================

const GREEN = "\x1b[32m";
const YELLOW = "\x1b[33m";
const BLUE = "\x1b[34m";
const MAGENTA = "\x1b[35m";
const RED = "\x1b[31m";
const DIM = "\x1b[2m";
const BOLD = "\x1b[1m";
const RESET = "\x1b[0m";

// ============================================================================
// Extension UI request type (subset of rpc-types.ts)
// ============================================================================

interface ExtensionUIRequest {
  type: "extension_ui_request";
  id: string;
  method: string;
  title?: string;
  options?: string[];
  message?: string;
  placeholder?: string;
  prefill?: string;
  notifyType?: "info" | "warning" | "error";
  statusKey?: string;
  statusText?: string;
  widgetKey?: string;
  widgetLines?: string[];
  text?: string;
}

// ============================================================================
// Output log: accumulates styled lines, renders the tail that fits
// ============================================================================

class OutputLog implements Component {
  private lines: string[] = [];
  private maxLines = 1000;
  private visibleLines = 0;

  setVisibleLines(n: number): void {
    this.visibleLines = n;
  }

  append(line: string): void {
    this.lines.push(line);
    if (this.lines.length > this.maxLines) {
      this.lines = this.lines.slice(-this.maxLines);
    }
  }

  appendRaw(text: string): void {
    if (this.lines.length === 0) {
      this.lines.push(text);
    } else {
      this.lines[this.lines.length - 1] += text;
    }
  }

  invalidate(): void {}

  render(width: number): string[] {
    if (this.lines.length === 0) return [""];
    const n = this.visibleLines > 0 ? this.visibleLines : this.lines.length;
    return this.lines.slice(-n).map((l) => l.slice(0, width));
  }
}

// ============================================================================
// Loading indicator: "Agent: Working." -> ".." -> "..." -> "."
// ============================================================================

class LoadingIndicator implements Component {
  private dots = 1;
  private intervalId: NodeJS.Timeout | null = null;
  private tui: TUI | null = null;

  start(tui: TUI): void {
    this.tui = tui;
    this.dots = 1;
    this.intervalId = setInterval(() => {
      this.dots = (this.dots % 3) + 1;
      this.tui?.requestRender();
    }, 400);
  }

  stop(): void {
    if (this.intervalId) {
      clearInterval(this.intervalId);
      this.intervalId = null;
    }
  }

  invalidate(): void {}

  render(_width: number): string[] {
    return [`${BLUE}${BOLD}Agent:${RESET} ${DIM}Working${".".repeat(this.dots)}${RESET}`];
  }
}

// ============================================================================
// Prompt input: label + single-line input
// ============================================================================

class PromptInput implements Component {
  readonly input: Input;
  onCtrlD?: () => void;

  constructor() {
    this.input = new Input();
  }

  handleInput(data: string): void {
    if (matchesKey(data, "ctrl+d")) {
      this.onCtrlD?.();
      return;
    }
    this.input.handleInput(data);
  }

  invalidate(): void {
    this.input.invalidate();
  }

  render(width: number): string[] {
    return [`${GREEN}${BOLD}You:${RESET}`, ...this.input.render(width)];
  }
}

// ============================================================================
// Dialog components: replace the prompt input during interactive requests
// ============================================================================

class SelectDialog implements Component {
private list: SelectList;
private title: string;
onSelect?: (value: string) => void;
onCancel?: () => void;

constructor(title: string, options: string[]) {
this.title = title;
const items = options.map((o) => ({ value: o, label: o }));
this.list = new SelectList(items, Math.min(items.length, 8), {
selectedPrefix: (t) => `${MAGENTA}${t}${RESET}`,
selectedText: (t) => `${MAGENTA}${t}${RESET}`,
description: (t) => `${DIM}${t}${RESET}`,
scrollInfo: (t) => `${DIM}${t}${RESET}`,
noMatch: (t) => `${YELLOW}${t}${RESET}`,
});
this.list.onSelect = (item) => this.onSelect?.(item.value);
this.list.onCancel = () => this.onCancel?.();
}

handleInput(data: string): void {
this.list.handleInput(data);
}

invalidate(): void {
this.list.invalidate();
}

render(width: number): string[] {
return [
`${MAGENTA}${BOLD}${this.title}${RESET}`,
...this.list.render(width),
`${DIM}Up/Down, Enter to select, Esc to cancel${RESET}`,
];
}
}

class InputDialog implements Component {
private dialogInput: Input;
private title: string;
onCtrlD?: () => void;

constructor(title: string, prefill?: string) {
this.title = title;
this.dialogInput = new Input();
if (prefill) this.dialogInput.setValue(prefill);
}

set onSubmit(fn: ((value: string) => void) | undefined) {
this.dialogInput.onSubmit = fn;
}

set onEscape(fn: (() => void) | undefined) {
this.dialogInput.onEscape = fn;
}

get inputComponent(): Input {
return this.dialogInput;
}

handleInput(data: string): void {
if (matchesKey(data, "ctrl+d")) {
this.onCtrlD?.();
return;
}
this.dialogInput.handleInput(data);
}

invalidate(): void {
this.dialogInput.invalidate();
}

render(width: number): string[] {
return [
`${MAGENTA}${BOLD}${this.title}${RESET}`,
...this.dialogInput.render(width),
`${DIM}Enter to submit, Esc to cancel${RESET}`,
];
}
}

// ============================================================================
// Main
// ============================================================================

async function main() {
const extensionPath = join(__dirname, "extensions/rpc-demo.ts");
const cliPath = join(__dirname, "../dist/cli.js");

const agent = spawn(
"node",
[cliPath, "--mode", "rpc", "--no-session", "--no-extension", "--extension", extensionPath],
{ stdio: ["pipe", "pipe", "pipe"] },
);

let stderr = "";
agent.stderr?.on("data", (data: Buffer) => {
stderr += data.toString();
});

await new Promise((resolve) => setTimeout(resolve, 500));
if (agent.exitCode !== null) {
console.error(`Agent exited immediately. Stderr:\n${stderr}`);
process.exit(1);
}

// -- TUI setup --

const terminal = new ProcessTerminal();
const tui = new TUI(terminal);

const outputLog = new OutputLog();
const loadingIndicator = new LoadingIndicator();
const promptInput = new PromptInput();

const root = new Container();
root.addChild(outputLog);
root.addChild(promptInput);

tui.addChild(root);
tui.setFocus(promptInput.input);

// -- Agent communication --

function send(obj: Record<string, unknown>): void {
agent.stdin!.write(`${JSON.stringify(obj)}\n`);
}

let isStreaming = false;
let hasTextOutput = false;

function exit(): void {
tui.stop();
agent.kill("SIGTERM");
process.exit(0);
}

// -- Bottom area management --
// The bottom of the screen is either the prompt input or a dialog.
// These helpers swap between them.

let activeDialog: Component | null = null;

function setBottomComponent(component: Component): void {
root.clear();
root.addChild(outputLog);
if (isStreaming) root.addChild(loadingIndicator);
root.addChild(component);
tui.setFocus(component);
tui.requestRender();
}

function showPrompt(): void {
activeDialog = null;
setBottomComponent(promptInput);
tui.setFocus(promptInput.input);
}

function showDialog(dialog: Component): void {
activeDialog = dialog;
setBottomComponent(dialog);
}

function showLoading(): void {
if (!isStreaming) {
isStreaming = true;
hasTextOutput = false;
root.clear();
root.addChild(outputLog);
root.addChild(loadingIndicator);
root.addChild(activeDialog ?? promptInput);
if (!activeDialog) tui.setFocus(promptInput.input);
loadingIndicator.start(tui);
tui.requestRender();
}
}

function hideLoading(): void {
loadingIndicator.stop();
root.clear();
root.addChild(outputLog);
root.addChild(activeDialog ?? promptInput);
if (!activeDialog) tui.setFocus(promptInput.input);
tui.requestRender();
}

// -- Extension UI dialog handling --

function showSelectDialog(title: string, options: string[], onDone: (value: string | undefined) => void): void {
const dialog = new SelectDialog(title, options);
dialog.onSelect = (value) => {
showPrompt();
onDone(value);
};
dialog.onCancel = () => {
showPrompt();
onDone(undefined);
};
showDialog(dialog);
}

function showInputDialog(title: string, prefill?: string, onDone?: (value: string | undefined) => void): void {
const dialog = new InputDialog(title, prefill);
dialog.onSubmit = (value) => {
showPrompt();
onDone?.(value.trim() || undefined);
};
dialog.onEscape = () => {
showPrompt();
onDone?.(undefined);
};
dialog.onCtrlD = exit;
showDialog(dialog);
tui.setFocus(dialog.inputComponent);
}

function handleExtensionUI(req: ExtensionUIRequest): void {
const { id, method } = req;

switch (method) {
// Dialog methods: replace prompt with interactive component
case "select": {
showSelectDialog(req.title ?? "Select", req.options ?? [], (value) => {
if (value !== undefined) {
send({ type: "extension_ui_response", id, value });
} else {
send({ type: "extension_ui_response", id, cancelled: true });
}
});
break;
}

case "confirm": {
const title = req.message ? `${req.title ?? "Confirm"}: ${req.message}` : (req.title ?? "Confirm");
showSelectDialog(title, ["Yes", "No"], (value) => {
send({ type: "extension_ui_response", id, confirmed: value === "Yes" });
});
break;
}

case "input": {
const title = req.placeholder ? `${req.title ?? "Input"} (${req.placeholder})` : (req.title ?? "Input");
showInputDialog(title, undefined, (value) => {
if (value !== undefined) {
send({ type: "extension_ui_response", id, value });
} else {
send({ type: "extension_ui_response", id, cancelled: true });
}
});
break;
}

case "editor": {
// Collapse newlines: the fallback editor dialog is a single-line input.
const prefill = req.prefill?.replace(/\n/g, " ");
showInputDialog(req.title ?? "Editor", prefill, (value) => {
if (value !== undefined) {
send({ type: "extension_ui_response", id, value });
} else {
send({ type: "extension_ui_response", id, cancelled: true });
}
});
break;
}

// Fire-and-forget methods: display as notification
case "notify": {
const notifyType = (req.notifyType as string) ?? "info";
const color = notifyType === "error" ? RED : notifyType === "warning" ? YELLOW : MAGENTA;
outputLog.append(`${color}${BOLD}Notification:${RESET} ${req.message}`);
tui.requestRender();
break;
}

case "setStatus":
outputLog.append(
`${MAGENTA}${BOLD}Notification:${RESET} ${DIM}[status: ${req.statusKey}]${RESET} ${req.statusText ?? "(cleared)"}`,
);
tui.requestRender();
break;

case "setWidget": {
const lines = req.widgetLines;
if (lines && lines.length > 0) {
outputLog.append(`${MAGENTA}${BOLD}Notification:${RESET} ${DIM}[widget: ${req.widgetKey}]${RESET}`);
for (const wl of lines) {
outputLog.append(` ${DIM}${wl}${RESET}`);
}
tui.requestRender();
}
break;
}

case "set_editor_text":
promptInput.input.setValue((req.text as string) ?? "");
tui.requestRender();
break;
}
}

// -- Slash commands (local, not sent to agent) --

function handleSlashCommand(cmd: string): boolean {
switch (cmd) {
case "/select":
showSelectDialog("Pick a color", ["Red", "Green", "Blue", "Yellow"], (value) => {
if (value) {
outputLog.append(`${MAGENTA}${BOLD}Notification:${RESET} You picked: ${value}`);
} else {
outputLog.append(`${MAGENTA}${BOLD}Notification:${RESET} Selection cancelled`);
}
tui.requestRender();
});
return true;

case "/confirm":
showSelectDialog("Are you sure?", ["Yes", "No"], (value) => {
const confirmed = value === "Yes";
outputLog.append(`${MAGENTA}${BOLD}Notification:${RESET} Confirmed: ${confirmed}`);
tui.requestRender();
});
return true;

case "/input":
showInputDialog("Enter your name", undefined, (value) => {
if (value) {
outputLog.append(`${MAGENTA}${BOLD}Notification:${RESET} You entered: ${value}`);
} else {
outputLog.append(`${MAGENTA}${BOLD}Notification:${RESET} Input cancelled`);
}
tui.requestRender();
});
return true;

case "/editor":
showInputDialog("Edit text", "Hello, world!", (value) => {
if (value) {
outputLog.append(`${MAGENTA}${BOLD}Notification:${RESET} Submitted: ${value}`);
} else {
outputLog.append(`${MAGENTA}${BOLD}Notification:${RESET} Editor cancelled`);
}
tui.requestRender();
});
return true;

default:
return false;
}
}

// -- Process agent stdout --

const stdoutRl = readline.createInterface({ input: agent.stdout!, terminal: false });

stdoutRl.on("line", (line) => {
let data: Record<string, unknown>;
try {
data = JSON.parse(line);
} catch {
return;
}

if (data.type === "response" && !data.success) {
outputLog.append(`${RED}[error]${RESET} ${data.command}: ${data.error}`);
tui.requestRender();
return;
}

if (data.type === "agent_start") {
showLoading();
return;
}

if (data.type === "extension_ui_request") {
handleExtensionUI(data as unknown as ExtensionUIRequest);
return;
}

if (data.type === "message_update") {
const evt = data.assistantMessageEvent as Record<string, unknown> | undefined;
if (evt?.type === "text_delta") {
if (!hasTextOutput) {
hasTextOutput = true;
outputLog.append("");
outputLog.append(`${BLUE}${BOLD}Agent:${RESET}`);
}
const delta = evt.delta as string;
const parts = delta.split("\n");
for (let i = 0; i < parts.length; i++) {
if (i > 0) outputLog.append("");
if (parts[i]) outputLog.appendRaw(parts[i]);
}
tui.requestRender();
}
return;
}

if (data.type === "tool_execution_start") {
outputLog.append(`${DIM}[tool: ${data.toolName}]${RESET}`);
tui.requestRender();
return;
}

if (data.type === "tool_execution_end") {
const resultJson = JSON.stringify(data.result) ?? "undefined";
const result = resultJson.length > 120 ? `${resultJson.slice(0, 120)}...` : resultJson;
outputLog.append(`${DIM}[result: ${result}]${RESET}`);
tui.requestRender();
return;
}

if (data.type === "agent_end") {
isStreaming = false;
hideLoading();
outputLog.append("");
tui.requestRender();
return;
}
});

// -- User input --

promptInput.input.onSubmit = (value) => {
const trimmed = value.trim();
if (!trimmed) return;

promptInput.input.setValue("");

if (handleSlashCommand(trimmed)) {
outputLog.append(`${GREEN}${BOLD}You:${RESET} ${trimmed}`);
tui.requestRender();
return;
}

outputLog.append(`${GREEN}${BOLD}You:${RESET} ${trimmed}`);
send({ type: "prompt", message: trimmed });
tui.requestRender();
};

promptInput.onCtrlD = exit;

promptInput.input.onEscape = () => {
if (isStreaming) {
send({ type: "abort" });
outputLog.append(`${YELLOW}[aborted]${RESET}`);
tui.requestRender();
} else {
exit();
}
};

// -- Agent exit --

agent.on("exit", (code) => {
tui.stop();
if (stderr) console.error(stderr);
console.log(`Agent exited with code ${code}`);
process.exit(code ?? 0);
});

// -- Start --

outputLog.append(`${BOLD}RPC Chat${RESET}`);
outputLog.append(`${DIM}Type a message and press Enter. Esc to abort or exit. Ctrl+D to quit.${RESET}`);
outputLog.append(`${DIM}Slash commands: /select /confirm /input /editor${RESET}`);
outputLog.append("");

tui.start();
}

main().catch((err) => {
console.error(err);
process.exit(1);
});

This client pairs with the `../examples/extensions/rpc-demo.ts` extension:
/**
* RPC Extension UI Demo
*
* Purpose-built extension that exercises all RPC-supported extension UI methods.
* Designed to be loaded alongside the rpc-extension-ui-example.ts script to
* demonstrate the full extension UI protocol.
*
* UI methods exercised:
* - select() - on tool_call for dangerous bash commands
* - confirm() - on session_before_switch
* - input() - via /rpc-input command
* - editor() - via /rpc-editor command
* - notify() - after each dialog completes
* - setStatus() - on turn_start/turn_end
* - setWidget() - on session_start
* - setTitle() - on session_start and session_switch
* - setEditorText() - via /rpc-prefill command
*/

import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";

export default function (pi: ExtensionAPI) {
let turnCount = 0;

// -- setTitle, setWidget, setStatus on session lifecycle --

pi.on("session_start", async (_event, ctx) => {
ctx.ui.setTitle("pi RPC Demo");
ctx.ui.setWidget("rpc-demo", ["--- RPC Extension UI Demo ---", "Loaded and ready."]);
ctx.ui.setStatus("rpc-demo", `Turns: ${turnCount}`);
});

pi.on("session_switch", async (_event, ctx) => {
turnCount = 0;
ctx.ui.setTitle("pi RPC Demo (new session)");
ctx.ui.setStatus("rpc-demo", `Turns: ${turnCount}`);
});

// -- setStatus on turn lifecycle --

pi.on("turn_start", async (_event, ctx) => {
turnCount++;
ctx.ui.setStatus("rpc-demo", `Turn ${turnCount} running...`);
});

pi.on("turn_end", async (_event, ctx) => {
ctx.ui.setStatus("rpc-demo", `Turn ${turnCount} done`);
});

// -- select on dangerous tool calls --

pi.on("tool_call", async (event, ctx) => {
if (event.toolName !== "bash") return undefined;

const command = event.input.command as string;
const isDangerous = /\brm\s+(-rf?|--recursive)/i.test(command) || /\bsudo\b/i.test(command);

if (isDangerous) {
if (!ctx.hasUI) {
return { block: true, reason: "Dangerous command blocked (no UI)" };
}

const choice = await ctx.ui.select(`Dangerous command: ${command}`, ["Allow", "Block"]);
if (choice !== "Allow") {
ctx.ui.notify("Command blocked by user", "warning");
return { block: true, reason: "Blocked by user" };
}
ctx.ui.notify("Command allowed", "info");
}

return undefined;
});

// -- confirm on session clear --

pi.on("session_before_switch", async (event, ctx) => {
if (event.reason !== "new") return;
if (!ctx.hasUI) return;

const confirmed = await ctx.ui.confirm("Clear session?", "All messages will be lost.");
if (!confirmed) {
ctx.ui.notify("Clear cancelled", "info");
return { cancel: true };
}
});

// -- input via command --

pi.registerCommand("rpc-input", {
description: "Prompt for text input (demonstrates ctx.ui.input in RPC)",
handler: async (_args, ctx) => {
const value = await ctx.ui.input("Enter a value", "type something...");
if (value) {
ctx.ui.notify(`You entered: ${value}`, "info");
} else {
ctx.ui.notify("Input cancelled", "info");
}
},
});

// -- editor via command --

pi.registerCommand("rpc-editor", {
description: "Open multi-line editor (demonstrates ctx.ui.editor in RPC)",
handler: async (_args, ctx) => {
const text = await ctx.ui.editor("Edit some text", "Line 1\nLine 2\nLine 3");
if (text) {
ctx.ui.notify(`Editor submitted (${text.split("\n").length} lines)`, "info");
} else {
ctx.ui.notify("Editor cancelled", "info");
}
},
});

// -- setEditorText via command --

pi.registerCommand("rpc-prefill", {
description: "Prefill the input editor (demonstrates ctx.ui.setEditorText in RPC)",
handler: async (_args, ctx) => {
ctx.ui.setEditorText("This text was set by the rpc-demo extension.");
ctx.ui.notify("Editor prefilled", "info");
},
});
}

For comparison, a minimal client without a TUI looks like this:

const { spawn } = require("child_process");
const readline = require("readline");

const agent = spawn("pi", ["--mode", "rpc", "--no-session"]);

readline.createInterface({ input: agent.stdout }).on("line", (line) => {
const event = JSON.parse(line);

if (event.type === "message_update") {
const { assistantMessageEvent } = event;
if (assistantMessageEvent.type === "text_delta") {
process.stdout.write(assistantMessageEvent.delta);
}
}
});

// Send prompt
agent.stdin.write(JSON.stringify({ type: "prompt", message: "Hello" }) + "\n");

// Abort on Ctrl+C
process.on("SIGINT", () => {
agent.stdin.write(JSON.stringify({ type: "abort" }) + "\n");
});
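Every command also accepts an optional `id` that is echoed back on the matching `type: "response"` line, which makes it straightforward to layer request/reply semantics over the stream. The sketch below shows one way to do that with a map of pending resolvers; the `createRpcClient` name and its stream-pair interface are illustrative, not part of the agent's API, and event lines without a matching `id` are simply ignored here (a real client would dispatch them to an event handler).

```typescript
import * as readline from "node:readline";

// Correlate commands with responses via the optional `id` field.
// Works against any pair of streams, e.g. agent.stdin / agent.stdout.
function createRpcClient(stdin: NodeJS.WritableStream, stdout: NodeJS.ReadableStream) {
	const pending = new Map<string, (response: Record<string, unknown>) => void>();
	let nextId = 0;

	readline.createInterface({ input: stdout, terminal: false }).on("line", (line) => {
		let data: Record<string, unknown>;
		try {
			data = JSON.parse(line);
		} catch {
			return; // ignore anything that is not a JSON line
		}
		// Only `response` lines carry the correlation id; other events stream past.
		if (data.type === "response" && typeof data.id === "string" && pending.has(data.id)) {
			pending.get(data.id)!(data);
			pending.delete(data.id);
		}
	});

	return {
		// Send a command with a generated id and resolve with the matching response.
		request(command: Record<string, unknown>): Promise<Record<string, unknown>> {
			const id = `req-${++nextId}`;
			return new Promise((resolve) => {
				pending.set(id, resolve);
				stdin.write(`${JSON.stringify({ id, ...command })}\n`);
			});
		},
	};
}
```

With the real agent process you would pass `agent.stdin` and `agent.stdout`; note that a `prompt` response only acknowledges the command, while the assistant's output still arrives as streamed events.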