Codex Backend

The Codex backend wraps the OpenAI Codex CLI (codex command) to execute prompts against OpenAI’s GPT models. Like the Claude backend, AZUREAL uses a non-interactive execution mode that exits after producing a response.


Command Structure

Every Codex invocation follows this pattern:

codex exec --json "<prompt>"

Like the Claude backend, AZUREAL does not use the Codex CLI’s native resume mechanism. Conversation continuity is handled entirely through context injection from the SQLite session store. Each prompt spawns a fresh process with the full context prepended.

Flag / Argument    Purpose
exec               Non-interactive execution mode.
--json             Emits structured JSON output for machine parsing.
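The invocation pattern above can be sketched as a small command builder. This is an illustrative sketch, not AZUREAL's actual code: the function name and the exact way stored context is prepended to the prompt are assumptions.

```python
def build_codex_command(prompt: str, context: str = "") -> list[str]:
    """Assemble a `codex exec --json` invocation (hypothetical helper).

    Context injection: any stored session context is prepended to the
    prompt, since each invocation is a fresh process with no native
    resume state.
    """
    full_prompt = f"{context}\n\n{prompt}" if context else prompt
    return ["codex", "exec", "--json", full_prompt]
```

The returned argument list would then be handed to a process spawner; the prompt always travels as a single final argument.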

Session ID Capture

When a Codex process starts a new thread, it emits a thread.started event containing a thread_id field. AZUREAL captures this ID and associates it with the active session slot.

The thread ID is used for display and diagnostics. It is not used for resumption – context injection replaces that role, just as with the Claude backend.


Permission Modes

AZUREAL uses two of the Codex CLI's permission modes:

Dangerously Bypass Approvals and Sandbox

codex exec --json --dangerously-bypass-approvals-and-sandbox "<prompt>"

This flag disables all approval prompts and sandbox restrictions. The agent can read files, write files, execute commands, and perform any action without confirmation. This is the Codex equivalent of Claude’s --dangerously-skip-permissions flag.

Full Auto

codex exec --json --full-auto "<prompt>"

Full auto mode allows the agent to operate autonomously while still respecting sandbox boundaries. The agent can proceed without manual approval for standard operations, but destructive or out-of-scope actions may still be restricted. This is a middle ground between fully restricted and fully unrestricted operation.
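Selecting between the two modes reduces to picking the matching CLI flag. A sketch, with the mode labels ("bypass", "full-auto") chosen for illustration rather than taken from AZUREAL's actual configuration keys:

```python
def permission_args(mode: str) -> list[str]:
    """Map a permission mode label (illustrative names) to its
    Codex CLI flag."""
    if mode == "bypass":
        # No approval prompts, no sandbox restrictions.
        return ["--dangerously-bypass-approvals-and-sandbox"]
    if mode == "full-auto":
        # Autonomous operation within sandbox boundaries.
        return ["--full-auto"]
    raise ValueError(f"unknown permission mode: {mode}")
```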


Model Selection

The Codex backend serves six models:

Model                 Alias
GPT-5.4               gpt-5.4
GPT-5.3 Codex         gpt-5.3-codex
GPT-5.2 Codex         gpt-5.2-codex
GPT-5.2               gpt-5.2
GPT-5.1 Codex Max     gpt-5.1-codex-max
GPT-5.1 Codex Mini    gpt-5.1-codex-mini

All models with names starting with gpt- are automatically routed to the Codex backend. See Model Switcher for the full model cycle.
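The prefix-based routing rule is simple enough to state as code. A sketch under the assumption that the only two backends are "codex" and "claude" (the backend identifiers here are illustrative):

```python
def backend_for_model(alias: str) -> str:
    """Route a model alias to its backend: aliases beginning with
    "gpt-" go to the Codex backend, everything else to Claude."""
    return "codex" if alias.startswith("gpt-") else "claude"
```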


Streaming and Event Parsing

The --json flag causes Codex CLI to emit structured JSON events. AZUREAL reads these events from the process output and converts them into the same AgentEvent and DisplayEvent types used by the Claude backend. The key events include:

  • thread.started – thread creation, carrying the thread_id for session identification.
  • Assistant text – incremental response text from the model.
  • Tool calls and results – file operations, command execution, and their outcomes.
  • Error – error conditions reported by the CLI.

Because both backends produce the same DisplayEvent values, the session pane, session store, and rendering pipeline handle Claude and Codex output identically. You can switch between Claude and Codex models mid-session, and the conversation renders seamlessly.


Process Lifecycle

Each Codex process follows the same lifecycle as Claude:

  1. Spawn: A new codex exec process is started with the context-injected prompt.
  2. Stream: JSON events are read and parsed in real time.
  3. Exit: The process exits when the response is complete.
  4. Ingest: Events are appended to the SQLite store and temporary output files are cleaned up.

The process does not persist between prompts. See Session Lifecycle for the full end-to-end flow.
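The four lifecycle steps can be sketched end to end. Everything here is a stand-in for AZUREAL's internals: the function name is hypothetical, the command is parameterized only so the sketch stays testable, and ingestion into SQLite is left to the caller.

```python
import json
import subprocess

def run_codex_turn(prompt: str, command=("codex", "exec", "--json")):
    """Sketch of one turn: spawn, stream, exit, ingest (steps 1-4)."""
    # 1. Spawn: fresh non-interactive process with the injected prompt.
    proc = subprocess.Popen(
        [*command, prompt],
        stdout=subprocess.PIPE,
        text=True,
    )
    events = []
    # 2. Stream: parse JSON events as they arrive on stdout.
    for line in proc.stdout:
        if line.strip():
            events.append(json.loads(line))
    # 3. Exit: the process terminates once the response is complete.
    proc.wait()
    # 4. Ingest: the caller appends these events to the SQLite store
    #    and cleans up any temporary output files.
    return events
```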