Agent Harness

Every Agent Computer is a persistent cloud machine with a file system, terminal, browser, and installed skills. The agent harness is the AI layer that runs on top of this machine. It receives your task prompt, calls a language model for reasoning, and uses the computer’s tools to do the work.

Agent Harness runs on top of the Agent Computer — the harness calls an LLM while the computer provides terminal, browser, file system, git, skills, and Agent Context

The harness is swappable — you can run Claude Code on a task, then switch to Gemini CLI for the next one. The computer doesn’t care which agent runs on it. Files, git state, installed tools, and browser sessions persist across harness changes.

Each harness instance runs one executor — the specific AI coding agent that drives the work.

| Executor | Agent | Description |
| --- | --- | --- |
| `claude` | Claude Code | Anthropic's coding agent. Default executor. |
| `gemini` | Gemini CLI | Google's coding agent. Auto-routes between flash and pro models. |
| `codex` | Codex | OpenAI's coding agent. |
| `opencode` | OpenCode | Open-source multi-provider agent. |

Every executor has full access to the same set of tools provided by the Agent Computer:

| Tool | What it does |
| --- | --- |
| Terminal | Full shell access. Install packages, run builds, execute scripts. |
| File system | Read, write, create, delete files. Persistent across tasks. |
| Git | Clone repos, create branches, commit, push. GitHub integration built-in. |
| Browser | Full remote Chrome instance. Navigate, interact, take screenshots. Not headless. |
| Skills | Invoke installed skills — team-private, built-in, or public. |
| Agent Context | Query your organization's connected data sources with SQL. |

The difference between executors is which language model powers the reasoning — the tools they can use are identical.

When a task runs, the executor calls an LLM. How that call is routed depends on the auth mode — see Authentication for the full breakdown of credits, BYOK, and OAuth from the user’s perspective.

Model routing — Credits mode routes through Rebyte Model Proxy to any provider, BYOK and OAuth modes go direct to the provider API

Credits mode — requests route through the Rebyte model proxy, which handles authentication with upstream providers. The claude executor can access models from any provider (Anthropic, OpenAI, Google, OpenRouter) through this proxy.

BYOK mode — requests go directly to the provider using the organization’s own API key. Each executor is restricted to its native provider’s models only (e.g., claude with BYOK can only use Anthropic models).

OAuth mode — available for Claude Code and Gemini CLI only. Each member connects their personal account; traffic goes direct to the provider, paid by the member’s own subscription. No credits consumed, no org API key shared.
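The three auth modes above boil down to one routing decision per request. The sketch below is illustrative only: the function and return values are assumptions, not Rebyte's actual API, but the branching mirrors the rules described above.

```python
PROXY = "rebyte-model-proxy"

def route_request(auth_mode: str, provider: str) -> str:
    """Return where an LLM request is sent for a given auth mode."""
    if auth_mode == "credits":
        # Credits mode: all traffic goes through the Rebyte model proxy,
        # which holds the upstream provider credentials.
        return PROXY
    if auth_mode in ("api_key", "oauth"):
        # BYOK and OAuth: traffic goes straight to the provider's API,
        # authenticated with the org key or the member's own account.
        return f"direct:{provider}"
    raise ValueError(f"unknown auth mode: {auth_mode}")
```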

| Model | Provider | Executors |
| --- | --- | --- |
| `claude-opus-4.6` | Anthropic | `claude`, `opencode` |
| `claude-sonnet-4.6` | Anthropic | `claude`, `opencode` |
| `gpt-5.4` | OpenAI | `claude`, `codex`, `opencode` |
| `gpt-5.3-codex` | OpenAI | `claude`, `codex`, `opencode` |
| `gemini-3.1-pro` | Google | `claude`, `opencode` |
| `gemini-3-flash` | Google | `claude`, `opencode` |
| `auto-gemini-3` | Google | `gemini` |
| `minimax-m2.7` | Open Source | `claude`, `opencode` |
| `kimi-k2.5` | Open Source | `claude`, `opencode` |
| `glm-5` | Open Source | `claude`, `opencode` |

Open Source models are routed via OpenRouter. All other models route directly to their provider’s API.

| Executor | Default model |
| --- | --- |
| `claude` | `claude-sonnet-4.6` |
| `codex` | `gpt-5.4` |
| `opencode` | `gemini-3.1-pro` |
| `gemini` | `auto-gemini-3` |

Users can override the model per task.

Organization admins manage executor and model access through Settings > Integrations > Code Agents.

| Setting | Effect |
| --- | --- |
| `enabled` | Enable/disable an executor for the entire org |
| `authMethod` | Pick one of credits, api_key (BYOK), or oauth per executor — see Authentication |
| `disabledModels` | Block specific models from being used |
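How these settings combine when a member starts a task can be sketched as follows. The field names mirror the table above, but the shape of the settings object is an assumption, not Rebyte's actual schema.

```python
def can_run(settings: dict, executor: str, model: str) -> bool:
    """Check whether an org's settings permit this executor/model pair."""
    cfg = settings.get(executor)
    if cfg is None or not cfg.get("enabled", False):
        return False  # executor disabled (or unconfigured) for the whole org
    if model in cfg.get("disabledModels", []):
        return False  # model explicitly blocked by an admin
    return True
```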

BYOK lets an organization route LLM requests directly to the provider using their own API keys.

  1. Admin stores an API key for an executor (claude, codex, or gemini) in Settings > Integrations > Code Agents
  2. When a team member runs a task, the agent authenticates directly with the provider using the org’s key
  3. No Rebyte credits are consumed — the org pays the provider directly
  4. If no BYOK key is configured, the task routes through the Rebyte model proxy and consumes credits
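The fallback in step 4 can be sketched as a single lookup. This is illustrative pseudologic, not Rebyte code; the function name and return shape are assumptions.

```python
def pick_auth(org_keys: dict, executor: str) -> tuple[str, bool]:
    """Return (route, consumes_credits) for a task run under BYOK rules."""
    if org_keys.get(executor):
        # BYOK key configured: go direct to the provider, org pays them.
        return ("direct", False)
    # No key configured: fall back to the Rebyte model proxy, spend credits.
    return ("proxy", True)
```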

With BYOK, each executor is restricted to its native provider’s models:

| Executor | Provider | BYOK models |
| --- | --- | --- |
| `claude` | Anthropic | `claude-sonnet-4.6`, `claude-opus-4.6` |
| `codex` | OpenAI | `gpt-5.4`, `gpt-5.3-codex` |
| `gemini` | Google | `auto-gemini-3` |

opencode does not support BYOK — it routes through the model proxy only.

Without BYOK, the claude executor can access models from any provider through the model proxy. With BYOK, it is restricted to Anthropic models only, since the org’s Anthropic key cannot authenticate against other providers.
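The effect of this restriction on the claude executor can be sketched as below. The model sets come from the tables on this page; the function itself is hypothetical.

```python
# Every model reachable through the Rebyte model proxy (from the model table).
PROXY_MODELS = {
    "claude-opus-4.6", "claude-sonnet-4.6", "gpt-5.4", "gpt-5.3-codex",
    "gemini-3.1-pro", "gemini-3-flash", "minimax-m2.7", "kimi-k2.5", "glm-5",
}
# The native-provider models available to claude under BYOK.
ANTHROPIC_MODELS = {"claude-opus-4.6", "claude-sonnet-4.6"}

def claude_allowed_models(byok: bool) -> set[str]:
    # With BYOK the org's Anthropic key can only authenticate against
    # Anthropic, so the executor loses every other provider's models.
    return ANTHROPIC_MODELS if byok else PROXY_MODELS
```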