
# Agent Context API

Manage your Agent Context server programmatically. Requires a Pro subscription and an API key.

Base URL: `https://api.rebyte.ai/v1/context-lake`

Authentication works the same as the Agent Computer API: pass your key in the `API_KEY` header:

```shell
curl https://api.rebyte.ai/v1/context-lake/datasets \
  -H "API_KEY: rbk_your_key_here"
```

| Method | Path | Description |
| --- | --- | --- |
| GET | `/config` | Get full YAML config |
| PATCH | `/config` | Partial config update |
| GET | `/datasets` | List datasets |
| POST | `/datasets` | Add a dataset |
| PUT | `/datasets/:name` | Update a dataset |
| DELETE | `/datasets/:name` | Remove a dataset |
| GET | `/views` | List views |
| POST | `/views` | Add a view |
| PUT | `/views/:name` | Update a view |
| DELETE | `/views/:name` | Remove a view |
| POST | `/sql` | Run a SQL query |
| GET | `/status` | Get VM and dataset status |
| POST | `/start` | Start the VM |
| POST | `/stop` | Hibernate the VM |
| POST | `/redeploy` | Redeploy configuration |

## GET /config

Returns the full YAML configuration (SpiceD spicepod format with extensions).

Response:

```json
{
  "yaml": "version: v1\nkind: Spicepod\n...",
  "config": { "version": "v1", "kind": "Spicepod", "datasets": [...], "views": [...] }
}
```
## PATCH /config

Add, remove, or update datasets and views in one atomic call. Operations are applied in a fixed order: removes first, then adds, then updates. If any part fails validation, nothing is applied.

```json
{
  "addDatasets": [{ "from": "s3://bucket/data.parquet", "name": "sales", "params": {...}, "notifications": true }],
  "removeDatasets": ["old_data"],
  "updateDatasets": { "sales": { "params": { "file_format": "csv" } } },
  "addViews": [{ "name": "summary", "sql": "SELECT region, SUM(amount) FROM sales GROUP BY region" }],
  "removeViews": ["old_view"]
}
```

All fields are optional. `updateDatasets` replaces each specified field wholesale; nested values are not merged.
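As a minimal sketch of a PATCH /config call using only Python's standard library (the `patch_config` helper name and the `rbk_your_key_here` key are placeholders, not part of the API), the helper below only builds the authenticated request; the `urlopen` line is left commented so you can supply a real key first.

```python
import json
import urllib.request

BASE = "https://api.rebyte.ai/v1/context-lake"
API_KEY = "rbk_your_key_here"  # placeholder; use your real key

def patch_config(changes: dict) -> urllib.request.Request:
    """Build a PATCH /config request; the server applies removes, then adds, then updates."""
    return urllib.request.Request(
        f"{BASE}/config",
        data=json.dumps(changes).encode(),
        method="PATCH",
        headers={"API_KEY": API_KEY, "Content-Type": "application/json"},
    )

req = patch_config({
    "removeDatasets": ["old_data"],
    "addViews": [{"name": "summary",
                  "sql": "SELECT region, SUM(amount) FROM sales GROUP BY region"}],
})
# urllib.request.urlopen(req)  # uncomment to apply; fails atomically on validation errors
```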


## Datasets

- `GET /datasets` — List all datasets
- `POST /datasets` — Add a dataset
- `PUT /datasets/:name` — Update a dataset
- `DELETE /datasets/:name` — Remove a dataset

POST/PUT body:

```json
{
  "from": "s3://bucket/path/data.parquet",
  "name": "sales",
  "params": {
    "s3_auth": "key",
    "s3_key": "AKIA...",
    "s3_secret": "...",
    "s3_region": "us-east-1",
    "path": "bucket/path/data.parquet",
    "file_format": "parquet"
  },
  "notifications": true
}
```

All parameters are validated per connector type. Invalid params return 400 with specific error messages:

```json
{
  "error": "Validation failed",
  "errors": [
    { "path": "s3_key", "message": "s3_key is required when auth mode is key" }
  ]
}
```
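A sketch of adding a dataset and surfacing those per-field validation messages, again with Python's standard library (the `add_dataset` helper and the key are placeholders; the network call is left commented). The spec mirrors the POST body above, and a 400 response carries the `errors` array.

```python
import json
import urllib.error
import urllib.request

BASE = "https://api.rebyte.ai/v1/context-lake"
API_KEY = "rbk_your_key_here"  # placeholder; use your real key

def add_dataset(spec: dict) -> urllib.request.Request:
    """Build a POST /datasets request from a dataset spec."""
    return urllib.request.Request(
        f"{BASE}/datasets",
        data=json.dumps(spec).encode(),
        method="POST",
        headers={"API_KEY": API_KEY, "Content-Type": "application/json"},
    )

req = add_dataset({
    "from": "s3://bucket/path/data.parquet",
    "name": "sales",
    "params": {"s3_auth": "key", "s3_key": "AKIA...", "s3_secret": "...",
               "s3_region": "us-east-1", "path": "bucket/path/data.parquet",
               "file_format": "parquet"},
    "notifications": True,
})

# try:
#     urllib.request.urlopen(req)
# except urllib.error.HTTPError as e:  # 400 carries per-field messages
#     for err in json.loads(e.read())["errors"]:
#         print(err["path"], "->", err["message"])
```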

## Views

- `GET /views` — List all views
- `POST /views` — Add a view
- `PUT /views/:name` — Update a view
- `DELETE /views/:name` — Remove a view

POST/PUT body:

```json
{
  "name": "monthly_summary",
  "sql": "SELECT month, SUM(amount) FROM sales GROUP BY month"
}
```
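Since POST creates a view and PUT updates one by name with the same body, both can share one request builder. A sketch (the `upsert_view` helper name is a placeholder; no request is actually sent):

```python
import json
import urllib.request

BASE = "https://api.rebyte.ai/v1/context-lake"
API_KEY = "rbk_your_key_here"  # placeholder; use your real key

def upsert_view(name: str, sql: str, create: bool = True) -> urllib.request.Request:
    """POST /views creates a view; PUT /views/:name updates an existing one."""
    path = "/views" if create else f"/views/{name}"
    return urllib.request.Request(
        f"{BASE}{path}",
        data=json.dumps({"name": name, "sql": sql}).encode(),
        method="POST" if create else "PUT",
        headers={"API_KEY": API_KEY, "Content-Type": "application/json"},
    )

req = upsert_view("monthly_summary",
                  "SELECT month, SUM(amount) FROM sales GROUP BY month")
# urllib.request.urlopen(req)  # uncomment to create the view
```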

## POST /sql

Run a SQL query against your datasets and views. Auto-starts the VM if it’s paused or not provisioned — the call blocks until the VM is ready and the query completes (up to 3 minutes).

Request:

```json
{ "query": "SELECT * FROM sales WHERE region = 'us-east' LIMIT 10" }
```

Response:

```json
{ "rows": [{ "id": 1, "name": "Alice", "amount": 100.5, "region": "us-east" }, ...] }
```

If the VM is cold-starting, this call may take 1-3 minutes. Returns 504 if the 3-minute timeout is exceeded.
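Because the call can block through a cold start, a client should set a generous socket timeout. A sketch with the standard library (the `sql_request` helper is a placeholder; the network call is commented out, with a 180-second timeout to match the 3-minute ceiling):

```python
import json
import urllib.request

BASE = "https://api.rebyte.ai/v1/context-lake"
API_KEY = "rbk_your_key_here"  # placeholder; use your real key

def sql_request(query: str) -> urllib.request.Request:
    """Build a POST /sql request."""
    return urllib.request.Request(
        f"{BASE}/sql",
        data=json.dumps({"query": query}).encode(),
        method="POST",
        headers={"API_KEY": API_KEY, "Content-Type": "application/json"},
    )

req = sql_request("SELECT * FROM sales WHERE region = 'us-east' LIMIT 10")
# Allow up to 3 minutes so a cold-starting VM can come up:
# with urllib.request.urlopen(req, timeout=180) as resp:
#     rows = json.loads(resp.read())["rows"]
```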


## GET /status

Response:

```json
{
  "vmStatus": "running",
  "datasets": {
    "sales": {
      "ok": true,
      "status": "ready",
      "connectorId": "s3",
      "notifications": true,
      "s3LastEventAt": "2026-04-01T08:30:00Z",
      "s3LastRefreshAt": "2026-04-01T08:31:00Z",
      "s3EventCount": 5
    }
  }
}
```

`vmStatus` values: `running`, `paused`, `provisioning`, `error`, `not_provisioned`.

When the VM is paused, dataset status shows "status": "unknown".


## VM lifecycle

- `POST /start` — Start or provision the VM
- `POST /stop` — Hibernate the VM
- `POST /redeploy` — Redeploy configuration to VM
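A common pattern is to start the VM and then poll GET /status until `vmStatus` settles. A sketch with the standard library (the `start_and_wait` helper and its poll intervals are placeholders of my choosing; the function is defined but not invoked here since it performs network calls):

```python
import json
import time
import urllib.request

BASE = "https://api.rebyte.ai/v1/context-lake"
API_KEY = "rbk_your_key_here"  # placeholder; use your real key
HEADERS = {"API_KEY": API_KEY, "Content-Type": "application/json"}

def start_and_wait(poll_seconds: int = 5, max_polls: int = 36) -> str:
    """POST /start, then poll GET /status until the VM reaches a terminal state."""
    urllib.request.urlopen(urllib.request.Request(
        f"{BASE}/start", data=b"{}", method="POST", headers=HEADERS))
    for _ in range(max_polls):
        with urllib.request.urlopen(urllib.request.Request(
                f"{BASE}/status", headers=HEADERS)) as r:
            status = json.loads(r.read())["vmStatus"]
        if status in ("running", "error"):
            return status
        time.sleep(poll_seconds)  # still "provisioning"; keep waiting
    return "timeout"

# start_and_wait()  # call with a real key; returns "running" once the VM is up
```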

## S3 event notifications

To refresh data automatically when files change in S3, set `"notifications": true` on any S3 dataset.

When enabled, the API response includes the SQS queue ARN:

```json
{
  "ok": true,
  "notifications": {
    "queueArn": "arn:aws:sqs:us-east-1:...",
    "region": "us-east-1",
    "instructions": [
      "Go to S3 → your bucket → Properties → Event notifications",
      "Create notification with events: ObjectCreated:*, ObjectRemoved:*",
      "Destination = SQS queue, paste the ARN above"
    ]
  }
}
```

After configuring the S3 notification, file changes are detected automatically. Track status via GET /status.
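Notifications can also be switched on for an existing dataset with a PUT. A sketch (the `enable_notifications` helper is a placeholder, and the commented lines show where the `queueArn` would be read from a real response):

```python
import json
import urllib.request

BASE = "https://api.rebyte.ai/v1/context-lake"
API_KEY = "rbk_your_key_here"  # placeholder; use your real key

def enable_notifications(name: str, spec: dict) -> urllib.request.Request:
    """Build a PUT /datasets/:name request with notifications enabled."""
    body = dict(spec, notifications=True)
    return urllib.request.Request(
        f"{BASE}/datasets/{name}",
        data=json.dumps(body).encode(),
        method="PUT",
        headers={"API_KEY": API_KEY, "Content-Type": "application/json"},
    )

req = enable_notifications("sales", {"from": "s3://bucket/path/data.parquet"})
# resp = json.loads(urllib.request.urlopen(req).read())
# print(resp["notifications"]["queueArn"])  # paste this ARN into the S3 event config
```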