Professional slide creation and polish. The slides app lives at /code/slides/. Create, iterate, and publish HTML or image decks. Each {slug}/ folder is a deck; the code agent is the app's runtime user.
/code/slides/ is an app. Each {slug}/ is a deck. You are the app's runtime; the user is its user.
Every deck must satisfy this. Full details + validation script: references/protocol.md.
```
/code/slides/
├── .slide                    # Marker file (triggers gallery in frontend)
└── {slug}/
    ├── index.html            # The deck — sections + nav engine
    ├── outline.md            # Deck outline (kept for re-generation)
    ├── source.{ext}          # Original input content
    ├── prompts/              # Per-slide prompts (IMAGE MODE ONLY)
    │   └── 01-slide-{slug}.md, ...
    └── 01.png, 02.png, ...   # One image per section (zero-padded, REQUIRED)
```
Invariants:
- index.html has N `<section data-page="1..N">` elements
- NN.png exists for every section — no missing, no extras
- Run scripts/validate-protocol.sh before continuing
- Update /code/INDEX.md after every create/update/delete

Every HTML deck's aesthetic comes from the live style service. There is no hardcoded list of aesthetics in this skill — styles are discovered at task time. Each style package ships concrete tokens, worked snippets, signature moves, don'ts, and a full reference deck.
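A minimal shell sketch of the first two invariants (illustrative only; the authoritative check is scripts/validate-protocol.sh, and the /tmp demo deck is hypothetical):

```shell
# Illustrative invariant check: section count must equal zero-padded PNG count.
DECK=/tmp/demo-deck
mkdir -p "$DECK"
printf '<section data-page="1"></section>\n<section data-page="2"></section>\n' > "$DECK/index.html"
touch "$DECK/01.png" "$DECK/02.png"
sections=$(grep -c 'data-page="[0-9]*"' "$DECK/index.html")   # lines with a data-page attribute
pngs=$(ls "$DECK" | grep -c '^[0-9][0-9]\.png$')              # zero-padded PNGs only
if [ "$sections" -eq "$pngs" ]; then echo "OK: $sections sections, $pngs images"; fi
```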
Do not list aesthetics from memory. Always fetch the live list and present those options to the user. If memory contradicts the service, the service is correct.
```bash
RELAY_URL="${REBYTE_RELAY_URL:-https://api.rebyte.ai}"
curl -sf "$RELAY_URL/api/styles" -o /tmp/styles.json
jq '.styles.slide' /tmp/styles.json
```
Each entry: { id, name, summary, signals, version, bundle_url }.
Compare the user's prompt against each style's signals array. The style with the most overlapping signals is the recommendation. If there's a tie, pick the first one. If no signals match any style, don't guess — ask the user to choose explicitly.
During Step 2 Q2 (Style) below, present only the styles returned by /api/styles. Do not add, remove, or rename entries. Show each style's name and summary verbatim; mark the recommendation with (Recommended).
```bash
STYLE_ID=<user-picked id>
BUNDLE_URL=$(jq -r --arg id "$STYLE_ID" '.styles.slide[] | select(.id==$id) | .bundle_url' /tmp/styles.json)
mkdir -p ~/.slide-styles
curl -sfL "$BUNDLE_URL" -o /tmp/style.zip
unzip -q -o /tmp/style.zip -d ~/.slide-styles/
ls ~/.slide-styles/$STYLE_ID/   # manifest.json, snippets/, reference/
```
The extracted bundle is used as the design guideline in Step 7 (HTML generation). Full usage: html/html-slides.md.
Every deck.html MUST include this meta tag in <head> — it tells follow-up edits which style to keep the deck consistent with:
```html
<meta name="rebyte-style" content="<STYLE_ID>@<VERSION>">
```
If the deck already exists and declares <meta name="rebyte-style" content="<id>@<version>">, keep using that same style id for edits — no need to fetch or ask again. If the existing deck has no such meta tag, it's a legacy deck: keep its existing data-aesthetic/data-font attributes as-is and edit only the content.
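One way to sketch that detection in shell (the demo file is hypothetical; real decks live at /code/slides/{slug}/index.html):

```shell
# Recover style id and version from an existing deck's meta tag.
DECK=/tmp/demo-index.html
printf '<head><meta name="rebyte-style" content="blueprint@2"></head>\n' > "$DECK"
pair=$(grep -o 'name="rebyte-style" content="[^"]*"' "$DECK" | sed 's/.*content="//; s/"$//')
STYLE_ID=${pair%@*}   # text before the @
VERSION=${pair#*@}    # text after the @
echo "$STYLE_ID $VERSION"   # blueprint 2
```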
Copy this checklist and check off items as you complete them:
Slide Deck Progress:
- [ ] Step 1: Setup & Analyze
- [ ] 1.1 Analyze content
- [ ] 1.2 Check existing
- [ ] 1.3 Fetch /api/styles (HTML mode) — see Style Service section
- [ ] Step 2: Confirmation (4-6 questions — Q2 style list comes from fetched /api/styles for HTML)
- [ ] Step 2.5: Download selected style bundle (HTML mode)
- [ ] Step 3: Generate outline
- [ ] Step 4: Review outline (conditional)
- [ ] Step 5: Generate prompts (IMAGE MODE ONLY)
- [ ] Step 6: Review prompts (IMAGE MODE ONLY, conditional)
- [ ] Step 7: Generate slides (BRANCH: html/ or image/)
- [ ] Step 8: Assemble & Validate
- [ ] Step 9: Output
1.1 Analyze Content
- Read /code/slides/{slug}/source.{ext} (commonly source.md)
- If the content lives in /code/raw/, copy from there (never modify raw/)
- If source.{ext} already exists, rename it to source-backup-YYYYMMDD-HHMMSS.{ext}

| Content | Slides |
|---|---|
| < 1000 words | 5-10 |
| 1000-3000 words | 10-18 |
| 3000-5000 words | 15-25 |
| > 5000 words | 20-30 |
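The table above can be sketched as a heuristic (the returned numbers are hypothetical rough midpoints of the ranges, not part of the protocol):

```shell
# Map word count to a suggested slide count (rough midpoints of the table).
suggest_slides() {
  if   [ "$1" -lt 1000 ]; then echo 8
  elif [ "$1" -le 3000 ]; then echo 14
  elif [ "$1" -le 5000 ]; then echo 20
  else                        echo 25
  fi
}
suggest_slides 2400   # 14
```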
Fetch /api/styles (see Style Service section above). Do this fetch now so the picked style is ready by Step 2.

1.2 Check Existing Content
test -d "/code/slides/{slug}" && echo "exists"
If exists, ask user:
header: "Existing"
question: "Existing content found. How to proceed?"
options:
- label: "Regenerate all"
description: "Backup existing, regenerate from scratch"
- label: "Regenerate images only"
description: "Keep outline, regenerate slides"
- label: "Exit"
description: "Cancel, keep existing"
Language: Use user's input language for all questions and responses.
Display summary before asking:
header: "Mode"
question: "How should slides be rendered?"
options:
- label: "HTML (Recommended)"
description: "Rich HTML with pixel-perfect text, editable elements"
- label: "Image"
description: "Each slide is a generated image — artistic styles, visual storytelling"
If HTML selected:
Present the styles returned by /api/styles (fetched in Step 1). Build the options list from jq '.styles.slide' output — one option per style, label = name (or id), description = summary. Mark the top signal-match as (Recommended).
Example (generated from live data, not hardcoded — your concrete list will differ):
header: "Style"
question: "Which style?"
options:
- label: "<name of best-match style> (Recommended)"
description: "<its summary from /api/styles>"
- label: "<name of next style>"
description: "<its summary>"
# ... one entry per style returned by the service
If the service returned zero styles or is unreachable, follow the failure-mode guidance in the Style Service section — do not invent options.
After the user picks, immediately download the selected style bundle (see Style Service section, step 4).
If Image selected:
header: "Style"
question: "Which visual style?"
options:
- label: "{recommended_preset} (Recommended)"
description: "Best match based on content analysis"
- label: "{alternative_preset}"
description: "[description]"
- label: "Custom dimensions"
description: "Choose texture, mood, typography, density separately"
Auto Style Selection (Image mode):
| Content Signals | Preset |
|---|---|
| tutorial, learn, education, guide | sketch-notes |
| hand-drawn, infographic, diagram | hand-drawn-edu |
| architecture, system, technical | blueprint |
| investor, business, corporate | corporate |
| executive, minimal, clean | minimal |
| launch, marketing, keynote | bold-editorial |
| entertainment, gaming | dark-atmospheric |
| explainer, science communication | editorial-infographic |
| gaming, retro, pixel | pixel-art |
| biology, chemistry, medical | scientific |
| history, heritage, vintage | vintage |
| lifestyle, wellness, artistic | watercolor |
| Default | blueprint |
Full preset specs: image/styles/*.md. Dimensions: image/dimensions/*.md.
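The auto-selection can be sketched as a first-match substring test (only a few rows shown; the table above, together with image/how-to.md, is authoritative):

```shell
# Map content signals to an image preset; first matching arm wins, default blueprint.
pick_preset() {
  c=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$c" in
    *tutorial*|*learn*|*education*|*guide*) echo sketch-notes ;;
    *architecture*|*system*|*technical*)    echo blueprint ;;
    *investor*|*business*|*corporate*)      echo corporate ;;
    *launch*|*marketing*|*keynote*)         echo bold-editorial ;;
    *)                                      echo blueprint ;;   # default
  esac
}
pick_preset "Investor update for the board"   # corporate
```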
If "Custom dimensions" selected → Round 2 (4 questions for texture, mood, typography, density). See image/how-to.md for the full custom dimension question templates.
header: "Audience"
question: "Who is the primary reader?"
options:
- label: "General readers (Recommended)"
- label: "Beginners/learners"
- label: "Experts/professionals"
- label: "Executives"
header: "Slides"
question: "How many slides?"
options:
- label: "{N} slides (Recommended)"
- label: "Fewer ({N-3} slides)"
- label: "More ({N+3} slides)"
header: "Outline"
question: "Review outline before generation?"
options:
- label: "Yes, review outline (Recommended)"
- label: "No, skip outline review"
header: "Prompts"
question: "Review prompts before generating images?"
options:
- label: "Yes, review prompts (Recommended)"
- label: "No, skip prompt review"
After confirmation: Store render mode, style, audience, slide count, review flags.
Save as /code/slides/{slug}/outline.md:
```markdown
# Deck Outline: {Title}
**Slug**: {slug}
**Render**: html | image
**Style**: {aesthetic or preset name}
**Slides**: N
**Audience**: {audience}
**Language**: {language}
---
## Slide 1 of N
**Type**: cover
**Headline**: {main title}
**Sub-headline**: {tagline}
**Visual**: {description of imagery}
---
## Slide 2 of N
**Type**: content
**Headline**: {slide heading}
**Points**:
- {point 1}
- {point 2}
- {point 3}
**Visual**: {description}
**Layout**: bullets | two-col | stat | quote | code
```
Image mode: also build <STYLE_INSTRUCTIONS> block from style/dimensions. See image/how-to.md for STYLE_INSTRUCTIONS format and image/outline-template.md for the full template.
After generation: If skip_outline_review → skip Step 4.
Show slide-by-slide summary table. Ask:
header: "Confirm"
question: "Ready to proceed?"
options:
- label: "Yes, proceed (Recommended)"
- label: "Edit outline first"
- label: "Regenerate outline"
HTML mode skips this step — the outline is the spec, HTML itself is the artifact.
For image mode:
- Use image/base-prompt.md for the prompt template
- Pick layouts from image/layouts.md
- Save prompts as /code/slides/{slug}/prompts/01-slide-{slug}.md, etc.

After generation: If skip_prompt_review → skip Step 6.
Show prompt list. Ask same confirm/edit/regenerate question.
IF HTML → Read html/html-slides.md. Generate <section> elements with:
- The selected style bundle as the design guideline (per html/html-slides.md)
- data-page and data-bp-id attributes
- Slide type classes: slide--title, slide--content, slide--stat, slide--quote, etc. (full list in html/html-slides.md)

IF Image → Read image/how-to.md. For each slide:
- Generate the image via nano-banana (references/nano-banana.md)
- aspectRatio: "16:9" (always)
- model: "flash" for iteration/previews, "pro" for final output
- imageSize: "1K" default, "2K" for high-res final exports
- Save the result as /code/slides/{slug}/NN.png

Both modes: Build index.html using the shared template (references/slide-template.md) and nav engine (references/css-patterns.md).
HTML mode: Sections already contain rich HTML. Run scripts/export-pages.sh to screenshot each section as NN.png.
Image mode: Wrap each image in a thin section:
```html
<section class="slide slide--image" data-page="N">
  <img data-bp-id="img-N" crossorigin="anonymous"
       src="data:image/png;base64,{base64}"
       alt="Slide N: {headline}"
       style="width:100%;height:100%;object-fit:contain;display:block;" />
</section>
```
Cleanup: If deck went from M to N slides (N < M), delete stale (N+1).png through M.png.
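The cleanup step can be sketched as follows (the demo directory is hypothetical; real decks live under /code/slides/{slug}/):

```shell
# Deck shrank from M=8 to N=5 slides: delete stale 06.png through 08.png.
DIR=/tmp/demo-cleanup
mkdir -p "$DIR"
M=8; N=5
for i in $(seq 1 "$M"); do touch "$DIR/$(printf '%02d' "$i").png"; done
for i in $(seq $((N + 1)) "$M"); do rm -f "$DIR/$(printf '%02d' "$i").png"; done
ls "$DIR" | grep -c '\.png$'   # 5
```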
Validate: Run scripts/validate-protocol.sh /code/slides/{slug}/index.html. MUST pass before you tell the user the deck is done.
Update /code/INDEX.md with deck info and last-updated date.
Summary in user's language:
Deck complete: "{Title}" — {N} slides ({render mode})
Location: /code/slides/{slug}/
Skip the outline. Go straight to surgical edit.
├── [editing ...] anchor present → User is viewing a deck, edit it
├── [selected ... bp=X page=N] → Edit that specific element
├── "add a slide about X" → Append a <section> to the open deck
├── "regenerate slide 3" → Re-render that page
└── Ambiguous + multiple decks → Ask with a numbered list (see editing.md)
Mode detection: Check index.html for slide--image class → image mode. Otherwise → HTML. If outline.md has Render: field, use that. Fallback: HTML.
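One reading of that detection order as a shell sketch (class check wins, then the outline's Render: field, then the html fallback; the demo deck is hypothetical):

```shell
# Detect render mode for a deck directory.
detect_mode() {
  if grep -q 'slide--image' "$1/index.html" 2>/dev/null; then echo image; return; fi
  r=$(sed -n 's/^\*\*Render\*\*: *//p' "$1/outline.md" 2>/dev/null | head -n1)
  if [ -n "$r" ]; then echo "$r"; else echo html; fi
}
D=/tmp/demo-mode
mkdir -p "$D"
printf '<section class="slide slide--image" data-page="1"></section>\n' > "$D/index.html"
detect_mode "$D"   # image
```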
HTML edits: by data-bp-id, preserve all attributes. See references/editing.md.
Image edits: update prompt in prompts/, regenerate via nano-banana, replace NN.png. See image/how-to.md "Slide Modification" section.
Image partial workflows: --regenerate N, --images-only. See image/how-to.md.
After ANY edit: run scripts/export-pages.sh (HTML) or replace PNG (image) → run scripts/validate-protocol.sh.
ALL responses use user's preferred language (questions, confirmations, progress, errors, summaries). Technical terms (style names, file paths, code) remain in English.
Detection priority: user's conversation language > outline language field > source content language.
Every file in this skill, with path from this SKILL.md:
references/

| Path | What |
|---|---|
| references/protocol.md | The contract: file layout, invariants, validation script |
| references/css-patterns.md | Slide engine CSS, transitions, controls, nav JS — every index.html |
| references/slide-template.md | Base HTML template for index.html |
| references/nano-banana.md | Image generation API: auth, models, parameters, saving |
| references/editing.md | Selection anchors, bp-id editing, visual magic layer |
| references/gallery-engine.md | Gallery view engine (spring physics) |
| references/deploy.md | How to deploy as standalone URL |

scripts/

| Path | What |
|---|---|
| scripts/validate-protocol.sh | Protocol enforcement — run after every generation |
| scripts/export-pages.sh | HTML → PNG screenshots (Chrome CDP) |
| scripts/build-viewer.sh | Build standalone viewer |

html/

| Path | What |
|---|---|
| html/html-slides.md | Aesthetics, fonts, CSS variables, slide types, typography, design rules, quality gates, DOM Lint Pass, blueprint attributes, images (CDN upload) |

image/

| Path | What |
|---|---|
| image/how-to.md | Image generation guide: style system, presets, auto-selection, design philosophy, prompt engineering, partial workflows, slide modification |
| image/base-prompt.md | Base prompt template for nano-banana slide generation |
| image/outline-template.md | Outline structure with STYLE_INSTRUCTIONS block |
| image/layouts.md | Layout options and selection tips for image slides |
| image/design-guidelines.md | Audience, typography, color, visual element guidelines |
| image/content-rules.md | Content and style guidelines |
| image/analysis-framework.md | Content analysis for presentations |
| image/modification-guide.md | Edit, add, delete slide workflows |
| image/config/preferences-schema.md | EXTEND.md structure (optional user preferences) |
| image/dimensions/presets.md | Preset → dimension mapping (17 presets) |
| image/dimensions/density.md | Density dimension spec |
| image/dimensions/mood.md | Mood dimension spec |
| image/dimensions/texture.md | Texture dimension spec |
| image/dimensions/typography.md | Typography dimension spec |
| image/styles/blueprint.md | Blueprint preset spec |
| image/styles/bold-editorial.md | Bold Editorial preset spec |
| image/styles/chalkboard.md | Chalkboard preset spec |
| image/styles/corporate.md | Corporate preset spec |
| image/styles/dark-atmospheric.md | Dark Atmospheric preset spec |
| image/styles/editorial-infographic.md | Editorial Infographic preset spec |
| image/styles/fantasy-animation.md | Fantasy Animation preset spec |
| image/styles/hand-drawn-edu.md | Hand-Drawn Edu preset spec |
| image/styles/intuition-machine.md | Intuition Machine preset spec |
| image/styles/minimal.md | Minimal preset spec |
| image/styles/notion.md | Notion preset spec |
| image/styles/pixel-art.md | Pixel Art preset spec |
| image/styles/scientific.md | Scientific preset spec |
| image/styles/sketch-notes.md | Sketch Notes preset spec |
| image/styles/vector-illustration.md | Vector Illustration preset spec |
| image/styles/vintage.md | Vintage preset spec |
| image/styles/watercolor.md | Watercolor preset spec |
| image/merge-to-pptx.ts | Merge slides into PowerPoint (bun script) |
| image/merge-to-pdf.ts | Merge slides into PDF (bun script) |
When the user is satisfied:
- Run rebyte deploy from /code/slides/{slug}/ → shareable URL
- Record published URLs in /code/output/