Documentation

Everything you need to know about using Chewy for AI image generation.

Installation

Homebrew (recommended)

The fastest way to get started. This installs both Chewy and the stable-diffusion.cpp inference engine with Metal GPU acceleration.

$ brew install Holy-Coders/chewy/chewy
$ chewy

From source

Requires Ruby 4.0+ and stable-diffusion.cpp on your PATH.

$ git clone https://github.com/Holy-Coders/chewy.git
$ cd chewy && bundle install
$ ruby chewy.rb

Set SD_BIN to point to your sd binary if it's not on your PATH.

Building stable-diffusion.cpp

$ git clone --recursive https://github.com/leejet/stable-diffusion.cpp
$ cd stable-diffusion.cpp
$ mkdir build && cd build
$ cmake .. -DSD_METAL=ON    # macOS with Metal GPU
$ cmake --build . --config Release
$ cp bin/sd-cli /usr/local/bin/sd

First Run

When you first launch Chewy, it checks for available models.

No models yet? Press ^d to open the model downloader. Start with "Stable Diffusion 1.5 (Q8)" for the best beginner experience — it's fast, low VRAM, and produces good results.

Providers

Chewy supports several image generation backends. Press ^y to switch between them.

Provider            Models                                            Type
Local (sd.cpp)      SD 1.x/2.x/3.5, SDXL, FLUX, Wan Video             Local
OpenAI              GPT Image 1, DALL-E 3, DALL-E 2                   API
Gemini              Imagen 3, Imagen 3 Fast, Gemini 2.0 Flash         API
HuggingFace         FLUX.1 Schnell/Dev, SDXL, SD 3.5 Large, HiDream   API
OpenAI-Compatible   Any model via custom endpoint                     API

API keys can be set via environment variables (OPENAI_API_KEY, GEMINI_API_KEY, HUGGINGFACE_API_KEY) or entered in-app. Keys are stored with chmod 600 permissions.
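
As a sketch of what "stored with chmod 600 permissions" means in practice, here is how a key file can be written so that only the owning user can read it. The path and helper name are illustrative, not Chewy's actual internals:

```ruby
require "fileutils"

# Write an API key to disk readable/writable by the owner only (chmod 600).
# Path and method name are illustrative; Chewy's internals may differ.
def save_api_key(path, key)
  FileUtils.mkdir_p(File.dirname(path))
  File.write(path, key)
  File.chmod(0o600, path)
end
```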

Generating Images

The main interface has three focus areas you can cycle through with tab:

  1. Prompt — describe what you want to generate
  2. Negative prompt — describe what you want to avoid
  3. Parameters — steps, CFG scale, dimensions, seed, sampler, scheduler, threads

Press enter while in the prompt or negative prompt field to start generating. The right panel shows a live preview (when available) and the final result.

Batch generation: Set the batch parameter to generate multiple images in sequence with different seeds.
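
One common way a batch of N images gets distinct seeds is base seed plus index; the sketch below illustrates that scheme. Chewy's actual derivation may differ, and the function name is illustrative:

```ruby
# Derive one seed per image in a batch. A negative or missing base seed
# stands in for "random", as is conventional for -1 seeds.
def batch_seeds(base_seed, batch_size)
  base_seed = rand(2**32) if base_seed.nil? || base_seed.negative?
  (0...batch_size).map { |i| base_seed + i }
end

p batch_seeds(42, 4)  # => [42, 43, 44, 45]
```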

Models

Place model files (.gguf, .safetensors, .ckpt) in ~/.config/chewy/models; Chewy scans this directory automatically.

Press ^n to open the model picker. Full filenames (with extension) are displayed. Press p in the model picker to pin frequently used models to the top.

Downloading models

Press ^d to open the model downloader. You'll see recommended starter models:

Model                              Type     Best for
Stable Diffusion 1.5 (Q8)          SD 1.5   Learning, fast generation
Stable Diffusion 3.5 Medium (Q5)   SD 3.5   Quality/speed balance
SDXL Turbo (Q4)                    SDXL     Real-time, 1-4 steps
DreamShaper v7 LCM (F16)           SD 1.5   Artistic, 4-8 steps
FLUX.1 Schnell (Q4)                FLUX     State-of-the-art quality

Select "Browse HuggingFace..." or "Browse CivitAI..." to search for models online. CivitAI is the largest community hub for Stable Diffusion models. Downloads support resume — if interrupted, just try again.

LoRAs

LoRAs (Low-Rank Adaptations) are small add-on weights that modify a base model's style or capabilities. Place LoRA .safetensors files in ~/.config/chewy/loras.

Using LoRAs

Press ^l to open the LoRA selector.

Multiple LoRAs can be active simultaneously. Weights control how strongly the LoRA affects the output (1.0 = full effect).
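
Under the hood, stable-diffusion.cpp activates LoRAs via `<lora:name:weight>` tags in the prompt. Chewy manages this for you, but the sketch below shows how several active LoRAs compose (the LoRA names are illustrative):

```ruby
# Append one <lora:name:weight> tag per active LoRA to the prompt,
# as stable-diffusion.cpp expects. Names and weights are examples.
def with_loras(prompt, loras)
  prompt + loras.map { |name, weight| "<lora:#{name}:#{weight}>" }.join
end

puts with_loras("a papercut fox", "papercut" => 0.8, "add_detail" => 0.5)
# => a papercut fox<lora:papercut:0.8><lora:add_detail:0.5>
```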

Downloading LoRAs

Press d in the LoRA panel to browse recommended LoRAs or search HuggingFace and CivitAI:

LoRA                Compatible   Use case
Detail Tweaker      SD 1.5       Enhance or reduce fine details
LCM LoRA (SD 1.5)   SD 1.5       Generate in 4-8 steps instead of 20+
LCM LoRA (SDXL)     SDXL         Fast generation for SDXL models
Pixel Art           SDXL         Pixel art style outputs
Papercut            SD 1.5       Paper cutout art style

LoRA compatibility: LoRAs must match your base model architecture. An SD 1.5 LoRA won't work with an SDXL model and vice versa. Chewy will warn you if it detects a mismatch when you try to generate.

img2img

Use an existing image as a starting point for generation.

The strength parameter (0.0-1.0) controls how much the output differs from the input. Lower values stay closer to the original; higher values allow more creative freedom.
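
A rough intuition for how strength behaves in typical img2img implementations (not a Chewy-specific guarantee): with strength s, the first (1 - s) of the denoising schedule is skipped, so roughly steps × s denoising steps actually run.

```ruby
# Approximate number of denoising steps that actually execute in img2img
# for a given strength. Illustrative model of common behavior only.
def effective_steps(steps, strength)
  (steps * strength.clamp(0.0, 1.0)).round
end

p effective_steps(20, 0.75)  # => 15
```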

Presets

Press ^p to open the preset picker. Chewy includes built-in presets optimized for common use cases.

Custom presets are saved to ~/.config/chewy/presets.yml.

Prompt History

Chewy remembers every prompt you've used across sessions. Press up / down while in the prompt field to cycle through your history.

History is loaded from your generation metadata files on disk, so even in a fresh session you'll have access to all your past prompts. New prompts from the current session are appended on top.
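
The merge described above can be sketched as follows: prompts recovered from on-disk metadata, with the current session's prompts stacked on top, newest first and duplicates collapsed. This is an illustration of the behavior, not Chewy's exact code:

```ruby
# Combine on-disk prompt history with this session's prompts.
# Session prompts are reversed so the most recent comes first.
def prompt_history(disk_prompts, session_prompts)
  (session_prompts.reverse + disk_prompts).uniq
end

p prompt_history(["a cat", "a dog"], ["a fox", "a cat"])
# => ["a cat", "a fox", "a dog"]
```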

Inpainting

Inpainting lets you selectively regenerate parts of an image while keeping others intact — perfect for changing backgrounds while preserving faces.

  1. Set an init image (^b)
  2. Tab to params, navigate to Mask
  3. Press g for auto center-preserve mask, or p to paint a custom mask
  4. Write your prompt and generate

White pixels in the mask = regenerate, black = keep. The mask painter lets you click on the image to select which areas to change.

Built-in presets: "Inpaint - Face Preserve" and "Inpaint - Background Only" auto-generate a center mask and configure optimal settings.
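
Following the white-regenerates / black-keeps convention above, an auto center-preserve mask can be sketched like this. The kept fraction and function name are illustrative, not Chewy's exact mask generator:

```ruby
# Build a mask as a 2D array of pixel values: 255 (white) = regenerate,
# 0 (black) = keep. The kept region is a centered box covering keep_frac
# of each dimension.
def center_preserve_mask(w, h, keep_frac = 0.5)
  x0 = (w * (1 - keep_frac) / 2).round
  y0 = (h * (1 - keep_frac) / 2).round
  Array.new(h) do |y|
    Array.new(w) do |x|
      x >= x0 && x < w - x0 && y >= y0 && y < h - y0 ? 0 : 255
    end
  end
end
```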

AI Prompt Enhancement

Chewy can enhance your prompts using a local or cloud LLM: click the ✨ Enhance / ✨ Generate button in the prompt box.

LLM priority: local (ollama, LM Studio, llama.cpp) → OpenAI → Gemini → rule-based fallback. For local: brew install ollama && ollama pull llama3.2:3b.

Themes

Press ^t to open the theme picker and choose from 10 built-in themes.

Themes affect the entire interface including the status bar, overlays, and CLI commands (help, list, etc.).

Custom Themes

Drop a YAML file in ~/.config/chewy/themes/ with 13 color keys and it will appear in the theme picker. Example ~/.config/chewy/themes/cyberpunk.yml:

primary: "#00FF41"
secondary: "#008F11"
accent: "#FF0090"
success: "#00FF41"
warning: "#FFD300"
error: "#FF0000"
text: "#E0E0E0"
text_dim: "#808080"
text_muted: "#505050"
surface: "#0D0D0D"
border_dim: "#333333"
border_focus: "#00FF41"
bar_text: "#000000"

Custom themes show a custom badge in the picker. All 13 keys are required. Files with missing keys or bad YAML are silently skipped.
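
The validation described above — accept a theme only if it parses as YAML and contains all 13 color keys — can be sketched as follows, with the key names taken from the example file:

```ruby
require "yaml"

# The 13 required color keys, as shown in the example theme above.
THEME_KEYS = %w[primary secondary accent success warning error text text_dim
                text_muted surface border_dim border_focus bar_text].freeze

# A theme is valid only if it is a YAML mapping containing every key.
# Bad YAML or missing keys cause it to be skipped (returns false).
def valid_theme?(yaml_text)
  data = YAML.safe_load(yaml_text)
  data.is_a?(Hash) && THEME_KEYS.all? { |k| data.key?(k) }
rescue Psych::SyntaxError
  false
end
```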

Gallery

Press ^a to open the gallery, which shows all your generated images. Navigate with arrow keys or j/k.

Keyboard Shortcuts

Global

Key         Action
tab         Cycle focus (prompt / negative / params)
shift+tab   Cycle focus backwards
enter       Generate image
^y          Switch provider
^n          Model picker
^d          Download models
^t          Theme picker
^a          Gallery
^l          LoRA selector
^p          Presets
^b          Browse for init image
^v          Paste image from clipboard
^u          Clear init image
^e          Open last image in viewer
^f          Fullscreen preview
^x          Cancel generation
^w          Clear prompt and image (start fresh)
^r          Randomize seed
F1          Help overlay (all shortcuts)
^q          Quit

Prompt field

Key    Action
up     Previous prompt from history
down   Next prompt from history

Parameter editing

Key     Action
j / k   Navigate params
enter   Edit selected param
h / l   Decrease / increase value

Status Bar

The bottom bar shows contextual information.

The right side of the bar shows context-sensitive keyboard shortcuts.

Samplers & Schedulers

Chewy supports 14 samplers and 9 schedulers for local generation. These control the denoising process and significantly affect image quality and style.

Samplers

euler, euler_a, heun, dpm2, dpm++2s_a, dpm++2m, dpm++2mv2, ipndm, ipndm_v, lcm, ddim_trailing, tcd, res_multistep, res_2s

Recommended: euler_a is a great default. For LCM models/LoRAs, use lcm. For fast generation, try dpm++2m.

Schedulers

discrete, karras, exponential, ays, gits, sgm_uniform, simple, smoothstep, kl_optimal

FLUX Models

FLUX is a state-of-the-art image generation architecture. FLUX models require companion files (text encoders and a VAE) that Chewy auto-downloads on first use.

You'll need a HuggingFace token with read access. Enter it when prompted, or set it via the HuggingFace Token overlay (^h in the model picker).

FLUX Schnell vs Dev: Schnell is faster (4-8 steps) but less flexible. Dev produces higher quality but needs more steps (20-50). Schnell doesn't work well with img2img.

Video Generation (Experimental)

Chewy supports text-to-video generation via Wan 2.1 models using sd.cpp's -M vid_gen mode.

Hardware requirements: Video generation is experimental and works best with dedicated GPU hardware. The Wan 1.3B model requires ~12GB RAM for weights alone. CPU-only generation is very slow (minutes per frame). A machine with 32GB+ RAM and a Metal/CUDA GPU is recommended.

Getting started

  1. Download a Wan model from the model browser (^d) — "Wan 2.1 T2V 1.3B (Q5)" is available in the recommended list
  2. Select the Wan model (^n) — a VIDEO badge appears in the header
  3. Companion files (t5xxl ~6.3GB, clip_vision ~1.2GB, vae ~242MB) are auto-downloaded on first generation
  4. Adjust Frames and FPS in the params panel (Wan requires frame counts of 1+4n: 5, 9, 13, 17, 21, 25, 29, 33...)
  5. Write a prompt and press enter
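
Since Wan only accepts frame counts of the form 1 + 4n, a frame-count setting has to land on 1, 5, 9, 13, and so on. The sketch below snaps an arbitrary request to the nearest valid count (Chewy's exact rounding behavior may differ):

```ruby
# Snap a requested frame count to the nearest value of the form 1 + 4n.
def snap_wan_frames(requested)
  n = ((requested - 1) / 4.0).round
  1 + 4 * [n, 0].max
end

p snap_wan_frames(16)  # => 17
```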

Video player

After generation, an in-terminal video player opens automatically:

Key     Action
space   Play / pause
/       Step frame by frame
[ / ]   Decrease / increase playback speed
0 / $   Jump to first / last frame
o       Open MP4 in external player (requires ffmpeg)
q       Close player

Video presets

Press ^p and look for the "Video" presets: Quick Preview (9 frames), Standard (17 frames), High Quality (33 frames), and Img2Vid.

Output format

Videos are saved as PNG frame sequences in outputs/<timestamp>_frames/. If ffmpeg is installed, Chewy also assembles an MP4 automatically. Videos appear in the gallery (^a) with a [VIDEO] tag.
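
Assembling a PNG frame sequence into an MP4 is a standard ffmpeg job; the sketch below builds such a command line. The frame-filename pattern and codec flags here are illustrative, not Chewy's exact invocation:

```ruby
# Build an ffmpeg command (as an argument array) that turns a directory of
# numbered PNG frames into an MP4. Pattern "frame_%04d.png" is an assumption.
def ffmpeg_cmd(frames_dir, fps, out_path)
  ["ffmpeg", "-y", "-framerate", fps.to_s,
   "-i", File.join(frames_dir, "frame_%04d.png"),
   "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path]
end
```

Passing the array form to a process spawner (e.g. `system(*ffmpeg_cmd(...))`) avoids shell-quoting issues with paths.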

ControlNet

ControlNet allows you to guide generation using a reference image's structure (edges, depth, pose). It's supported for SD 1.5, SD 2.x, and SDXL models (not FLUX).

To use ControlNet, you need both a ControlNet image and a ControlNet model file. These can be selected via the file picker.

Configuration

Config is stored at ~/.config/chewy/config.yml. Most settings can be changed in-app, but you can also set these environment variables:

Variable              Default                  Description
SD_BIN                sd (on PATH)             Path to the stable-diffusion.cpp binary
CHEWY_MODELS_DIR      ~/.config/chewy/models   Directory containing model files
CHEWY_OUTPUT_DIR      ./outputs                Directory for generated images
CHEWY_LORA_DIR        ~/.config/chewy/loras    Directory containing LoRA files
OPENAI_API_KEY        (none)                   OpenAI API key
GEMINI_API_KEY        (none)                   Google Gemini API key
HUGGINGFACE_API_KEY   (none)                   HuggingFace API key
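
Resolving environment-variable overrides with fallbacks to defaults can be sketched as below. The defaults match the table; the function name is illustrative and Chewy's actual config loading may differ:

```ruby
# Resolve each path setting from its environment variable, falling back
# to the documented default when the variable is unset.
def chewy_paths
  {
    sd_bin:     ENV.fetch("SD_BIN", "sd"),
    models_dir: ENV.fetch("CHEWY_MODELS_DIR", File.expand_path("~/.config/chewy/models")),
    output_dir: ENV.fetch("CHEWY_OUTPUT_DIR", "./outputs"),
    lora_dir:   ENV.fetch("CHEWY_LORA_DIR", File.expand_path("~/.config/chewy/loras"))
  }
end
```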

CLI Commands

Chewy also provides CLI commands for managing generated images outside the TUI:

$ chewy              # Launch the TUI
$ chewy list          # List all generated images with metadata
$ chewy delete FILE   # Delete a generated image (with confirmation)
$ chewy help          # Show help

CLI output uses your configured theme colors.