Documentation
Everything you need to know about using Chewy for AI image generation.
Installation
Homebrew (recommended)
The fastest way to get started. This installs both Chewy and the stable-diffusion.cpp inference engine with Metal GPU acceleration.
$ brew install Holy-Coders/chewy/chewy
$ chewy
From source
Requires Ruby 4.0+ and stable-diffusion.cpp on your PATH.
$ git clone https://github.com/Holy-Coders/chewy.git
$ cd chewy && bundle install
$ ruby chewy.rb
Set SD_BIN to point to your sd binary if it's not on your PATH.
Building stable-diffusion.cpp
$ git clone --recursive https://github.com/leejet/stable-diffusion.cpp
$ cd stable-diffusion.cpp
$ mkdir build && cd build
$ cmake .. -DSD_METAL=ON # macOS with Metal GPU
$ cmake --build . --config Release
$ cp bin/sd-cli /usr/local/bin/sd
First Run
When you first launch Chewy, it will:
- Create a config file at ~/.config/chewy/config.yml
- Scan for models in ~/.config/chewy/models (and DiffusionBee/Draw Things directories)
- Show an animated splash screen
- Check for updates
No models yet? Press ^d to open the model downloader. Start with "Stable Diffusion 1.5 (Q8)" for the best beginner experience — it's fast, low VRAM, and produces good results.
Providers
Chewy supports multiple image generation backends. Press ^y to switch between them.
| Provider | Models | Type |
|---|---|---|
| Local (sd.cpp) | SD 1.x/2.x/3.5, SDXL, FLUX, Wan Video | Local |
| OpenAI | GPT Image 1, DALL-E 3, DALL-E 2 | API |
| Gemini | Imagen 3, Imagen 3 Fast, Gemini 2.0 Flash | API |
| HuggingFace | FLUX.1 Schnell/Dev, SDXL, SD 3.5 Large, HiDream | API |
| OpenAI-Compatible | Any model via custom endpoint | API |
API keys can be set via environment variables (OPENAI_API_KEY, GEMINI_API_KEY, HUGGINGFACE_API_KEY) or entered in-app. Keys are stored with chmod 600 permissions.
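The 600-permission rule above can be sketched in a few lines of Ruby. This is an illustration, not Chewy's actual implementation; the `~/.config/chewy/<name>.key` path is an assumption made for the example.

```ruby
require "fileutils"

# Sketch: persist an API key readable only by the owner (mode 0600).
# The directory layout and filename are assumptions for illustration.
def save_api_key(name, key, dir: File.join(Dir.home, ".config", "chewy"))
  FileUtils.mkdir_p(dir)
  path = File.join(dir, "#{name}.key")
  File.write(path, key)
  File.chmod(0o600, path)  # rw for the owner, nothing for group/other
  path
end
```

Keys set via environment variables are never written to disk; only keys entered in-app need this treatment.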
Generating Images
The main interface has three focus areas you can cycle through with tab:
- Prompt — describe what you want to generate
- Negative prompt — describe what you want to avoid
- Parameters — steps, CFG scale, dimensions, seed, sampler, scheduler, threads
Press enter while in the prompt or negative prompt field to start generating. The right panel shows a live preview (when available) and the final result.
Batch generation: Set the batch parameter to generate multiple images in sequence with different seeds.
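The seed-per-image behavior can be sketched as follows. Incrementing from a base seed is a common convention; Chewy's actual seed strategy may differ, and treating a negative seed as "random" is an assumption.

```ruby
# Sketch: derive one seed per batch image from a base seed.
# A negative/absent base seed is treated as "pick a random one".
def batch_seeds(base_seed, batch)
  base_seed = rand(2**31) if base_seed.nil? || base_seed < 0
  Array.new(batch) { |i| base_seed + i }
end
```

Reusing a seed from a batch reproduces that one image at full quality.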
Models
Place model files (.gguf, .safetensors, .ckpt) in ~/.config/chewy/models. Chewy also automatically scans:
- DiffusionBee — ~/.diffusionbee
- Draw Things — ~/Library/Containers/com.liuliu.draw-things/Data/Documents/Models
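A minimal sketch of that scan, assuming a recursive glob over the documented extensions (the traversal order and depth are assumptions, not Chewy's actual logic):

```ruby
# Sketch: collect model files from a list of directories.
# Extensions come from the docs; recursive descent is an assumption.
MODEL_EXTS = %w[.gguf .safetensors .ckpt].freeze

def scan_models(dirs)
  dirs.flat_map do |dir|
    next [] unless Dir.exist?(dir)
    Dir.glob(File.join(dir, "**", "*"))
       .select { |f| MODEL_EXTS.include?(File.extname(f).downcase) }
  end.sort
end

chewy_dirs = [
  File.join(Dir.home, ".config", "chewy", "models"),
  File.join(Dir.home, ".diffusionbee"),
  File.join(Dir.home, "Library/Containers/com.liuliu.draw-things/Data/Documents/Models"),
]
```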
Press ^n to open the model picker. Full filenames (with extension) are displayed. Press p in the model picker to pin frequently used models to the top.
Downloading models
Press ^d to open the model downloader. You'll see recommended starter models:
| Model | Type | Best for |
|---|---|---|
| Stable Diffusion 1.5 (Q8) | SD 1.5 | Learning, fast generation |
| Stable Diffusion 3.5 Medium (Q5) | SD 3.5 | Quality/speed balance |
| SDXL Turbo (Q4) | SDXL | Real-time, 1-4 steps |
| DreamShaper v7 LCM (F16) | SD 1.5 | Artistic, 4-8 steps |
| FLUX.1 Schnell (Q4) | FLUX | State-of-the-art quality |
Select "Browse HuggingFace..." or "Browse CivitAI..." to search for models online. CivitAI is the largest community hub for Stable Diffusion models. Downloads support resume — if interrupted, just try again.
LoRAs
LoRAs (Low-Rank Adaptations) are small add-on weights that modify a base model's style or capabilities. Place LoRA .safetensors files in ~/.config/chewy/loras.
Using LoRAs
Press ^l to open the LoRA selector:
- space/enter — toggle a LoRA on/off
- +/- — adjust weight (0.0 to 2.0)
- w — type an exact weight value
- d — open the LoRA downloader
Multiple LoRAs can be active simultaneously. Weights control how strongly the LoRA affects the output (1.0 = full effect).
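For local generation, stable-diffusion.cpp applies LoRAs via prompt tags of the form `<lora:name:weight>`. Whether Chewy builds its prompts exactly this way is an assumption; the sketch below shows the convention:

```ruby
# Sketch: append one <lora:name:weight> tag per enabled LoRA.
# The hash shape ({name:, weight:, enabled:}) is invented for illustration.
def apply_loras(prompt, loras)
  tags = loras.select { |l| l[:enabled] }
              .map { |l| "<lora:#{l[:name]}:#{format('%g', l[:weight])}>" }
  ([prompt] + tags).join(" ")
end
```

A weight of 0 effectively disables a LoRA; weights above 1.0 exaggerate its effect and can degrade output.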
Downloading LoRAs
Press d in the LoRA panel to browse recommended LoRAs or search HuggingFace and CivitAI:
| LoRA | Compatible | Use case |
|---|---|---|
| Detail Tweaker | SD 1.5 | Enhance or reduce fine details |
| LCM LoRA (SD 1.5) | SD 1.5 | Generate in 4-8 steps instead of 20+ |
| LCM LoRA (SDXL) | SDXL | Fast generation for SDXL models |
| Pixel Art | SDXL | Pixel art style outputs |
| Papercut | SD 1.5 | Paper cutout art style |
LoRA compatibility: LoRAs must match your base model architecture. An SD 1.5 LoRA won't work with an SDXL model and vice versa. Chewy will warn you if it detects a mismatch when you try to generate.
img2img
Use an existing image as a starting point for generation:
- ^b — browse your filesystem for an image
- ^v — paste an image from your clipboard
- ^u — clear the init image (return to txt2img)
The strength parameter (0.0-1.0) controls how much the output differs from the input. Lower values stay closer to the original; higher values allow more creative freedom.
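In the common Stable Diffusion img2img convention, strength decides how far along the denoising schedule the init image is injected, so only a fraction of the configured steps actually run. sd.cpp's exact mapping may differ; this is a sketch of the usual rule:

```ruby
# Sketch of the common img2img convention: at strength 0.5 with 20 steps,
# only the last ~10 denoising steps run on top of the init image.
def effective_steps(steps, strength)
  raise ArgumentError, "strength must be 0.0..1.0" unless (0.0..1.0).cover?(strength)
  (steps * strength).round
end
```

This is why low-strength runs finish faster: most of the schedule is skipped.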
Presets
Press ^p to open the preset picker. Chewy includes built-in presets optimized for common use cases:
- fast — fewer steps, quick iteration
- quality — more steps, better results
- portrait — optimized for portraits
- flux-fast — FLUX-specific fast settings
- flux-quality — FLUX-specific quality settings
In the preset panel:
- enter — load the selected preset
- s — save current settings as a new preset
- d — delete a custom preset
Custom presets are saved to ~/.config/chewy/presets.yml.
Prompt History
Chewy remembers every prompt you've used across sessions. Press up / down while in the prompt field to cycle through your history.
History is loaded from your generation metadata files on disk, so even in a fresh session you'll have access to all your past prompts. New prompts from the current session are appended on top.
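A sketch of that rebuild, assuming one JSON sidecar per generated image with a top-level "prompt" field (the sidecar format is an assumption; Chewy's actual metadata layout may differ):

```ruby
require "json"

# Sketch: rebuild prompt history from per-image metadata sidecars.
# Timestamped filenames mean a plain sort yields oldest-first order.
def load_prompt_history(output_dir)
  Dir.glob(File.join(output_dir, "*.json"))
     .sort
     .map { |f| JSON.parse(File.read(f))["prompt"] rescue nil }
     .compact
     .uniq
end
```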
Inpainting
Inpainting lets you selectively regenerate parts of an image while keeping others intact — perfect for changing backgrounds while preserving faces.
- Set an init image (^b)
- Tab to params, navigate to Mask
- Press g for auto center-preserve mask, or p to paint a custom mask
- Write your prompt and generate
White pixels in the mask = regenerate, black = keep. The mask painter lets you click on the image to select which areas to change.
Built-in presets: "Inpaint - Face Preserve" and "Inpaint - Background Only" auto-generate a center mask and configure optimal settings.
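The white-regenerate/black-keep convention can be sketched by building a center-preserve mask by hand. The PGM format and the 50% center-box size are assumptions chosen to keep the example dependency-free; the actual mask format Chewy feeds to sd.cpp may differ.

```ruby
# Sketch: a center-preserve mask as binary PGM. White (255) border pixels
# are regenerated; the black (0) center box is kept (e.g. a face).
def center_preserve_mask(w, h, keep: 0.5, path: "mask.pgm")
  x0, x1 = (w * (1 - keep) / 2).to_i, (w * (1 + keep) / 2).to_i
  y0, y1 = (h * (1 - keep) / 2).to_i, (h * (1 + keep) / 2).to_i
  pixels = (0...h).flat_map do |y|
    (0...w).map { |x| (x0...x1).cover?(x) && (y0...y1).cover?(y) ? 0 : 255 }
  end
  File.binwrite(path, "P5\n#{w} #{h}\n255\n" + pixels.pack("C*"))
  path
end
```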
AI Prompt Enhancement
Chewy can enhance your prompts using a local or cloud LLM. From the prompt field:
- alt+e — Enhance your prompt (adds detail, lighting, quality tags)
- alt+n — Auto-generate a negative prompt
- alt+r — Generate a random creative prompt from scratch
Or click the ✨ Enhance / ✨ Generate button in the prompt box.
LLM priority: local (ollama, LM Studio, llama.cpp) → OpenAI → Gemini → rule-based fallback. For local: brew install ollama && ollama pull llama3.2:3b.
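That priority chain is a straightforward fallback cascade. A minimal sketch, where the backend callables are stand-ins for illustration (Chewy's internal enhancer API is not shown here):

```ruby
# Sketch: try each enhancer in priority order; on failure or a nil result,
# fall through to the next, ending at a rule-based fallback.
def enhance(prompt, backends)
  backends.each_value do |fn|
    result = fn.call(prompt)
    return result if result
  rescue StandardError
    next  # backend unreachable (e.g. ollama not running): try the next one
  end
  "#{prompt}, highly detailed, dramatic lighting"  # rule-based fallback
end
```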
Themes
Press ^t to open the theme picker. 10 built-in themes:
- Midnight (default), Dracula, Catppuccin, Tokyo Night, Gruvbox
- Nord, Rose Pine, Solarized, Horizon, Light
Themes affect the entire interface including the status bar, overlays, and CLI commands (help, list, etc.).
Custom Themes
Drop a YAML file in ~/.config/chewy/themes/ with 13 color keys and it will appear in the theme picker. Example ~/.config/chewy/themes/cyberpunk.yml:
primary: "#00FF41"
secondary: "#008F11"
accent: "#FF0090"
success: "#00FF41"
warning: "#FFD300"
error: "#FF0000"
text: "#E0E0E0"
text_dim: "#808080"
text_muted: "#505050"
surface: "#0D0D0D"
border_dim: "#333333"
border_focus: "#00FF41"
bar_text: "#000000"
Custom themes show a custom badge in the picker. All 13 keys are required. Files with missing keys or bad YAML are silently skipped.
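The documented rule (all 13 keys present, bad files silently skipped) can be sketched like this; the key list is taken from the example file, and the exact validation Chewy performs beyond key presence is not specified here:

```ruby
require "yaml"

# Sketch: a theme loads only if it parses as YAML and supplies all 13 keys;
# anything else returns nil and is skipped without an error message.
THEME_KEYS = %w[
  primary secondary accent success warning error text text_dim text_muted
  surface border_dim border_focus bar_text
].freeze

def load_theme(path)
  data = YAML.safe_load(File.read(path))
  return nil unless data.is_a?(Hash) && (THEME_KEYS - data.keys).empty?
  data
rescue Psych::SyntaxError
  nil  # malformed YAML: silently skipped
end
```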
Gallery
Press ^a to open the gallery, which shows all your generated images. Navigate with arrow keys or j/k.
- enter/space — fullscreen preview
- ^e — open in your system image viewer
- p — load the prompt and settings from this image
- delete/backspace — delete the image
Keyboard Shortcuts
Global
| Key | Action |
|---|---|
| tab | Cycle focus (prompt / negative / params) |
| shift+tab | Cycle focus backwards |
| enter | Generate image |
| ^y | Switch provider |
| ^n | Model picker |
| ^d | Download models |
| ^t | Theme picker |
| ^a | Gallery |
| ^l | LoRA selector |
| ^p | Presets |
| ^b | Browse for init image |
| ^v | Paste image from clipboard |
| ^u | Clear init image |
| ^e | Open last image in viewer |
| ^f | Fullscreen preview |
| ^x | Cancel generation |
| ^w | Clear prompt and image (start fresh) |
| ^r | Randomize seed |
| F1 | Help overlay (all shortcuts) |
| ^q | Quit |
Prompt field
| Key | Action |
|---|---|
| up | Previous prompt from history |
| down | Next prompt from history |
Parameter editing
| Key | Action |
|---|---|
| j / k | Navigate params |
| enter | Edit selected param |
| h / l | Decrease / increase value |
Status Bar
The bottom bar shows contextual information:
- After generation: output filename, generation time, seed
- During generation: progress bar, step count, ETA, speed
- Toast messages: confirmations (theme changed, model downloaded, etc.) that auto-dismiss after 3 seconds
- Errors: red background with error description, auto-dismiss after 5 seconds
The right side of the bar shows context-sensitive keyboard shortcuts.
Samplers & Schedulers
Chewy supports 14 samplers and 9 schedulers for local generation. These control the denoising process and significantly affect image quality and style.
Samplers
euler, euler_a, heun, dpm2, dpm++2s_a, dpm++2m, dpm++2mv2, ipndm, ipndm_v, lcm, ddim_trailing, tcd, res_multistep, res_2s
Recommended: euler_a is a great default. For LCM models/LoRAs, use lcm. For fast generation, try dpm++2m.
Schedulers
discrete, karras, exponential, ays, gits, sgm_uniform, simple, smoothstep, kl_optimal
FLUX Models
FLUX is a state-of-the-art image generation architecture. FLUX models require companion files that Chewy will auto-download on first use:
- clip_l.safetensors — CLIP text encoder
- t5xxl_fp16.safetensors — T5-XXL text encoder
- ae.safetensors — VAE decoder
You'll need a HuggingFace token with read access. Enter it when prompted, or set it via the HuggingFace Token overlay (^h in the model picker).
FLUX Schnell vs Dev: Schnell is faster (4-8 steps) but less flexible. Dev produces higher quality but needs more steps (20-50). Schnell doesn't work well with img2img.
Video Generation (Experimental)
Chewy supports text-to-video generation via Wan 2.1 models using sd.cpp's -M vid_gen mode.
Hardware requirements: Video generation is experimental and works best with dedicated GPU hardware. The Wan 1.3B model requires ~12GB RAM for weights alone. CPU-only generation is very slow (minutes per frame). A machine with 32GB+ RAM and a Metal/CUDA GPU is recommended.
Getting started
- Download a Wan model from the model browser (^d) — "Wan 2.1 T2V 1.3B (Q5)" is available in the recommended list
- Select the Wan model (^n) — a VIDEO badge appears in the header
- Companion files (t5xxl ~6.3GB, clip_vision ~1.2GB, vae ~242MB) are auto-downloaded on first generation
- Adjust Frames and FPS in the params panel (Wan requires frame counts of 1+4n: 5, 9, 13, 17, 21, 25, 29, 33...)
- Write a prompt and press enter
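The 1+4n frame-count rule can be sketched as a snap-to-valid helper. Rounding a requested count to the nearest valid value is an assumption; Chewy may instead reject invalid counts outright.

```ruby
# Sketch: Wan accepts frame counts of the form 1 + 4n (5, 9, 13, 17, ...).
# Snap any requested count to the nearest valid value.
def snap_frames(n)
  n = 1 if n < 1
  1 + 4 * ((n - 1) / 4.0).round
end
```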
Video player
After generation, an in-terminal video player opens automatically:
| Key | Action |
|---|---|
| space | Play / pause |
| ← / → | Step frame by frame |
| [ / ] | Decrease / increase playback speed |
| 0 / $ | Jump to first / last frame |
| o | Open MP4 in external player (requires ffmpeg) |
| q | Close player |
Video presets
Press ^p and look for the "Video" presets: Quick Preview (9 frames), Standard (17 frames), High Quality (33 frames), and Img2Vid.
Output format
Videos are saved as PNG frame sequences in outputs/<timestamp>_frames/. If ffmpeg is installed, Chewy also assembles an MP4 automatically. Videos appear in the gallery (^a) with a [VIDEO] tag.
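Assembling the frame sequence yourself looks roughly like this; the glob pattern, frame rate, and encoder flags are assumptions for illustration, not the exact command Chewy runs:

```ruby
# Sketch: build an ffmpeg command that turns a PNG frame sequence into an
# MP4. yuv420p keeps the file playable in most players.
def mp4_command(frames_dir, fps: 16, out: "output.mp4")
  ["ffmpeg", "-y",
   "-framerate", fps.to_s,
   "-pattern_type", "glob", "-i", File.join(frames_dir, "*.png"),
   "-c:v", "libx264", "-pix_fmt", "yuv420p",
   out]
end
```

Run it with `system(*mp4_command("outputs/20240101_120000_frames"))`.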
ControlNet
ControlNet allows you to guide generation using a reference image's structure (edges, depth, pose). It's supported for SD 1.5, SD 2.x, and SDXL models (not FLUX).
To use ControlNet, you need both a ControlNet image and a ControlNet model file. These can be selected via the file picker.
Configuration
Config is stored at ~/.config/chewy/config.yml. Most settings can be changed in-app, but you can also set these environment variables:
| Variable | Default | Description |
|---|---|---|
| SD_BIN | sd (on PATH) | Path to the stable-diffusion.cpp binary |
| CHEWY_MODELS_DIR | ~/.config/chewy/models | Directory containing model files |
| CHEWY_OUTPUT_DIR | ./outputs | Directory for generated images |
| CHEWY_LORA_DIR | ~/.config/chewy/loras | Directory containing LoRA files |
| OPENAI_API_KEY | — | OpenAI API key |
| GEMINI_API_KEY | — | Google Gemini API key |
| HUGGINGFACE_API_KEY | — | HuggingFace API key |
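Resolving those variables with their defaults can be sketched as follows; how Chewy actually merges environment variables with config.yml is not specified here, so treat this as an illustration of the fallback order only:

```ruby
# Sketch: environment variables win, otherwise the documented defaults apply.
def chewy_paths(env = ENV)
  home_cfg = File.join(Dir.home, ".config", "chewy")
  {
    sd_bin:     env.fetch("SD_BIN", "sd"),
    models_dir: env.fetch("CHEWY_MODELS_DIR", File.join(home_cfg, "models")),
    output_dir: env.fetch("CHEWY_OUTPUT_DIR", "./outputs"),
    lora_dir:   env.fetch("CHEWY_LORA_DIR", File.join(home_cfg, "loras")),
  }
end
```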
CLI Commands
Chewy also provides CLI commands for managing generated images outside the TUI:
$ chewy # Launch the TUI
$ chewy list # List all generated images with metadata
$ chewy delete FILE # Delete a generated image (with confirmation)
$ chewy help # Show help
CLI output uses your configured theme colors.