TL;DR
Text prompt → Stable Diffusion → Pretty picture
[Screenshot: CLI prompt input]

[Screenshot: generated output, 512×512]
What We (✨Claude✨) Built — an honest confession
The Real Reason
This project exists because I needed something to test the /add-work and /log-work slash commands I built for this site.
I spent way more time perfecting the work tracking system than actually generating art. Turns out I'm not good at art — even when the AI does most of the work.
Time ratio: ~10% writing the CLI, ~90% testing slash commands with it as the guinea pig
But It Actually Works
The CLI does run Stable Diffusion 2.1 locally on Apple Silicon. No cloud APIs, no subscriptions — just your M-series GPU doing the heavy lifting via Metal Performance Shaders.
It's nothing fancy: pass a prompt, wait ~30 seconds, get a 512×512 image. But there's something satisfying about running AI models locally instead of paying per generation.
Stack: Python, PyTorch, Hugging Face Diffusers; ~4 GB of unified memory on an M3
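For the curious, the core loop is tiny. Here's a minimal sketch of what such a CLI looks like with Diffusers on Apple Silicon — the model id, flag names, and helper functions are my illustrative choices, not the project's actual code:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI surface: a positional prompt and an output path."""
    parser = argparse.ArgumentParser(
        description="Text prompt -> Stable Diffusion 2.1 -> 512x512 PNG"
    )
    parser.add_argument("prompt", help="text prompt to render")
    parser.add_argument("--out", default="output.png", help="output image path")
    return parser


def generate(prompt: str, out_path: str) -> None:
    # Heavy imports are deferred so `--help` stays instant.
    import torch
    from diffusers import StableDiffusionPipeline

    # MPS is PyTorch's Metal backend for Apple Silicon; fall back to CPU elsewhere.
    device = "mps" if torch.backends.mps.is_available() else "cpu"
    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base",  # assumed model id (512x512 base)
        torch_dtype=torch.float16 if device == "mps" else torch.float32,
    ).to(device)

    # One denoising run: roughly 30 s on an M-series GPU for a 512x512 image.
    image = pipe(prompt, height=512, width=512).images[0]
    image.save(out_path)


if __name__ == "__main__":
    args = build_parser().parse_args()
    generate(args.prompt, args.out)
```

Usage would be something like `python generate.py "a watercolor fox" --out fox.png`. The lazy imports inside `generate` are a small quality-of-life choice: PyTorch takes a few seconds to load, and there's no reason to pay that just to print the help text.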