A CLI agent that writes, validates, and self-corrects scripts using local LLMs. There when you need it. Runs on ~4–8 GB of RAM.
Describe the script you need and scripy handles the rest: generation, validation, and self-correction all happen before it asks to write anything to disk.
You describe what the script should do. The local model writes it based on your prompt, language, and any existing file you pass with --input.
scripy runs a syntax check (py_compile, bash -n) and optionally executes the script in a sandboxed subprocess with a configurable timeout.
If validation fails, the error is fed back to the model for up to 3 iterations. Each revision is diffed and previewed live in TUI mode.
Before any side-effectful action - running a script or writing it to disk - scripy stops and asks for confirmation. Nothing executes or touches disk without your approval; pass -y to skip the prompts for non-interactive use.
Runs entirely on your machine via Ollama or LM Studio. Optional cloud integration via OpenAI is available too.
Multi-turn self-correction loop feeds syntax errors and runtime failures back to the model to improve the script iteratively.
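The loop amounts to: generate, validate, and on failure re-prompt with the error attached. A minimal sketch, where `ask_model` and `validate` are hypothetical callables standing in for scripy's internals (the real loop also feeds back runtime failures and diffs each revision):

```python
MAX_ITERATIONS = 3  # matches the retry cap described above

def generate_with_corrections(prompt: str, ask_model, validate) -> str:
    """Iteratively ask the model for a script until validation passes.

    ask_model(prompt) -> script text; validate(script) -> error string or None.
    """
    script = ask_model(prompt)
    for _ in range(MAX_ITERATIONS):
        error = validate(script)
        if error is None:
            return script
        # Feed the failure back so the model can revise its own output.
        script = ask_model(
            f"{prompt}\n\nThe previous script failed:\n{error}\nPlease fix it."
        )
    raise RuntimeError("script still failing after retries")
```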
Syntax is checked before execution, and sandbox runs happen in a subprocess with a configurable timeout, so you can verify behavior before running the script on your own system.
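The sandboxed run is essentially a child process with a kill timer. A sketch assuming the script is Python (scripy's actual isolation may differ):

```python
import subprocess
import sys

def sandbox_run(path: str, timeout: float = 10.0) -> tuple[int, str]:
    """Execute a script in a subprocess, killing it after `timeout` seconds."""
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.returncode, result.stdout + result.stderr
    except subprocess.TimeoutExpired:
        # The child is killed; report the timeout as the failure.
        return -1, f"timed out after {timeout}s"
```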
Launch with --tui for a split-pane interface: agent log on the left, live syntax-highlighted preview on the right with diff on revision.
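A revision diff like the one shown in the preview pane can be produced with the standard library; a minimal difflib sketch (not necessarily what scripy uses internally):

```python
import difflib

def revision_diff(old: str, new: str) -> str:
    """Unified diff between two script revisions, one hunk per change."""
    return "".join(
        difflib.unified_diff(
            old.splitlines(keepends=True),
            new.splitlines(keepends=True),
            fromfile="revision 1", tofile="revision 2",
        )
    )
```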
Generate scripts in any language. Use --lang to specify or pick interactively in TUI mode.
Pass an existing script with --input to modify, extend, or fix it while keeping the model grounded in the original.
Designed for ~4–8 GB RAM. Tested with Ollama and LM Studio. Scales up to larger models with no config changes.
| Model | Size | Tool calling | Code quality | Notes |
|---|---|---|---|---|
| llama3.1:8b | ~4.7 GB | native | good | Recommended |
| qwen2.5-coder:7b | ~4.4 GB | inline | excellent | Best raw code quality |
| deepseek-coder:6.7b | ~4.0 GB | inline | excellent | Strong Python support |
| llama3.2:3b | ~2.0 GB | native | fair | Ultra-low resource fallback |
Any OpenAI-compatible endpoint works. Set base_url in ~/.config/scripy/config.toml.
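A minimal config sketch; only base_url is documented above, and the endpoint shown assumes a local Ollama server (which serves an OpenAI-compatible API on port 11434 by default):

```toml
# ~/.config/scripy/config.toml
# Point scripy at any OpenAI-compatible endpoint.
base_url = "http://localhost:11434/v1"
```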
Just prompt from the command line. scripy figures out the rest.
Install from PyPI. Requires Python 3.11+ and Ollama or LM Studio.