Generate scripts
locally. Instantly.

A CLI agent that writes, validates, and self-corrects scripts using local LLMs. There when you need it. Runs on ~4–8 GB of RAM.

bash - 80×24
$ scripy -p "rename all my jpegs by date taken"
 
scripy v0.1.0 - qwen2.5-coder:7b
▸ generating ⚬⚬⚬⚬
 
syntax valid
? run script to validate? [y/n/e/v/a] › y
running sandbox...
validation passed
 
? write to disk as rename_jpegs.py? [y/n/v] › y
 
wrote rename_jpegs.py 2.1KB 4.3s
$ pip install scripy-cli

The agentic loop

Describe the script you need; scripy handles the rest: generation, validation, and self-correction, all before asking to write anything to disk.

01

Generate

You describe what the script should do. The local model writes it based on your prompt, language, and any existing file you pass with --input.

02

Validate

scripy runs a syntax check (py_compile, bash -n) and optionally executes the script in a sandboxed subprocess with a configurable timeout.
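The syntax stage can be approximated in a few lines. A minimal sketch using the same `py_compile` and `bash -n` checks named above (`check_syntax` is a hypothetical helper, not scripy's actual API):

```python
import py_compile
import subprocess
import tempfile

def check_syntax(source: str, lang: str) -> bool:
    """Parse-check a script without executing it."""
    suffix = ".py" if lang == "python" else ".sh"
    with tempfile.NamedTemporaryFile("w", suffix=suffix, delete=False) as f:
        f.write(source)
        path = f.name
    if lang == "python":
        try:
            # Compiles to bytecode; raises on syntax errors, runs nothing.
            py_compile.compile(path, doraise=True)
            return True
        except py_compile.PyCompileError:
            return False
    # bash -n reads and parses the script but executes no commands.
    return subprocess.run(["bash", "-n", path], capture_output=True).returncode == 0
```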

03

Self-correct

If validation fails, the error is fed back to the model for up to 3 iterations. Each revision is diffed and previewed live in TUI mode.

04

Confirm & write

Before any side-effectful action (running or writing), scripy stops and asks for confirmation. Use -y to skip the gates in non-interactive use.
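Taken together, steps 01–04 amount to a small retry loop. A minimal sketch, where `generate` and `validate` are callables standing in for the model and the sandbox (none of these names are scripy's real API):

```python
def agentic_loop(prompt, generate, validate, max_retries=3):
    """Generate a script, validate it, and feed failures back to the model."""
    script = generate(prompt)
    for _ in range(max_retries):
        ok, error = validate(script)
        if ok:
            return script  # hand off to the confirm-and-write gate
        # Self-correct: give the model the failing script and its error.
        script = generate(f"{prompt}\n\nThis attempt failed:\n{script}\n{error}\nFix it.")
    raise RuntimeError("validation still failing after max retries")
```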

You stay in control

scripy never runs or writes anything without asking. Every side-effectful action is gated by user input.

Run gate - before sandboxed execution
? run script to validate? [y/n/e/v/a] ›
y run it · n skip · e open in $EDITOR · v preview script · a always yes
Write gate - before writing to disk
? write to disk as dedup.py? [y/n/v] ›
y write file · n skip write, print to stdout · v preview file

Use -y / --yes to bypass all gates non-interactively.

Everything you need,
nothing you don't

⚬ local-first

No cloud needed

Runs entirely on your machine via Ollama or LM Studio. Optional cloud integration via OpenAI is available too.

▸ agentic loop

Generate → validate → correct

Multi-turn self-correction loop feeds syntax errors and runtime failures back to the model to improve the script iteratively.

✓ sandboxed validation

Safe by default

Syntax is checked before execution, and sandbox runs happen in a subprocess with a configurable timeout, so nothing touches your system until you choose to run it.

~ TUI mode

Full interactive TUI

Launch with --tui for a split-pane interface: agent log on the left, live syntax-highlighted preview on the right with diff on revision.

? multi-language

Python, Bash, JS & more

Generate scripts in any language. Use --lang to specify or pick interactively in TUI mode.

▸ modify existing scripts

Refine scripts you already have

Pass an existing script with --input to modify, extend, or fix it while keeping the model grounded in the original.

Runs on small hardware

Designed for ~4–8 GB RAM. Tested with Ollama and LM Studio. Scales up to larger models with no config changes.

Model               | Size    | Tool calling | Code quality | Notes
llama3.1:8b         | ~4.7 GB | native       | good         | Recommended
qwen2.5-coder:7b    | ~4.4 GB | inline       | excellent    | Best raw code quality
deepseek-coder:6.7b | ~4.0 GB | inline       | excellent    | Strong Python support
llama3.2:3b         | ~2.0 GB | native       | fair         | Ultra-low-resource fallback

Any OpenAI-compatible endpoint works. Set base_url in ~/.config/scripy/config.toml.
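For example, a config along these lines could point scripy at a local Ollama server (only base_url is documented above; the value shown is Ollama's standard OpenAI-compatible endpoint, and the layout is illustrative):

```toml
# ~/.config/scripy/config.toml
# Any OpenAI-compatible server works; this is Ollama's default local endpoint.
base_url = "http://localhost:11434/v1"
```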

One command away

Just prompt from the command line. scripy figures out the rest.

bash
# Python
$ scripy -p "find duplicate files in a directory"

$ scripy -p "watch a directory for changes and notify me"

$ scripy -p "bulk resize images to 1080px" -o resize.py
# Bash / modify
$ scripy -p "backup home to /tmp" --lang bash

$ scripy -p "add a --dry-run flag" --input dedup.py

$ scripy -p "..." -y # skip all gates

Ready to try scripy?

Install from PyPI. Requires Python 3.11+ and Ollama or LM Studio.

$ pip install scripy-cli
GitHub · Get Ollama →