HYLE(1)                      User Commands                      HYLE(1)

NAME
       hyle - autonomous code assistant

SYNOPSIS
       hyle [--free] [--model ID] [--json] [--diff-only] [--task "..."] [PATHS...]

DESCRIPTION
       Read code. Write code. Execute tools. Text in, text out.

       Single binary. No runtime dependencies. Composes with
       existing tools: pipes, redirects, scripts.

PHILOSOPHY
       "Make each program do one thing well."

       hyle does one thing: translate natural language intent
       into file operations and shell commands.

       Output is text. Input is text. Context is JSON.
       Integrates with grep, sed, awk, git, make, and every
       other tool in your arsenal.

TOOLS
       read     Read file contents with line numbers
       write    Write content to file (atomic, with backup)
       patch    Apply unified diff
       grep     Search files with regex
       glob     Find files by pattern
       bash     Execute shell command
LIVE DEMO
       The real power of Unix: small tools that connect. hyle
       speaks the same language: stdin, stdout, exit codes,
       file paths. Chain it with anything.
$ find ~/code/lexer -name "*.rs" | hyle --task "add docstrings"
You wrote a lexer last weekend. Functions are named well but undocumented. Running rustdoc gives you empty pages. Instead of manually writing 40+ doc comments, pipe the file list and let hyle read each file, understand what each function does from context, and add accurate /// comments. Five minutes later, rustdoc looks professional.
$ find ~/code/lexer -name "*.rs" | wc -l
14

$ find ~/code/lexer -name "*.rs" | hyle --task "add /// docs to pub fns"
[reading src/lib.rs...]
[patching src/lib.rs: 8 docstrings]
[reading src/token.rs...]
[patching src/token.rs: 12 docstrings]
[reading src/scanner.rs...]
[patching src/scanner.rs: 15 docstrings]
...
14 files, 47 docstrings added
$ hyle --task "list security issues" --json | jq '.issues[]'
JSON output mode transforms hyle into a scriptable security scanner. Pipe to jq for filtering, to a dashboard for visualization, or into your CI pipeline to fail builds. The structured output means you can build workflows around it without parsing prose.
$ hyle --task "find unwrap() in error paths, SQL without params" \
    ~/code/inventory-api --json | jq -r '.issues[] | "\(.file):\(.line) \(.msg)"'
src/db/queries.rs:42 format! in SQL query - potential injection
src/handlers/orders.rs:78 unwrap() on db result
src/handlers/orders.rs:134 unwrap() on user input parse
# pipe to Slack, PagerDuty, or block deployment
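A minimal CI gate built on the pattern above. The JSON here simulates hyle's hypothetical --json schema; in a real pipeline you would substitute the actual command. It counts issues with grep so it works even where jq is not installed.

```shell
# Simulated hyle --json output (hypothetical schema, for illustration only).
report='{"issues":[{"file":"src/db/queries.rs","line":42,"msg":"format! in SQL"}]}'

# Count issue objects without jq: each issue object carries a "file" key.
count=$(printf '%s' "$report" | grep -o '"file"' | wc -l)
echo "issues found: $count"

# A nonzero count is what a CI step would treat as failure.
if [ "$count" -gt 0 ]; then
    echo "gate: FAIL"
else
    echo "gate: PASS"
fi
```

In CI you would replace the `report=` line with `report=$(hyle --task "..." --json)` and fail the job on a nonzero count.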
$ hyle --task "add request tracing" ~/code/axum-api | patch -p1
Production debugging requires traces. But adding tracing::info! to every handler is tedious. Generate the diff first, review exactly what will change, then apply with patch. Full control. If the diff looks wrong, throw it away. No files touched until you say so.
$ hyle --task "add tracing spans to all handlers" ~/code/axum-api --diff-only > tracing.patch

$ head -20 tracing.patch
--- a/src/handlers/users.rs
+++ b/src/handlers/users.rs
@@ -15,6 +15,7 @@
 pub async fn get_user(Path(id): Path<u64>) -> impl IntoResponse {
+    let span = tracing::info_span!("get_user", user_id = %id);
+    let _guard = span.enter();
     let user = db::users::find(id).await?;

$ patch -p1 < tracing.patch
patching file src/handlers/users.rs
patching file src/handlers/orders.rs
patching file src/handlers/products.rs
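The generate/review/apply loop above can be rehearsed without hyle at all. This sketch uses a toy file and a hand-written patch standing in for hyle's diff output; `--dry-run` is the GNU patch flag that verifies a patch applies cleanly while touching nothing.

```shell
# Toy project: one source file.
mkdir -p demo/src
printf 'fn main() {}\n' > demo/src/main.rs

# A stand-in for hyle's generated diff.
cat > demo/tracing.patch <<'EOF'
--- a/src/main.rs
+++ b/src/main.rs
@@ -1 +1,2 @@
+// traced
 fn main() {}
EOF

patch -d demo -p1 --dry-run < demo/tracing.patch   # verify only: no files change
patch -d demo -p1 < demo/tracing.patch             # apply for real
head -1 demo/src/main.rs                           # prints "// traced"
```

If the dry run reports a failed hunk, nothing has been modified and the patch can be discarded, which is the whole point of the diff-first workflow.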
$ git diff origin/main | hyle --task "write PR description"
You've been coding for three hours. The diff is 400 lines across 12 files. Writing a coherent PR description feels like work. Pipe the diff, get a structured summary that actually explains what changed and why. Your reviewers will thank you.
$ git diff origin/main | hyle --task "PR description with summary and test plan"

## Summary
This PR adds rate limiting to the public API endpoints to prevent abuse
and ensure fair usage across tenants.

### Changes
- Added `RateLimiter` middleware using token bucket algorithm
- Configured per-endpoint limits in `config/limits.toml`
- Added 429 responses with Retry-After headers
- Updated integration tests to verify rate limiting behavior

## Test Plan
- [ ] Verify rate limits trigger at configured thresholds
- [ ] Check Retry-After header accuracy
- [ ] Confirm limits reset after window expires
# paste directly into PR
$ journalctl -u trading-engine | tail -500 | hyle --task "diagnose"
3am. PagerDuty fires. Logs are a wall of text. Instead of grep-scrolling through stack traces, pipe the recent logs to hyle. It finds the pattern, correlates timestamps, identifies the root cause, and points you to the exact line of code. Incident resolved before your coffee gets cold.
$ journalctl -u trading-engine --since "10 minutes ago" | \
    hyle --task "find root cause of errors"

[analyzing 847 log lines...]
[found 23 ERROR entries, 156 WARN entries]
[correlating timestamps...]

Root cause: Connection pool exhaustion starting at 03:14:22

The orderbook matching engine at src/matching/engine.rs:342
acquires a DB connection for each fill but doesn't release it until
the entire batch completes. During high volume (>1000 orders/sec),
the pool (max 100) exhausts before batch completion.

Suggested fix: Release connection after each fill, or batch fills
into single transaction with connection acquired once.
$ hyle --task "port to async" src/sync_client.rs | tee src/async_client.rs
You have a working synchronous HTTP client. Now you need an async version for the new tokio-based service. Manual conversion means updating every function signature, replacing std::io with tokio::io, adding .await everywhere. Let hyle do the mechanical work while you review the output.
$ hyle --task "convert to async tokio, keep same API shape" \
    ~/code/http-client/src/sync_client.rs | tee src/async_client.rs

[analyzing blocking operations...]
[std::net::TcpStream -> tokio::net::TcpStream]
[std::io::Read -> tokio::io::AsyncReadExt]
[fn connect() -> async fn connect()]
[adding .await to 23 I/O operations]

wrote src/async_client.rs (312 lines)

$ diff src/sync_client.rs src/async_client.rs | head -15
< pub fn connect(addr: &str) -> io::Result<Self> {
---
> pub async fn connect(addr: &str) -> io::Result<Self> {
< let stream = TcpStream::connect(addr)?;
---
> let stream = TcpStream::connect(addr).await?;
$ cargo test 2>&1 | hyle --task "fix the failures"
Refactoring broke 7 tests. The failures cascade: fix one, another appears. Pipe the test output to hyle. It parses failures, reads the test source, understands assertions, and patches either the test (if the new behavior is correct) or the code (if it's a regression).
$ cargo test 2>&1 | hyle --task "fix failures, preserve test intent"
[parsing test output...]
[found 7 failures in tests/parser_tests.rs]

Failure 1: test_parse_expression
  Expected: Expr::Binary but got Expr::Unary
  Analysis: Parser precedence changed. Test expectation is stale.
  [patching tests/parser_tests.rs:45]

Failure 2: test_parse_function_call
  Expected: 3 args but got 2
  Analysis: Bug in parser - trailing comma handling wrong
  [patching src/parser.rs:178]

$ cargo test
running 42 tests... ok
$ find src -name "*.rs" | xargs -P4 -I{} hyle --task "add tests" {}
Legacy codebase. Zero test coverage. Management wants 80% by Q2. Batch process every file in parallel with xargs. Each hyle instance reads one file, generates tests for its public functions, writes to a corresponding test module. Coffee break while 200 files get tested.
$ find ~/code/legacy-api/src -name "*.rs" | wc -l
47

$ find ~/code/legacy-api/src -name "*.rs" | \
    xargs -P4 -I{} hyle --task "add #[test] fns for pub items" {}

[4 parallel workers]
[processing src/models/user.rs → tests/models/user_test.rs]
[processing src/models/order.rs → tests/models/order_test.rs]
[processing src/handlers/auth.rs → tests/handlers/auth_test.rs]
[processing src/handlers/api.rs → tests/handlers/api_test.rs]
...
47 files processed, 312 tests generated

$ cargo test 2>&1 | tail -1
test result: ok. 312 passed; 0 failed
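The fan-out pattern itself is worth isolating. This sketch swaps the hyle invocation for `wc -l` so it runs anywhere: one worker per file, four in flight at a time, output sorted because parallel completion order is nondeterministic. (The `{}` substitution into `sh -c` is fine for a demo, but avoid it with untrusted filenames.)

```shell
# Stand-in corpus: two small files.
mkdir -p fake/src
printf 'a\nb\n' > fake/src/one.rs
printf 'c\n'    > fake/src/two.rs

# xargs -P4 runs up to four workers; -I{} hands each worker one path.
# Replace 'wc -l' with the hyle command to get the demo above.
out=$(find fake/src -name '*.rs' | \
    xargs -P4 -I{} sh -c 'echo "{}: $(wc -l < "{}")"' | sort)
printf '%s\n' "$out"
```

Because each worker touches only its own file, the parallel runs cannot step on each other, which is what makes the batch-processing demo safe.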
$ hyle --task "deploy script" <<EOF | sh
Heredoc input, shell execution. Describe what you need in plain English, get executable shell commands. The | sh at the end means instant execution. Dangerous? Sure. Powerful? Absolutely. Use with --dry-run first if you're cautious. Or just trust the machine like a true Unix wizard.
$ hyle --task "create deployment script" <<'EOF' | tee deploy.sh | sh -x
Deploy the Rust service to prod:
- Build release binary
- Run migrations on DATABASE_URL
- Copy binary to /opt/services/
- Reload systemd unit
- Health check on port 8080
EOF

+ cargo build --release
   Compiling trading-engine v0.4.2
   Finished release [optimized] target(s) in 45.23s
+ sqlx migrate run
Applied 3 migrations
+ sudo cp target/release/trading-engine /opt/services/
+ sudo systemctl reload trading-engine
+ curl -sf localhost:8080/health
{"status":"healthy","version":"0.4.2"}
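For the cautious wizard, the one-liner decomposes into capture, check, run. Here `gen` is a stand-in for the hyle call; `sh -n` parses the generated script without executing a single command, so a garbled script never reaches your shell.

```shell
# Stand-in for: hyle --task "create deployment script" <<'EOF' ... EOF
gen() { printf '%s\n' 'echo "building"' 'echo "deploying"'; }

gen > deploy.sh          # capture instead of piping straight to sh
if sh -n deploy.sh; then # -n: syntax-check only, run nothing
    sh deploy.sh
fi
```

The captured deploy.sh also gives you an audit trail: you can diff it against the last generated version before letting it anywhere near prod.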
$ watch -n 60 'hyle --task "audit" src/ --json | jq .overall'
Continuous code quality monitoring. Every minute, hyle scans the codebase, computes a quality score, outputs JSON. Pipe to jq for the number, feed into prometheus, display on Grafana. Watch code health in real-time as your team commits. Alert when score drops.
$ hyle --task "score: 0-100 for safety, clarity, tests" \
    ~/code/payment-service/src --json
{
  "safety": 82,
  "clarity": 74,
  "test_coverage": 91,
  "overall": 82
}

$ watch -n 60 'hyle --task "score..." --json | jq ".overall"'

Every 60.0s: hyle --task "score..." --json | jq ".overall"

82
# hook into prometheus: echo "code_quality $score" | curl --data-binary @- http://pushgateway:9091/metrics/job/hyle
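The prometheus hook spelled out. This sketch shapes the hypothetical score JSON into a Prometheus exposition line; sed stands in for jq on minimal systems, and the pushgateway URL is illustrative.

```shell
# Simulated hyle --json output (hypothetical schema).
json='{"safety":82,"clarity":74,"test_coverage":91,"overall":82}'

# Extract the overall score without jq.
score=$(printf '%s' "$json" | sed -n 's/.*"overall":\([0-9]*\).*/\1/p')

# Prometheus exposition format: metric name, space, value.
printf 'code_quality %s\n' "$score"

# To push (URL illustrative):
#   printf 'code_quality %s\n' "$score" | \
#     curl --data-binary @- http://pushgateway:9091/metrics/job/hyle
```

Pushgateway requires the `/metrics/job/<name>` path segment; pointing curl at the bare port will not register the metric.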
$ git show HEAD:src/auth.rs | hyle --task "security review"
Code review on a single file from git history. Fetch the version at any commit, pipe it through security review. No checkout needed. Compare reviews across commits to see if a change introduced vulnerabilities. Integrate into CI: review every changed file before merge.
$ git show HEAD:src/auth/jwt.rs | hyle --task "security review, OWASP top 10"

Security Review: src/auth/jwt.rs

[PASS] Token signature verification using RS256
[PASS] Expiration claims checked before use
[WARN] Line 45: aud claim not validated - accepts any audience
[WARN] Line 78: Token stored in localStorage per code comment
       Recommendation: Use httpOnly cookie instead
[FAIL] Line 112: Secret key loaded from env without rotation support
       Risk: Key compromise requires code deploy to rotate

Score: 72/100
Critical: 0 | High: 1 | Medium: 2 | Low: 0
$ hyle --task "Makefile" src/ Cargo.toml | tee Makefile
Analyze the project structure and generate a build system. hyle reads Cargo.toml, understands the crate structure, figures out what commands make sense, and writes a proper Makefile. Works for justfiles, taskfiles, Makefiles, or any build system you prefer.
$ hyle --task "generate Makefile: build, test, lint, bench, doc, release" \
    ~/code/query-engine/src Cargo.toml | tee Makefile

.PHONY: all build test lint bench doc release clean

all: build test lint

build:
	cargo build

test:
	cargo test --all-features
	cargo test --doc

lint:
	cargo clippy -- -D warnings
	cargo fmt --check

bench:
	cargo bench --bench query_bench

doc:
	cargo doc --no-deps --open

release:
	cargo build --release
	strip target/release/query-engine

clean:
	cargo clean
# or: --task "generate justfile" for just
INSTALL
       $ cargo install --git https://github.com/uprootiny/hyle
       $ hyle config set key YOUR_KEY

FILES
       ~/.config/hyle/config.json    Configuration
       ~/.cache/hyle/models.json     Model cache
       ~/.local/state/hyle/sessions/ Session persistence

SEE ALSO
       github.com/uprootiny/hyle

LICENSE
       MIT. Contributions welcome.

hyle 0.3.3                     2026-01                         HYLE(1)