hyle

paste sketch. get project.

LIVE BUILD
35+
Free Models
0
Setup Time
~10MB
Binary Size
Get Started

Ship in minutes, not days

Real projects built with hyle. Click to see prompts.

Stripe Webhook Handler ~4 min

Payment webhooks are tedious. Signature verification, event type switching, idempotency handling. Instead of copying from Stripe docs and adapting, describe what events you care about and let hyle generate the handler with proper error handling and logging.

// Generated: src/webhooks/stripe.rs
pub async fn handle_webhook(
    State(state): State<AppState>,
    headers: HeaderMap,
    body: Bytes,
) -> Result<StatusCode, ApiError> {
    let sig = headers
        .get("stripe-signature")
        .ok_or(ApiError::MissingSignature)?;

    let event = Webhook::construct_event(
        &String::from_utf8_lossy(&body),
        sig.to_str()?,
        &state.stripe_webhook_secret,
    )?;

    match event.type_ {
        EventType::CheckoutSessionCompleted => {
            let session = event.data.object.into_checkout_session()?;
            state.orders.fulfill(session.id).await?;
        }
        EventType::InvoicePaid => { /* ... */ }
        _ => {}
    }

    Ok(StatusCode::OK)
}
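The idempotency handling mentioned above is elided in the snippet. The core idea: Stripe retries webhook deliveries, so the same event ID can arrive more than once, and side effects must run at most once. A minimal sketch, assuming a hypothetical `ProcessedEvents` store (a real handler would back this with a UNIQUE-constrained database column, not in-memory state):

```rust
use std::collections::HashSet;

/// Hypothetical in-memory dedup store; illustrative only. Production code
/// would persist seen event IDs in the database.
struct ProcessedEvents {
    seen: HashSet<String>,
}

impl ProcessedEvents {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Returns true on the first delivery of an event ID and marks it seen.
    /// `HashSet::insert` returns false if the ID was already present.
    fn first_delivery(&mut self, event_id: &str) -> bool {
        self.seen.insert(event_id.to_string())
    }
}

fn main() {
    let mut store = ProcessedEvents::new();
    assert!(store.first_delivery("evt_123"));  // first delivery: run side effects
    assert!(!store.first_delivery("evt_123")); // retry: skip, but still return 200
}
```

Returning 200 for a duplicate matters: a non-2xx response would make Stripe keep retrying.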
Auth with JWT + Refresh ~6 min

Every SaaS needs auth. JWT access tokens, refresh tokens, secure cookie handling, logout that actually works. Getting it wrong means security bugs. Describe your requirements once, get a complete auth system with proper token rotation and blacklisting.

// Generated: src/auth/mod.rs
pub async fn login(
    State(state): State<AppState>,
    Json(creds): Json<LoginRequest>,
) -> Result<impl IntoResponse, AuthError> {
    let user = state.users.verify(&creds.email, &creds.password).await?;

    let access_token = create_jwt(&user, Duration::minutes(15))?;
    let refresh_token = create_refresh_token(&user)?;
    state.refresh_tokens.store(&refresh_token, user.id).await?;

    Ok((
        StatusCode::OK,
        set_refresh_cookie(&refresh_token),
        Json(AuthResponse { access_token, user: user.into() }),
    ))
}
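The token rotation promised above means each refresh consumes the old refresh token and issues a new one, so a stolen token works at most once. A minimal sketch of that invariant, with illustrative names (`RefreshStore`, `rotate`) and an in-memory map standing in for the database:

```rust
use std::collections::HashMap;

/// Illustrative in-memory refresh-token store; production code would
/// persist tokens (hashed) and use cryptographically random values.
struct RefreshStore {
    tokens: HashMap<String, u64>, // token -> user id
    counter: u64,
}

impl RefreshStore {
    fn new() -> Self {
        Self { tokens: HashMap::new(), counter: 0 }
    }

    fn issue(&mut self, user_id: u64) -> String {
        self.counter += 1;
        let token = format!("rt_{}_{}", user_id, self.counter);
        self.tokens.insert(token.clone(), user_id);
        token
    }

    /// Rotation: consume the old token and issue a fresh one.
    /// A replayed (already-consumed) token is rejected.
    fn rotate(&mut self, old: &str) -> Option<String> {
        let user_id = self.tokens.remove(old)?;
        Some(self.issue(user_id))
    }
}

fn main() {
    let mut store = RefreshStore::new();
    let t1 = store.issue(42);
    let t2 = store.rotate(&t1).expect("first use succeeds");
    assert!(store.rotate(&t1).is_none()); // replay of t1: rejected
    assert!(store.rotate(&t2).is_some()); // current token still works
}
```

A rejected replay is also a signal worth acting on: it usually means the token family leaked, and revoking the whole family is the safe response.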
Real-time WebSocket Chat ~8 min

Building real-time features from scratch takes days. Connection management, room subscriptions, message broadcasting, reconnection handling. Describe the chat structure you need, get a production-ready WebSocket server with proper state management and graceful shutdown.

// Generated: src/ws/chat.rs
pub async fn handle_socket(
    ws: WebSocket,
    state: Arc<ChatState>,
    user_id: Uuid,
) {
    let (mut sender, mut receiver) = ws.split();
    let (tx, mut rx) = mpsc::channel(32);
    state.connections.insert(user_id, tx);

    let send_task = tokio::spawn(async move {
        while let Some(msg) = rx.recv().await {
            if sender.send(Message::Text(msg)).await.is_err() {
                break;
            }
        }
    });

    while let Some(Ok(msg)) = receiver.next().await {
        if let Message::Text(text) = msg {
            // The handler returns (), so `?` is unavailable here: skip
            // malformed messages instead of killing the connection.
            if let Ok(cmd) = serde_json::from_str::<ChatCommand>(&text) {
                state.handle_command(user_id, cmd).await;
            }
        }
    }

    state.connections.remove(&user_id);
    send_task.abort();
}
Background Job Queue ~5 min

Email sending, PDF generation, data exports. You need async processing but Sidekiq means Redis means ops burden. Describe your job types, get a Postgres-backed job queue with retries, timeouts, and dead letter handling. No new infrastructure.

// Generated: src/jobs/queue.rs
#[derive(Serialize, Deserialize)]
pub enum Job {
    SendEmail { to: String, template: String, data: Value },
    GeneratePdf { doc_id: Uuid },
    ExportData { user_id: Uuid, format: ExportFormat },
}

impl JobQueue {
    pub async fn enqueue(&self, job: Job) -> Result<Uuid> {
        let id = Uuid::new_v4();
        sqlx::query!(
            "INSERT INTO jobs (id, payload, status, run_at) VALUES ($1, $2, 'pending', now())",
            id,
            serde_json::to_value(&job)?
        )
        .execute(&self.pool)
        .await?;
        Ok(id)
    }

    pub async fn process_next(&self) -> Result<bool> {
        let job = sqlx::query_as!(/* ... */).fetch_optional(&self.pool).await?;
        // ... retry logic, timeout handling
    }
}
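The retry and dead-letter logic elided in `process_next` usually boils down to one decision per failed attempt: reschedule with exponential backoff, or give up after a maximum attempt count and mark the row dead-lettered. A sketch of that decision, with illustrative names (`Retry`, `next_step` are not hyle's output):

```rust
use std::time::Duration;

/// What to do with a job after a failed attempt (illustrative).
#[derive(Debug, PartialEq)]
enum Retry {
    /// Re-run after this delay (exponential backoff).
    After(Duration),
    /// Attempts exhausted: move the row to a dead-letter status.
    DeadLetter,
}

/// `attempts` = failures so far; `max_attempts` = give-up threshold.
fn next_step(attempts: u32, max_attempts: u32) -> Retry {
    if attempts >= max_attempts {
        Retry::DeadLetter
    } else {
        // 2^attempts seconds: 1s, 2s, 4s, 8s, ...
        Retry::After(Duration::from_secs(1u64 << attempts))
    }
}

fn main() {
    assert_eq!(next_step(0, 5), Retry::After(Duration::from_secs(1)));
    assert_eq!(next_step(3, 5), Retry::After(Duration::from_secs(8)));
    assert_eq!(next_step(5, 5), Retry::DeadLetter);
}
```

In the Postgres-backed design, "re-run after delay" is just `UPDATE jobs SET run_at = now() + delay, attempts = attempts + 1`, which is what makes the queue work with no new infrastructure.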
Rate Limiter Middleware ~3 min

API abuse protection is non-negotiable. Token bucket, sliding window, per-user limits. Usually you grab a crate and hope it works. Instead, describe your rate limiting strategy, get middleware that integrates with your existing auth and returns proper 429s with Retry-After.

// Generated: src/middleware/rate_limit.rs
pub struct RateLimiter {
    limits: DashMap<String, TokenBucket>,
    config: RateLimitConfig,
}

impl RateLimiter {
    pub fn check(&self, key: &str) -> Result<(), RateLimitError> {
        let mut bucket = self.limits
            .entry(key.to_string())
            .or_insert_with(|| TokenBucket::new(self.config.clone()));

        if bucket.try_consume() {
            Ok(())
        } else {
            Err(RateLimitError {
                retry_after: bucket.time_until_refill(),
            })
        }
    }
}
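The `TokenBucket` behind `try_consume` refills at a fixed rate and rejects requests when empty. A self-contained sketch of the refill math, with time passed in explicitly so it is easy to follow (the real middleware would use `std::time::Instant` instead; field and method names here are illustrative):

```rust
/// Minimal token bucket with an explicit clock in seconds (illustrative).
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: f64, // seconds; production code would track an Instant
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last_refill: 0.0 }
    }

    fn try_consume(&mut self, now: f64) -> bool {
        // Refill proportionally to elapsed time, capped at capacity.
        let elapsed = now - self.last_refill;
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;

        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Capacity 2, refilling 1 token per second.
    let mut bucket = TokenBucket::new(2.0, 1.0);
    assert!(bucket.try_consume(0.0));
    assert!(bucket.try_consume(0.0));
    assert!(!bucket.try_consume(0.0)); // empty: this is the 429 path
    assert!(bucket.try_consume(1.0));  // one second later, one token back
}
```

The cap at `capacity` is what allows short bursts while bounding sustained throughput; the time until the next whole token is what goes into the Retry-After header.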
Full CRUD API + Tests ~7 min

New resource? That's migrations, models, handlers, validation, tests. Boilerplate that takes an hour if you're careful. Describe your entity schema, get everything scaffolded correctly. Relations, pagination, filtering. Even the integration tests.

// Generated: src/handlers/products.rs
pub async fn list_products(
    State(state): State<AppState>,
    Query(params): Query<ListParams>,
) -> Result<Json<PaginatedResponse<Product>>, ApiError> {
    let products = state.products
        .list(params.page, params.per_page, params.filter.as_deref())
        .await?;

    Ok(Json(PaginatedResponse {
        items: products,
        page: params.page,
        total_pages: /* ... */,
    }))
}

// Generated: tests/products_test.rs
#[tokio::test]
async fn test_create_product() {
    let app = test_app().await;
    let res = app.post("/api/products")
        .json(&json!({"name": "Widget", "price": 999}))
        .send().await;
    assert_eq!(res.status(), 201);
}
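The elided `total_pages` value is typically just a ceiling division of the total row count by the page size. A one-function sketch (the function name is illustrative):

```rust
/// Ceiling division: how many pages of `per_page` items cover `total` rows.
fn total_pages(total: u64, per_page: u64) -> u64 {
    assert!(per_page > 0, "per_page must be positive");
    total.div_ceil(per_page) // stable for unsigned ints since Rust 1.73
}

fn main() {
    assert_eq!(total_pages(0, 20), 0);  // empty result set: zero pages
    assert_eq!(total_pages(20, 20), 1); // exact fit
    assert_eq!(total_pages(21, 20), 2); // one overflow row: extra page
}
```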

Why teams ship faster with hyle

Agentic Execution

Reads your codebase. Writes files. Runs commands. Iterates until done. You describe the feature, it builds it in context.

Zero Cost Start

35+ free models via OpenRouter. DeepSeek, Qwen, Mistral. Start building now, upgrade when you scale. No credit card required.

No JS Runtime

Single Rust binary. No node_modules. No npm. No npx. Compiles to ~10MB. Deploys anywhere. Starts instantly.

Session Persistence

Pick up where you left off. Context preserved across sessions. Decisions remembered. No re-explaining your codebase.

Guardrails Built In

Blocks rm -rf. Atomic writes with backups. Confirms destructive ops. Safe enough to run unsupervised. Almost.

Slash Commands

/build /test /commit /deploy - local execution, no LLM latency. 20+ commands included. Extensible with your own.
