Live · 364 Tests · 20Hz TUI Refresh · 35+ Models · 00:00:00 Uptime
Token Usage (last 24h): healthy
  Prompt tokens: 124,892
  Completion tokens: 48,231
  Context efficiency: 78%
  Cost (today): $0.42

Latency (p99 <2s)
  TTFT p50: 340ms
  TTFT p99: 1.2s
  Throughput: 42 tok/sec
  Total (p50): 2.8s
  API calls (24h): 847
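The p50/p99 figures above can be derived from raw latency samples. A minimal sketch, assuming nearest-rank percentiles over millisecond samples (the dashboard's actual aggregation method is not shown):

```rust
// Nearest-rank percentile over raw samples (assumption: this is one
// reasonable way to produce figures like "TTFT p50: 340ms"; the
// dashboard's real method is not documented here).
fn percentile(samples: &mut [u64], p: f64) -> u64 {
    assert!(!samples.is_empty());
    samples.sort_unstable();
    // Nearest-rank definition: the value at index ceil(p/100 * n) - 1.
    let n = samples.len();
    let rank = ((p / 100.0) * n as f64).ceil() as usize;
    samples[rank.saturating_sub(1).min(n - 1)]
}

fn main() {
    // Hypothetical TTFT samples in milliseconds.
    let mut ttft: Vec<u64> = (1..=100).map(|i| i * 10).collect();
    println!("p50 = {}ms", percentile(&mut ttft, 50.0)); // prints "p50 = 500ms"
    println!("p99 = {}ms", percentile(&mut ttft, 99.0)); // prints "p99 = 990ms"
}
```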
Active Model
  claude-3.5-sonnet: $3/M in
  gpt-4o: $5/M in
  llama-3.1-70b: free
  deepseek-coder: $0.14/M in
Event Log (streaming)
14:23:01 TOOL read src/agent.rs (2,847 bytes)
14:23:02 API stream_completion started, model=claude-3.5-sonnet
14:23:04 INFO context compressed: 45,231 -> 12,847 tokens (72% reduction)
14:23:05 TOOL write src/agent.rs (atomic, backup created)
14:23:05 TOOL bash: cargo test --lib agent
14:23:12 INFO tests passed: 47/47 in 6.8s
14:23:13 API stream complete: 1,847 tokens, TTFT=340ms
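The "atomic, backup created" write in the log above can be sketched with the standard write-to-temp-then-rename pattern. Assumptions: ".bak"/".tmp" suffixes and a same-directory temp file; the real tool's naming scheme is not shown.

```rust
use std::ffi::OsString;
use std::fs;
use std::io::Write;
use std::path::{Path, PathBuf};

// Hedged sketch of an atomic file write with backup, as logged by the
// write tool. Suffix names are assumptions for illustration.
fn atomic_write_with_backup(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let with_suffix = |suffix: &str| -> PathBuf {
        let mut s = OsString::from(path.as_os_str());
        s.push(suffix);
        PathBuf::from(s)
    };
    // Keep a copy of the previous version, if one exists.
    if path.exists() {
        fs::copy(path, with_suffix(".bak"))?;
    }
    // Write the new contents to a temp file in the same directory, so the
    // final rename stays on one filesystem and is therefore atomic.
    let tmp = with_suffix(".tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(contents)?;
        f.sync_all()?; // flush to disk before the rename
    }
    fs::rename(&tmp, path) // atomically replaces the old file on POSIX
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("agent_demo.rs");
    atomic_write_with_backup(&path, b"fn main() {}")?;
    atomic_write_with_backup(&path, b"fn main() { /* v2 */ }")?;
    Ok(())
}
```

A crash between the temp-file write and the rename leaves the original file untouched, which is the point of the pattern.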
Sessions
abc123 now
def456 2h ago
ghi789 yesterday
jkl012 2d ago
Thread Architecture
TUI Event Loop (50ms, ~20Hz)
├── telemetry_rx.try_recv() ← OS thread
├── process_bg_responses()  ← BgWorker
├── rx.try_recv()           ← API stream
├── event::poll()           ← Keyboard
└── render_tui()            ← Pure CPU
        

Non-blocking I/O throughout; all file operations go through spawn_blocking.
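The 50ms loop diagrammed above can be sketched with non-blocking channel drains. A minimal version, assuming plain std mpsc channels and a telemetry-only loop (the real loop also drains BgWorker responses, polls the keyboard via event::poll, and renders, all omitted here):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Sketch of one arm of the TUI event loop: drain telemetry without
// blocking, then sleep out the remainder of the 50ms tick (~20Hz).
fn run_loop(telemetry_rx: mpsc::Receiver<String>, ticks: u32) -> Vec<String> {
    let mut log = Vec::new();
    for _ in 0..ticks {
        let tick_start = Instant::now();
        // try_recv never blocks: take whatever is queued and move on.
        while let Ok(msg) = telemetry_rx.try_recv() {
            log.push(msg);
        }
        // render_tui() would run here (pure CPU, no I/O).
        // Sleep away the rest of the 50ms budget to hold ~20Hz.
        let elapsed = tick_start.elapsed();
        if elapsed < Duration::from_millis(50) {
            thread::sleep(Duration::from_millis(50) - elapsed);
        }
    }
    log
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // A hypothetical OS-thread telemetry producer.
    thread::spawn(move || {
        tx.send("cpu: 12%".into()).unwrap();
        tx.send("mem: 480MB".into()).unwrap();
    });
    let events = run_loop(rx, 3);
    println!("drained {} telemetry events", events.len());
}
```

Because every receive is a try_recv and rendering is pure CPU, a stalled producer can never freeze the UI; the loop simply renders with whatever has arrived so far.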

Live Terminal
> add error handling to parse_config
[read] src/config.rs
[patch] src/config.rs +15 lines
[bash] cargo test config
test config::tests::parse_valid ... ok
test config::tests::parse_missing ... ok
test config::tests::parse_invalid ... ok