logpare
Semantic log compression for LLM context windows
$ npx logpare ./app.log
60-90% Token Reduction
Stop wasting tokens on repetitive log patterns
Before
INFO Connection from 192.168.1.1 established
INFO Connection from 192.168.1.2 established
INFO Connection from 10.0.0.55 established
INFO Connection from 172.16.0.12 established
... (10,844 more similar lines)
After
=== Log Compression Summary ===
Input: 10,847 lines → 23 templates
Top templates by frequency:
[4,521x] INFO Connection from <*> established
[3,892x] DEBUG Request <*> processed in <*>
[1,203x] WARN Retry attempt <*> for <*>
Why I built this
As I began building with AI coding assistants, I hit a wall: context windows are expensive real estate.
Most developers just truncate logs or grep for errors. Both felt imprecise: I wanted a way to keep the structure of the data without the repetition.
logpare is my attempt to solve the signal-versus-noise problem: it uses the Drain algorithm to treat logs as a language rather than just text, extracting the patterns that matter and discarding the redundancy.
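To make that concrete, here is a rough TypeScript sketch of the idea behind Drain-style templating. It is an illustration, not logpare's actual implementation, and the similarity threshold and merge rule are simplifications: lines are bucketed by token count, matched against known templates, and token positions that vary get replaced with the <*> wildcard.

type Template = { tokens: string[]; count: number };

// Fraction of token positions that agree (wildcards count as a match).
function similarity(template: string[], tokens: string[]): number {
  let same = 0;
  for (let i = 0; i < template.length; i++) {
    if (template[i] === tokens[i] || template[i] === '<*>') same++;
  }
  return same / template.length;
}

function extractTemplates(lines: string[], threshold = 0.6): Template[] {
  const byLength = new Map<number, Template[]>();
  for (const line of lines) {
    const tokens = line.trim().split(/\s+/);
    const bucket = byLength.get(tokens.length) ?? [];
    byLength.set(tokens.length, bucket);
    const match = bucket.find((t) => similarity(t.tokens, tokens) >= threshold);
    if (match) {
      // Merge: keep agreeing tokens, wildcard the positions that differ.
      match.tokens = match.tokens.map((tok, i) => (tok === tokens[i] ? tok : '<*>'));
      match.count++;
    } else {
      bucket.push({ tokens: [...tokens], count: 1 });
    }
  }
  return [...byLength.values()].flat().sort((a, b) => b.count - a.count);
}

Run on the "Before" sample above, the connection lines collapse into a single INFO Connection from <*> established template with a count, which is the shape of line the summary prints. The real Drain algorithm adds a fixed-depth prefix tree in front of this matching step so template lookup stays fast on large logs.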
Features
High Compression
60-90% token reduction using the Drain algorithm to identify log templates.
Preserves Context
Extracts URLs, status codes, correlation IDs, and timing data automatically (see the sketch below the feature list).
Fast Processing
10,000+ lines/second with V8-optimized, monomorphic classes.
Multiple Formats
Summary, detailed, and JSON output formats for different use cases.
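The parameter handling can also be sketched in a few lines. This is a hypothetical illustration, not logpare's actual rules: once a token has been wildcarded, the captured value can be classified with simple patterns so the summary can still report which URLs, status codes, durations, and correlation IDs showed up.

// Illustrative parameter classification for values captured in <*> slots.
const CLASSIFIERS: Array<[string, RegExp]> = [
  ['url', /^https?:\/\/\S+$/],
  ['status_code', /^[1-5]\d{2}$/],
  ['duration', /^\d+(\.\d+)?(ms|s)$/],
  ['correlation_id', /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i],
];

function classifyParam(value: string): string {
  for (const [label, pattern] of CLASSIFIERS) {
    if (pattern.test(value)) return label;
  }
  return 'other';
}

// classifyParam('503')   -> 'status_code'
// classifyParam('120ms') -> 'duration'
// classifyParam('https://api.example.com/v1/users') -> 'url'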
Quick Start
CLI
npx logpare ./logs/*.log
Library
import { readFileSync } from 'node:fs';
import { compress } from 'logpare';

// compress() is assumed to take an array of raw log lines
const lines = readFileSync('./app.log', 'utf8').split('\n');
const result = compress(lines);
console.log(result.formatted);
Pipe
cat app.log | npx logpare
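For a script that does the same thing programmatically, here is a minimal sketch, assuming compress() and result.formatted behave as in the Library example above:

// Read stdin, compress, print the summary.
import { compress } from 'logpare';

let input = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', (chunk) => (input += chunk));
process.stdin.on('end', () => {
  const lines = input.split('\n').filter((l) => l.length > 0);
  console.log(compress(lines).formatted);
});

Saved as, say, compress-logs.mjs (a hypothetical file name), it runs as cat app.log | node compress-logs.mjs.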