This tool compares different compression formats optimized for Large Language Model (LLM) context windows.
By reducing token count while preserving information, these formats allow you to fit more data into LLM prompts,
reducing costs and improving response quality.
MOTH (Machine Optimized Text Hierarchy)
Type: LLM-based compression
Best for: Technical specifications, API definitions, database schemas, system architecture
Compression: 70-90% reduction vs. verbose formats
Philosophy: Blueprint, not specification - answers "what" and "why", not exhaustive "how"
Accuracy: Same tokenization as OpenAI's official models
Method: BPE (Byte Pair Encoding) algorithm
The token counts you see are exactly what these models would use, not approximations.
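As a rough illustration of what "real BPE tokenization" means (this is not necessarily the tool's own implementation), OpenAI's tiktoken library exposes the same encodings used by GPT models; the specific encoding name below is an assumption:

```python
# Count tokens with an OpenAI BPE encoding (illustrative sketch only;
# "cl100k_base" is an assumption, not necessarily what this tool uses).
import tiktoken

text = "GET /users/{id} -> 200: {id, name, email}"

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode(text)

print(f"{len(tokens)} tokens")         # an exact count, not an approximation
print(enc.decode(tokens) == text)      # BPE round-trips losslessly -> True
```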
How to Use This Tool
Select an example from the dropdown menu
Compare the 4 formats side-by-side: Original → MOTH → TOON → kablUI
Check token counts and compression percentages below each panel (using real GPT tokenization; a sketch of this calculation appears after this list)
Click "Tokens" in the header to visualize token boundaries with colored borders
Click "Wrap" in the header to toggle word wrapping for all panels
Use the copy button on each panel to copy that format to your clipboard
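The compression percentage under each panel is presumably the relative reduction in token count between the original and the compressed text. A minimal sketch of that calculation, again using tiktoken as a stand-in for whatever tokenizer the tool actually ships with:

```python
# Compression percentage = relative reduction in token count.
# Sketch under assumptions; the tool's own implementation may differ.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def token_count(text: str) -> int:
    return len(enc.encode(text))

def compression_pct(original: str, compressed: str) -> float:
    return (1 - token_count(compressed) / token_count(original)) * 100

original_spec = "..."   # paste the Original panel here
moth_spec = "..."       # paste the MOTH panel here
print(f"{compression_pct(original_spec, moth_spec):.1f}% fewer tokens")
```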
Try It With Your LLM
Want to evaluate which compression format works best for your use case? Copy any compressed example, paste it into your favorite LLM (ChatGPT, Claude, Gemini, etc.), and try:
Ask questions: "What does this specification define?" or "Explain this UI structure"
Generate code: "Create a React component from this kablUI definition" or "Generate SQL from this MOTH schema"
Build prototypes: "Create a working prototype based on this compressed spec"
Extend it: "Add authentication to this API definition" or "Add a dark mode toggle to this UI"
Compare outputs: Try the same prompt with MOTH vs kablUI vs TOON to see which format the LLM understands best (a scripted version of this comparison is sketched below)
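If you'd rather script these experiments than paste by hand, here is a minimal sketch using the OpenAI Python client. The model name, question, and format strings are placeholders, and any chat-capable LLM API works the same way:

```python
# Send the same question with each compressed format and compare the answers.
# Sketch only: model name, question, and the pasted specs are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

formats = {
    "MOTH": "...paste the MOTH panel here...",
    "TOON": "...paste the TOON panel here...",
    "kablUI": "...paste the kablUI panel here...",
}
question = "What does this specification define?"

for name, spec in formats.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; use whichever model you prefer
        messages=[
            {"role": "system", "content": f"Here is a compressed spec:\n{spec}"},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```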
💡 The compressed formats use far fewer tokens than the original, leaving more room in your context window for the LLM's response!
All formats are open source and free to use. Visit the project pages for documentation and implementation details.