Summarise YouTube videos from the command line using a local LLM via Ollama.
- Fetches transcripts with `ytx-cli` (no API key required).
- Runs summarisation against any Ollama model (Gemma 4, Qwen 3.6, Llama, etc.).
- Automatic map/reduce chunking for videos that exceed the model's context window.
- Split map + reduce models — use a tiny fast model per chunk, a bigger one for final synthesis.
- Throttled parallel map and intermediate-reduce calls.
- Hierarchical reduce — iteratively collapses map outputs that don't fit in one reduce call, up to 4 levels deep.
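The map/reduce chunking and hierarchical reduce described above can be sketched as follows. This is an illustrative sketch, not the tool's actual implementation: the `summarise` stub stands in for a real Ollama call (e.g. via the `ollama` Python package), and the character-based chunker, the `context_limit` parameter, and the 4-level cap are assumptions for illustration.

```python
def chunk(text: str, limit: int) -> list[str]:
    """Split text into word-boundary chunks of at most `limit` characters."""
    words, chunks, current = text.split(), [], ""
    for w in words:
        candidate = (current + " " + w).strip()
        if len(candidate) > limit and current:
            chunks.append(current)
            current = w
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def summarise(text: str, limit: int) -> str:
    """Stand-in for an LLM call: truncates. Replace with a real model call;
    a split map/reduce setup would use a small model here for the map step
    and a larger one for the final synthesis."""
    return text[: limit // 2]

def map_reduce_summary(transcript: str, context_limit: int,
                       max_levels: int = 4) -> str:
    # Map: summarise each transcript chunk independently.
    parts = [summarise(c, context_limit) for c in chunk(transcript, context_limit)]
    # Hierarchical reduce: keep collapsing partial summaries until they
    # fit in a single reduce call, up to `max_levels` levels deep.
    for _ in range(max_levels):
        joined = "\n".join(parts)
        if len(joined) <= context_limit:
            return summarise(joined, context_limit)
        parts = [summarise(c, context_limit) for c in chunk(joined, context_limit)]
    raise RuntimeError("summaries did not collapse within the reduce-level cap")
```

With a real model, `summarise` would compress meaning rather than truncate, but the control flow (map, then iterative reduce) is the same.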
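The throttled parallel map calls might look like the following minimal asyncio sketch: a semaphore caps how many model requests are in flight at once. The `call_model` stub and the concurrency limit of 3 are assumptions; a real implementation would await an async Ollama client inside the semaphore.

```python
import asyncio

async def call_model(chunk: str) -> str:
    """Stand-in for an async LLM request."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"summary of {len(chunk)} chars"

async def throttled_map(chunks: list[str], limit: int = 3) -> list[str]:
    sem = asyncio.Semaphore(limit)  # at most `limit` concurrent calls
    async def one(c: str) -> str:
        async with sem:
            return await call_model(c)
    # gather preserves input order even though calls finish out of order
    return await asyncio.gather(*(one(c) for c in chunks))
```

The same pattern throttles the intermediate-reduce calls: each reduce level is just another `throttled_map` over the partial summaries.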