Just type `wut` and an LLM will explain whatever's in your terminal. You'll be surprised how useful this is. I use it mainly to understand and debug stack traces, but there are a bunch of other use cases:
- Deciphering error codes
- Fixing incorrect commands
- Summarizing logs
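Here's a sketch of the workflow. The failing command and its traceback are illustrative; `wut` is the only real command in this transcript, and its exact output will vary by model:

```
$ python app.py
Traceback (most recent call last):
  File "app.py", line 3, in <module>
    print(data["user"])
KeyError: 'user'

$ wut
# wut reads the preceding terminal output and explains it in plain
# English -- e.g. that `data` has no "user" key -- and suggests a fix
```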
You can connect `wut` to OpenAI, Anthropic, or a local model via Ollama.
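A minimal sketch of provider setup, assuming credentials are picked up from the standard environment variables each provider's SDK uses (check the README for the exact variables `wut` reads; the model name below is illustrative):

```shell
# Use OpenAI
export OPENAI_API_KEY="sk-..."

# ...or Anthropic
export ANTHROPIC_API_KEY="..."

# ...or a local model served by Ollama (model name is an example)
export OLLAMA_MODEL="llama3"
```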
I hope y’all find this as helpful as I do!