Look at the savebrace screenshot here
https://github.com/kristopolous/Streamdown?tab=readme-ov-fil...
There's a markdown renderer that can extract code samples, a code-sample viewer, and a tool that does the tmux handling. It all builds on things like fzf and simple tools like simonw's llm, and since everything is plain I/O, every piece is swappable.
It sits adjacent and you can go back and forth, using the chat when you need to but not doing everything through it.
You can also make it go away and then when it comes back it's the same context so you're not starting over.
Since I offload the actual llm loop, you can use whatever you want. The hooks are at the interface and parsing level.
When rendering the markdown, streamdown saves the code blocks as null-delimited chunks in the configurable /tmp/sd/savebrace. This lets xargs, fzf, and the rest of the unix toolbox manipulate them in sophisticated chains.
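To make that concrete, here's a minimal sketch of consuming null-delimited blocks with stock GNU tools. It uses a stand-in file (`/tmp/sd-demo`) rather than the real `/tmp/sd/savebrace` so the snippet is self-contained:

```shell
# Simulate a savebrace file holding two NUL-delimited code blocks.
printf 'echo one\0echo two\0' > /tmp/sd-demo

# Pull out the most recent block (tail -z treats NUL as the record
# terminator, GNU coreutils) and run it.
tail -z -n 1 /tmp/sd-demo | tr -d '\0' | bash

# Or browse blocks interactively: fzf --read0 consumes NUL-delimited
# records, so multi-line blocks survive intact, e.g.
#   fzf --read0 < /tmp/sd/savebrace | xclip -selection clipboard
```

Because the delimiter is NUL rather than newline, multi-line code blocks stay intact through the whole pipeline.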
Again, it's not a package, it's an open architecture.
I know I don't have a slick pitch site but it's intentionally dispersive like Unix is supposed to be.
It's ready to go, just ask me. Everyone I've shown in person has followed up with things like "This has changed my life".
I'm trying to make llm workflow components. The WIMP of the LLM era. Things that are flexible, primitive in a good way, and also very easy to use.
Bug reports, contributions, and even opinionated designers are highly encouraged!
streamdown: a markdown renderer for the terminal, intended for consuming LLM output. It has affordances that make it easier to run the code snippets: no indentation, easy copying to the clipboard, fzf access to previous items.
llmehelp: tools to slurp the current tmux text content (i.e. recent command output) as well as slurp the current zsh prompt (i.e. the command you're currently writing).
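For context, the tmux half of that slurping can be approximated with stock tmux commands. A rough sketch (the session name `sd_demo` is just for this demo, not part of the tool):

```shell
# Start a throwaway detached session and type a command into it.
tmux new-session -d -s sd_demo
tmux send-keys -t sd_demo 'echo hello-from-pane' Enter
sleep 0.5

# capture-pane -p prints the pane's current text content to stdout;
# adding e.g. -S -50 would also include 50 lines of scrollback.
tmux capture-pane -t sd_demo -p

tmux kill-session -t sd_demo
```

That captured text is exactly the "recent command output" context the tool can hand to the LLM.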
I think the idea is then you bounce between the LLM helping you and just having a normal shell/editor tmux session. The LLM has relevant context to your work without having to explicitly give it anything.
You can do ./tool "bash" and then open up nvim, emacs, do whatever, while the tool sits there passing things back and forth cleanly. Full modern terminal support.
Now here's the thing. You get context. Lots of it. Here's what it can do:
psql# <ctrl-x> I need to join the users and accounts table <enter>

When you press ctrl-x, the tool sees it, grabs the previous N bytes of I/O, and reverse-videos the prompt to show you're in the mode. It knows from the PPID chain that you're in postgresql, it knows the output of your previous commands, and it sends all of that to an llm, which processes it and gives you:

psql# I need to join the users and accounts table
[ select * from users as u ... (Y/n) ]

Here's the nice thing: you're STILL IN THE MODE, and now you have more context. You can get out at any time with another ctrl-x toggle. This way it follows you throughout your session and you can selectively beckon the LLM at your leisure, typing in English where you need to.
SSH into a host and you're still in it. Stuck in a weird emacs mode? Just press the hotkey and the I/O gets redirected as you ask the LLM to get you out.
But more importantly, this is generic. It's a tool that lets you intercept a terminal session's context window, inject middleware generically, and tie it to hotkeys.
As a result it works with any shell, inside of tmux, outside, in the vscode terminal, wherever you want... and you can make as many tools for it as you want.
I think it's fundamentally a new unix primitive. And I'm someone that researches this stuff (https://siliconfolklore.com/scale/ is a conference talk I gave last year).
If you know of anything else that's like this, please tell me; I haven't been able to find it.
Btw, you cannot do this through pipes: the input of the left process isn't available to the piped process on the right. You can intercept stdin, but you don't get the input file descriptor of the left process. The shell starts the two processes at the same time and then passes things through, so you can't even use the PPID cleanly without heuristic guessing. Trust me, I tried doing it that way many times. That's why nothing else works like this; you need new tricks.
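A rough analogue of the pty trick, assuming util-linux `script` is available: `script` allocates a pseudo-terminal, runs the child inside it, and sits between the real terminal and the child, seeing the bytes flowing in both directions — which is exactly what a pipe can't give you:

```shell
# Run a command under a pty and log the full session to a typescript
# file; -q suppresses the start/end banners, -c supplies the command.
script -q -c 'echo captured-under-pty' /tmp/pty-demo.log

# The log contains everything that crossed the pty, output and keystrokes
# alike, because `script` owns both sides of it.
grep -c 'captured-under-pty' /tmp/pty-demo.log
```

`script` only records, of course; the tool described above goes further by interpreting that stream and injecting responses back into it, but the pty ownership is the same underlying mechanism.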
I intend to package this up and release it in the next few days.
Yeah, it might be problematic, but people make mistakes like this all the time, which is why rm -Rf /home /user/.file is a well-known rite of passage.
But then I realise that I do enough sensitive stuff on the terminal that I don't really want this unless I have a model running locally.
Then I worry about all the times I have seen a junior run a command from the internet and bricked a production server.
Get rid of this bit, so the user asks a question and gets a command.
Make it so the user can ask a follow-up question if they want, but this is just noise, taking up valuable terminal space:
Do you want to execute this command? [Y]es/No/Edit
Perhaps also add an "Explain" option, because for some commands it is not immediately obvious what they do (or are supposed to do).

This seems… like an amazing attack vector. Hope it integrates with litellm/ollama without fuss so I can run it locally.
But well done for launching (the following is not hate, but onboarding feedback)
Who else had issues with the API key?
1. What is a TMUXAI_OPENROUTER_API_KEY? Is it like an OpenAI key?
2. If it's an API key for TMUXAI, where do I find it? I can't see it on the website (probably haven't searched properly, but why make me search?).
3. SUPER simple instructions to install, but ZERO (discoverable) instructions on where/how to find and set the API key.
4. When running tmuxai, instead of just telling me I need an API key, how about putting an actual link to where I can find one?
Again, well done for launching... I'm sure it took hard work and effort.
Just to answer your questions: it's an OpenAI-API-compatible service, and you can generate a key here: https://openrouter.ai/settings/keys
I also recently added to the readme how you can use OpenAI, Claude, or others.
It was created by one of my colleagues, Nathan Cooper.
https://www.answer.ai/posts/2024-12-05-introducing-shell-sag...
https://github.com/alvinunreal/tmuxai/issues/6#issuecomment-...
to add these lines:
```
openrouter:
  api_key: "dummy_key"
  model: gemma3:4b
  base_url: http://localhost:11434/v1
```
TBH I found the whole thing quite flaky even when using Gemini. I don't think I'll keep using it, although the concept was promising.
In bold in their terms is: "Do not submit sensitive, confidential, or personal information to the Unpaid Services."
Appreciate the feedback as it evolves.
You can also override it during a session with /config set max_capture_lines 1000, which increases the capture lines for the current session only.