Claude Code's rendering UI is where I first realized this kind of TUI is more like a DOS or Borland UI system than a command line interface.
I was poking at the CLAUDE_CODE_NO_FLICKER=1 setting when I realized what this TUI actually is: layers of stuff drawn on top of each other with terminal codes.
Ended up reading Ink, the React renderer for the terminal:
https://github.com/vadimdemedes/ink
Fascinating how it ends up looking like WordPerfect or WordStar from the past instead of pixel-based graphics.
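The "layers of stuff drawn on top of each other with terminal codes" observation can be sketched concretely. This is a minimal illustration using standard VT100/xterm control sequences (not Claude Code's or Ink's actual implementation): a TUI "repaints" a region by jumping the cursor to a row, erasing it, and redrawing.

```python
import sys

# Standard ANSI/VT100 control sequences (assumes an xterm-compatible emulator).
CSI = "\x1b["

def move_to(row: int, col: int) -> str:
    """Cursor Position (CUP): jump to 1-indexed row;col."""
    return f"{CSI}{row};{col}H"

def clear_line() -> str:
    """Erase in Line (EL) with parameter 2: wipe the whole current line."""
    return f"{CSI}2K"

def draw_status(row: int, text: str) -> str:
    """Repaint one 'layer': jump to a line, wipe it, redraw it in place."""
    return move_to(row, 1) + clear_line() + text

# Overwrite row 24 without scrolling anything else on screen.
sys.stdout.write(draw_status(24, "esc to interrupt"))
```

Repainting like this, many times per second and across overlapping regions, is where the flicker comes from when it isn't synchronized with the emulator's refresh.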
The usability for a vision-impaired user is about the same, though I remember braille pads for DOS tools (80x25) which worked better than all the screen readers that came later.
They’re attempts at pretending to have Windows (etc.) GUIs in a terminal.
Same stuff people made for DOS when Windows wasn’t common or good enough yet.
I’m not surprised they’re a disaster. Or built without understanding the abilities of the terminal they’re running on.
Actually, I think that is close to a good name for them: Terminal-based GUIs.
Some are pretty useful, for instance I like lazygit as a simple dashboard/panel for a small repo (or when making small changes to a larger one), some make me wonder what those who made them were smoking!
The less silly ones are handy when you are tinkering with a far away machine and want something a little more interactive than CLI commands and stuff connected by pipes and scripts but don't want to deal with the latency of GUI remoting. Some, though, are so badly thought out that they are slower than using a browser over long-distance X…
My objection to "TUI" is that I don't think it's clear enough for what's happening here. You could easily argue it applies to most readline-style stream CLI programs too.
Would you call a fully 3D UI in VR, not a planar in the VR world but true 3D, a GUI? It is graphical by definition. But if you talked to someone about a GUI that’s not what they’d think you’re talking about without additional context.
That’s my objection. I think TUI implies way less than what these programs are doing. Yeah it can describe them but I don’t think it should be the word for them.
That's what a terminal UI is, and has been since Emacs was a thing.
I get it that maybe the constraints of terminals force design of TUIs to be more focused on the purpose of the tool than polish, but it's not that compelling of a point to me.
But the terminal is just fundamentally the wrong basic abstraction on which to build a structured GUI, it just happens to require few enough bits to be sent over the wire that it actually works reasonably well over SSH as opposed to pushing graphics.
Not only is forwarding trivial with Wayland, it also tends to provide a better experience than X11 does.
I typically prefer CLI myself but having a TUI to manage torrents for instance was much more ergonomic.
Like, okay, they are a big step back with accessibility, but they're flickering garbage because they were vibecoded in a weekend and the TS or Python library they're built on was similarly forced upon this world.
1. There's already a large-ish community of engineers who live in the terminal e.g. Vim/Neovim/tmux/zellij/etc users. Lots of engineering tasks are accomplished by running scripts in a terminal, so it makes sense for some people to just move as much of their work there as possible. This means there's a set of users you can address with dev tools that run in a terminal.
2. Cross-platform distribution among the platforms most of those people care about — macOS and Linux — is largely a solved problem via package managers. Distributing cross-platform native apps is fragmented at best.
3. Building modern TUIs has become a lot easier thanks to the demand+distribution wins above: there's a lot of appetite for building blocks, and so lots of good options have flourished like Ink for React, Bubble Tea for Go, etc.
4. General developer distaste for the most straightforward analogue to all of this for desktop GUIs: Electron. Deservedly or not it's associated with slow, bloated applications. And if you don't use Electron, doing cross-platform anything is going to be a much harder problem than just pushing out a quick TUI app.
Successful products do seem to eventually jump the gap, like Claude Code spawning Claude Cowork and OpenCode adding OpenCode Web. But it's easier and faster to test product-market fit for dev tools with a TUI. And plenty of your users will stay there, even after you launch something else.
I still today meet users missing those old workflows. But they express it as "old text interface" aka TUI. If you listen to them you realize they mean blazing fast and shortcut driven. When you work with data entry you care about speed - not animations.
Any beginner likes eye candy. The veteran has stopped caring.
2) Constraints imposed by the terminal make all the apps look and work approximately the same - in the outside world the standards developed for UX are ignored as a matter of routine just because they can be. TUIs are in an optimum of least surprise, so to speak.
If you're going to "run command, edit command, run command", performing the edits from the terminal you're running the commands in seems reasonable/intuitive. (In contrast, for tools like VSCode, I think it's more common for terminals to take up a fraction of the screen space rather than switching it to full screen. And then developers will say they need a huge monitor).
It also seems to be that keyboard-driven programs are more commonly TUI than GUI. e.g. magit or lazygit. Or lazydocker. Or k9s.
Ever used Emacs? Or Vim? Or Mutt? Or Borland's old IDEs?
The power of the terminal is also its ubiquity, its trivial connection to remote systems, and its lack of the mountains of GUI cruft that a TUI app can just as well accumulate.
- you can use em over ssh
- they’re typically made with keyboard usage in mind, which is often an afterthought in a typical browser based UI
- other GUI options are the browser (sandboxed, obviously, not good for little personal tools), native (not dead simple compared to TUI/browser/Electron), or something like Electron (no way lmao)
I don’t seek out TUI’s instead of other solutions. But it’s so dang easy to pop open a new pane and run lazygit. And it makes you look really cool when people walk behind you
- CLI by default
- if I need a GUI, but no access to the local system: web
- if I need a (restricted) GUI with access to the local system: TUI
- else: either start a local web server, or, if nothing else works, go for a GUI toolkit
Another perspective from a project maintainer’s point of view:
The people who own and maintain the project get to decide what the status Open and Closed means in the context of a ticket. Users do not necessarily have to agree.
For example, a project maintainer may choose to assign to a status of Open the meaning “this is untriaged or we’re actively working on it”, and Closed could mean “we have looked at this ticket and determined that no team member is going to work on it right now.” In other words, Closed does not have to mean “rejected and this decision is final” but can mean “it’s not something we’re currently working on.” These semantics might not be intuitive for everyone but can be justifiable if they help the project members organize their workload.
Google generally tries to be good at accessibility and even publishes conformance reports for most of its products https://belonging.google/accessibility-conformance-reports/
If developers want to experiment with various UI configs then let them but keep a CUA in the background that can be called upon by machines and humans alike. (Unfortunately, ergonomics has never been a strong point for developers.)
They're a long way from web apps, far worse on most axes.
Back in the 90s when most SAP systems switched from AS/400 terminals to Windows NT, people reported massive losses in productivity.
I've never worked on SAP, but my mother did. Basically, she went from a fully tabular, function-key-driven workflow to holding a mouse, moving around, and clicking a lot (tabbing and F keys were lost for many functions).
She showed me how she could go ESC ESC F4 F3 TAB TAB and move across the whole system at super speed. And this was a terminal, not the actual system!
The short of the story is this:
Windows-based applications work best for discoverability and new users.
Terminal-based applications work best for fast, memory-based navigation and power users.
In fact, all successful applications for professionals/power users are built with fast paths in mind. Even Microsoft's ribbon which gets a lot of hate for some reason is an example of that, it's keyboard-driven, customizable, and discoverable at the same time.
Nope, nobody believes that. Devs say that about text documents, which is something else entirely -- and, with provisions, about single-command terminal apps (like grep, cut, ls, and so on). Nobody said it about TUIs.
Windows and panes… are free to go. The calming environment of text, which as of 2026 is still an integral part of humanity's cultural DNA, is super underexplored in this medium.
I would say the cognitive aggression of the modern web is more of a nightmare, in the truly psychedelic sense, with all its imagery and video and ads completely devoid of context.
> It isn't fair to blame TUIs.
> The real problem is that pretty much the whole stack has a terrible AX story.
> First, most GPU-rendered terminal emulators don't engage in system-provided accessibility APIs AT ALL. Because text is GPU-rendered, AX tooling can't "read" it, it just shows up as an image. This applies to Kitty, Alacritty, WezTerm. My own terminal Ghostty is AX-readable (on macOS), and so are others like iTerm2 and Terminal.app (which admittedly do it better than me, we have gaps to fill).
> Second, there are no terminal sequences or initiatives at all for TUIs to communicate AX information to the emulator, so the emulator itself can't do much more than display a blob of text to AX tooling. We need the equivalent of ARIA-style annotations but for terminal cells, runs, and regions. No such initiative exists. Even if TUIs do great things with the cursor, this is going to bite a lot of use cases.
> As an example of combining the above, I've been working on something with Ghostty where we integrate semantic prompt (OSC133) and AX APIs so that we can present each shell prompt, input, and command as structurally significant to AX tooling (rather than simply a text box where the cursor is somewhere else). This shows the importance of the relationship between terminal specs (OSC133), TUIs (which must emit OSC133), and terminal emulators (which must both understand OSC133 AND communicate it to AX APIs).
> The whole stack is rotten. And no one is earnestly trying to fix it (including me, I have limited time and I do my best but this is a WHOLE TOPIC that requires a huge amount of time and politicking the ecosystem and I don't have it, sorry).
Bonus: a simultaneously awesome and horrible reality is that AI is really helping to improve AX here. A lot of AI tooling uses/abuses AX APIs to make things happen. How is OpenAI reading your list of windows, typing into them, etc.? Accessibility frameworks! So a lot more apps are taking AX integration a lot more seriously since it's table stakes for AI to use them... Sad that it requires that, but the glass-half-full view is that more software is doing it.
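For readers unfamiliar with the OSC 133 semantic-prompt marks mentioned in the quote above, here is a minimal sketch of what a shell would emit (per the FinalTerm convention adopted by iTerm2, Ghostty, and others); the helper names here are illustrative, not from any real tool.

```python
# OSC 133 "semantic prompt" marks: A = prompt start, B = command input
# start, C = command output start, D;<exit> = command finished. Emulators
# that understand them can treat prompt, input, and output as distinct,
# structurally significant regions instead of an undifferentiated text blob.
OSC = "\x1b]"   # Operating System Command introducer
ST = "\x1b\\"   # String Terminator

def mark(kind: str) -> str:
    """Build one OSC 133 mark, e.g. mark('A') or mark('D;0')."""
    return f"{OSC}133;{kind}{ST}"

def wrap_prompt(ps1: str) -> str:
    # Bracket the prompt so the emulator knows exactly where it begins
    # and where the user's command input starts.
    return mark("A") + ps1 + mark("B")

import sys
sys.stdout.write(wrap_prompt("$ "))
```

The marks are invisible on screen; their whole value is the structure they give the emulator, which is exactly the kind of information an AX bridge could pass on to a screen reader.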
The language of accessibility – Staffnet | ETH Zurich https://ethz.ch/staffnet/en/service/communication/digital-ac...
Yes it's gronky.
But so are VT100 escape codes.
If your TUI works in conditions like these, or can at least be accessibly configured to do so, you're pretty much done accessibility-wise.
There are some quick wins to be made here: disabling spinners, gratuitous animations, and status lines; moving the hardware cursor as you navigate through menu options, even if a color- or emoji-based selection indicator is present; avoiding unnecessary re-renders (don't redraw the whole prompt if you're just echoing a single character, I'm looking at you, Python 3.13+); sorting line segments by importance (timestamps should go at the end, messages at the beginning); that sort of thing.
The real fix would involve an aria-style protocol that lets you actually communicate semantics to a screen reader, but that would require buy-in from TUIs, TUI libraries, terminal emulators, operating systems and screen readers, so it will never happen.
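One of the quick wins above, disabling spinners and gratuitous animations when they can't be usefully perceived, can be sketched with a simple heuristic. This is an assumption-laden illustration, not any particular library's API: it checks whether output is a real terminal, whether TERM is "dumb", and whether the user opted out via the NO_COLOR convention.

```python
import os
import sys

def want_decorations(stream=sys.stdout) -> bool:
    """Heuristic: skip spinners, animations, and status lines when output
    is piped (not a TTY), the terminal declares itself 'dumb', or the user
    opted out via the NO_COLOR convention (https://no-color.org)."""
    if not stream.isatty():
        return False
    if os.environ.get("TERM", "") == "dumb":
        return False
    if os.environ.get("NO_COLOR"):
        return False
    return True
```

A TUI that consults something like this before drawing its first spinner is already friendlier to screen readers, logs, and scripts than most.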
But the UX? If the goal is not to read a single line of the code churned out by these agents, and perhaps that is the point, then they are fine: type the prompt and cross your fingers.
Anything else that requires reading and changing needs an IDE of sorts. I am not saying you cannot make your workflow work with TUIs, plenty of people do. It is just not as good and flexible as full desktop applications.
[0]: I do not need any accessibility features myself.
[1]: https://diesenbacher.net/blog/entries/speaking-emacs.html
I am surprised, though, that something like "turning off the cursor" enhances accessibility.
Seems a lot more viable than trying to get new standard escape codes adopted and outputting them alongside visual content that may be flickering erratically. It probably also gets complex faster than those proposals once UIs grow more intricate, but IMO it's really hard to defend TUIs for anything but relatively simple programs, as an in-between of a CLI and a native application.
We have a term for that, it's called a CLI. For example, ed and ex are the historical CLI counterparts to vi.
A single screen with no distractions really helps me focus. That said, a dark terminal bg (which I like) kinda sucks for astigmatism (but that's not unique to TUIs).
Maybe that’s why we have them? Tbh I don’t mind them, beats some space inefficient bubbly ui showing virtually no info per screen.
OTOH, I am also a Braille user and think codex works just fine.
Maybe check the alternatives before declaring an accessibility crisis?
I wish the terminal-wg were more active. There's a bunch of odd OSCs folks have tried to create for enhancing the structure of the terminal, various ways to emit more layout-coupled semantic info. Accessibility APIs are great, but in most forms a huge chunk of their capabilities feels pretty disconnected from the actual drawing on the screen, somewhat a parallel construct to what's on screen. Using OSCs to layer in more information about what is being drawn feels more right.
Two examples, collapsible regions, semantic prompt regions, https://gitlab.freedesktop.org/terminal-wg/specifications/-/... https://gitlab.freedesktop.org/terminal-wg/specifications/-/...
But in general feels like, for all the TUI interest, not many folks are about and working together to actually figure out how to advance the terminal itself.
For example, I really dislike mouse support in TUIs. 100% of the times I used the mouse on a TUI, I wanted to copy a piece of text. If the TUI hijacks the mouse and does something different with it (e.g. vim switching into visual mode) that is just annoying.
Of course a11y is important. But it barely works on the web and we won't get perfect semantics on the terminal without a lot of work. I say the better option is to strip down the experience to the parts that work well.
tmux is great for this. Just press the prefix (Ctrl+B by default) followed by [ to enter copy mode, and the cursor can select whatever text is in the window.
TL;DR: Use the terminal cursor to indicate focus. Hide it if it bothers you, but place it where the action is. Never use something like background color alone to emulate a cursor. Use the real thing.
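The "use the real cursor" advice above can be sketched with plain VT100 sequences. This is a hypothetical menu renderer, not from any real TUI library: it paints the selection marker, then, as its last act, parks the hardware cursor on the selected row so screen readers and braille displays can find the focus.

```python
# Assumes a VT100-compatible emulator and a menu drawn starting at top_row.
CSI = "\x1b["

def render_menu(items, selected, top_row=1):
    """Return the escape-sequence string that draws the menu and leaves
    the real hardware cursor on the selected row (not just a highlight)."""
    out = []
    for i, item in enumerate(items):
        marker = "> " if i == selected else "  "
        # Jump to the row, erase it, redraw it with the selection marker.
        out.append(f"{CSI}{top_row + i};1H{CSI}2K{marker}{item}")
    # Crucial step: the final cursor move places the hardware cursor on
    # the focused item, which is what assistive tech actually tracks.
    out.append(f"{CSI}{top_row + selected};1H")
    return "".join(out)
```

A color-only highlight would look identical to a sighted user, but omitting that last cursor move is exactly the anti-pattern the TL;DR warns against.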