The design documents (https://github.com/google/xi-editor/tree/master/doc/rope_sci...) explaining the concepts and implementation details are a must-read for those who want to understand its core.
But I still have my reservations about input latency being attributed only to the text editor. I recently switched back to Linux (nvim running on alacritty with tmux) and the latency issues magically disappeared. During my iTerm2 (and even Terminal) period on OSX I felt horrible lag during operations that, with my current setup, finish before I even have time to blink.
The last straw was when I found iTerm2 consuming 10% of my CPU while idle; the issue only happened when any non-builtin font (i.e. anything other than Menlo, Monaco, or Courier) was used.
I must say however that things are looking a bit better in iTerm2 when switching the Metal renderer on. It's just a shame that an Nvidia GPU on OSX is necessary to compete with my $1000 Lenovo with its crappy Intel chip on OpenBSD when it comes to text editing latency.
As for using CPU when idle, there was a bug where a background thread was doing the equivalent of running `ps` over and over. That's been fixed in the last few versions, but please file issues for such things and I'll get to them as soon as I'm able.
I also recall finding Terminal.app to have lower CPU consumption, matching postit's experience.
Well, if your requirements are low and the defaults are OK, sure, Terminal.app can do it. But iTerm2 does so many more things it's not even funny.
As Ralph and others have pointed out, their rope is fundamentally a persistent data structure.
I think that one of the fundamental differences between how Rust and C++ deal with value and reference semantics makes the two approaches look more different than they are. The C++ API is allowed to look a bit more "immutable": one can always use the same functional-looking API, even though sometimes the performance of an operation will depend on the value category of the reference being operated on (l-value, r-value). Rust forces the user to be explicit about everything all the time, making it very clear where we intend to "branch" or "snapshot" the persistent data structure and where we want to allow "in place" updates.
I am still undecided about which approach I like most, and I do think that there are merits to each. But maybe that's just because I never really wrote anything serious in Rust. The C++ approach has the problem that it becomes unsafe once you introduce `std::move` (but if the C++ type system were not a crazy evolutionary monster, one could imagine a similar approach with moves being inserted automatically and safely by the compiler).
There is also a plan[1] for making Alacritty's latency best-in-class.
> When people measure actual end-to-end latency for games on normal computer setups, they usually find latencies in the 100ms range.
And in that very sentence they link to a latency test for a game that is notorious for being laggy, and even then it reaches only 59.4ms of latency. The same video they link even says (at 1:32) that Overwatch has a button-to-pixel-change latency of about 15 ms! So where is this "usually 100ms" coming from?
Just because of this I don't really trust the rest of the article.
I've actually been using nvim as my terminal multiplexer instead of using tmux.
Also thanks to the awesome Recurse Center for inviting me to speak and making the recording, and the audience for their great questions.
I'm in the process of learning Kakoune (1), which happens to have a JSON-RPC API as well (2). Kak is a fully terminal-based app, which doesn't give it any edge on latency (building low-latency terminals and shells seems to be a hard task). Hopefully Kak's take on Vi's commands as a selection-oriented language can be leveraged from within the Xi editing framework in some capacity.
(1)[http://kakoune.org] (2)[https://github.com/mawww/kakoune/wiki/JSON-RPC]
How soon do you think it will be before more product-level thinking can be brought into the mix?
(Edit: I see I'm not the only one to ask a question along these lines.)
Thanks!
Monolithic machine executable that fits into under a meg of RAM and comes up in a fraction of a second from a cold invocation.
Hardware rendering routinely breaks in Ubuntu (and probably other distros) because NVIDIA's driver still isn't packaged well enough to survive a kernel update, and it doesn't work over ssh or on many embedded systems anyway.
I have successfully integrated several interpreters/event loops into the same process (C++/Boost, Python, Tcl/Tk, Qt). It was a real pain to implement and felt very hacky, but it worked.
Using processes you can achieve something like a micro-service architecture. An example of where I've seen this kind of coupling is the custom transfer agents in Git LFS.
https://github.com/git-lfs/git-lfs/blob/master/docs/custom-t...
They use processes and line delimited JSON for communication.
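A minimal sketch of that framing (the field names below are illustrative, not the exact LFS message schema): each message is one JSON object terminated by a newline, so the peer can recover message boundaries with an ordinary buffered line reader, no length prefix needed.

```rust
// Frame one message as a single JSON object plus a trailing newline.
// Field names here are made up for illustration.
fn frame(event: &str, oid: &str) -> String {
    format!("{{\"event\":\"{event}\",\"oid\":\"{oid}\"}}\n")
}

fn main() {
    let msg = frame("upload", "abc123");
    assert_eq!(msg, "{\"event\":\"upload\",\"oid\":\"abc123\"}\n");

    // Splitting a stream back into messages is just line iteration.
    let stream = frame("upload", "a") + &frame("complete", "a");
    assert_eq!(stream.lines().count(), 2);
}
```

The appeal of the scheme is exactly this triviality: any language that can read stdin line by line and parse JSON can implement an agent.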
I wonder: in the talk you say that you took some ideas from the design of Chrome; do you think that browsers, in turn, could learn from the concepts in Xi?
And another question: you are using multiple languages. Doesn't that make it needlessly difficult to express the same operations in different (concurrent) parts of the system? E.g. a character update is rendered on the screen (in Swift), and simultaneously the operation is sent to the core (in Rust) to reconcile the change in the core data structures; both operations are essentially the same, but now they have to be written in a different language, which means more development work and increased likelihood of mistakes/inconsistencies (?)
PS: I'm not sure if this makes sense; I couldn't finish watching the talk but will watch the rest later!
I'd consider a different serialization format, but it doesn't seem to be on the critical path for either performance or functionality, so I feel there are a lot of other things ahead of it.
I found https://github.com/google/xi-editor/blob/master/doc/plugin.m... , but it doesn't have anything technical (or a tutorial).
Would you mind elaborating on this?
(I.e., it's not an experimental product, it's not a product at all. It's just raphlinus releasing some code.)
(FWIW, I tried compiling Xi on OS X and opening a large file in it. To its credit, it worked, but there are issues; e.g. the horizontal scrollbar is wrong, word wrapping is not working, etc.)
I'm curious about your reasoning for JSON, as compared to protocol buffers, flat-buffers, etc. I would imagine that accessibility and ease of use would be the main reason for choosing JSON, is that in fact the case?
Are there any other reasons you chose JSON over other wire formats/protocols?
I agree with your assertion in the video that this is not much overhead, but I struggle with this debate as I work to scrape every ounce of performance out of the architecture.
Edit: Asked and answered elsewhere https://news.ycombinator.com/item?id=16268332
I particularly like the "No change of state" meaning in "Z notation", apparently a formal language for specifying and modeling computer systems.
Edit: never mind, author beat me and debunked my overly-complicated analysis
The claim above needs some reliable proof, to be honest.
I am testing this claim with Sciter (https://sciter.com)... On Windows Sciter uses Direct2D/DirectWrite (with an option to render on DirectX device directly) and/or Skia/OpenGL. On Mac it uses Skia/OpenGL with an option to use CoreGraphics. On Linux Cairo or Skia/OpenGL.
Here is full screen text rendering on DirectX surface using Direct2D/DirectWrite primitives. Display is 192 ppi - 3840x2160:
https://sciter.com/temp/plain-text-syntax.png
In the window caption you can see the real FPS, which is around 500 frames per second for the whole screen in this sample. CPU consumption is 6%, since it constantly renders in a "game event loop" style. In reality (render-on-demand, as in sciter.exe) CPU consumption is 3% during kinetic scrolling (120 FPS).
As you can see, platform text rendering is quite capable of drawing text on modern hardware.
As for the problems of text layout... that's not that CPU- and memory-consuming either, to be honest.
Sciter uses custom ::mark(name) pseudo-elements that allow styling arbitrary text runs/ranges without the need to create artificial DOM elements (like a <span> for each "string" in VS Code), see: https://sciter.com/tokenizer-mark-syntax-colorizer/
To try this by yourself: get https://github.com/c-smile/sciter-sdk/blob/master/bin/32/sci... (sciter on DirectX) and sciter.dll nearby. Download https://github.com/c-smile/sciter-sdk/blob/master/samples/%2... and open it with the sciter-dx.exe.
Skia is definitely capable of good performance, as it resolves down to OpenGL draw calls, pretty much the same as Alacritty, WebRender, and now xi-mac. One thing though is that it doesn't do fully gamma-corrected alpha compositing, so it's not anywhere near pixel-accurate to CoreText rendering.
Doing proper measurement is not easy, but seems worth doing.
For more complex DOM cases you can try the https://notes.sciter.com/ application, or run it from the SDK directly: https://github.com/c-smile/sciter-sdk/blob/master/bin/32/not...
The Notes window layout resembles an IDE layout pretty closely. And Notes works on Windows, Mac, and Linux, so you can compare different native text rendering implementations (I mean, without conventional browsers' overhead).
This claim is a bit surprising to me. I was under the impression Skia is an immediate-mode renderer, which ends up issuing a lot of GL calls that could be avoided with a retained-mode renderer.
Sure, 500 fps, but that's not the important part. Latency would be. So at 500 fps with how much output latency to the display?
The editor keeps each line in a separate <text> DOM element (like <p>, but with no margins and only text inside).
So we just need to relayout one particular line in order to show a typed character.
Xi sounds like an editor I'd love to use. Latency while typing is a jarring experience.
This is so true.
I grew up on the Commodore 64. The machine was usually pretty responsive, but when I typed too quickly in my word processor it sometimes got stuck and ate a few characters. I used to think: "If computers were only fast enough so I could type without interruption...". If you'd asked me back then for a top ten list what I wished computers could do, this would certainly have been on the list.
Now, 30 years later, whenever my cursor gets stuck I like to think:
"If computers were only fast enough so I could type without interruption..."
I'm talking 100-800ms. I was so off-put, even after changing a few settings, that I returned to Notepad++ for a while.
I'd never been so disgusted at a piece of software. It's the 21st century and we're dealing with input latency in a text editor???
Eventually they seemed to fix things and it started running better, and now on Linux it runs buttery smooth and is a joy to use, but clearly they were not finished with their work by the time this article was published.
Fuchsia front-end client for Xi: https://fuchsia.googlesource.com/topaz/+/master/app/xi/
I have tried tons of editors, and I always come back to vim. Heck, I have even moved my .vimrc to be the simplest possible with the smallest number of plugins.
For instance, Notepad is the default on Windows, and I'll use that when needed on vanilla Windows machines, but for 99% of my editor needs I install and customize a number of environments (including Vim). I don't mind whether it's installed by default or not.
I know this is an issue where I work. We have 40,000-plus servers, and we aren't going to have everyone install their editor of choice on all those machines. If checking an issue on a server requires some text editing, I have to use one of the default editors.
I glanced over much of the presentation, and none of it was about the editor itself as a user would see it. I'm impressed and glad for the technical investment, but to be even a consideration for a replacement, I need a focus on how it's supposed to be an improvement, not just a substitute, for vim.
But if it's because the world needs yet another editor, just because it can be written in Rust, I'm not so sure.
Edit: So I built Xi-Mac, and the file opens in a fraction of the time ST3 takes, but once it's open, performance is just as bad or maybe even worse.
It's painfully slow, moving the cursor 1 character to the right takes like 10 seconds.
Do you want me to file a bug?
That solves a lot of the Emacs trouble iiuc. (Same for the Async first design).
> The xi editor will communicate with plugins through pipes, letting them be written in any language,
That's caused a lot of trouble for vi/vim, has it not? I get the separation of concerns, but intermingling the concerns has helped emacs in a certain way.
> That solves a lot of the Emacs trouble iiuc.
True enough. Despite the fact that it's IMNSHO the best editor out there, emacs is not without its problems. It has a lot of historical baggage.
> I get the separation of concerns, but intermingling the concerns has helped emacs in a certain way.
I agree. One of the huge features of emacs is that (above a very low level) everything is implemented in a fairly decent Lisp, and everything is available to Lisp code. How much will plugins be able to dig into one another's internals in order to get work done? Yes, monkey-patching is bad — but it's better than not being able to monkey-patch. Relying on plugin authors to expose extension points is probably not going to work in the long run: it really helps being able to define advice on functions, wrap functions, replace functions, mutate internal variables when one knows what one's doing &c.
Sounds as if it should be easy to port those apps to the browser world (WebGL + Web Assembly)? Does anybody know of any plans to do just that?
This feels more correct. So the GUI authors can work on the chrome and polish, while the editor team focuses on performance and all the nitty-gritty memory operations.
Also, I don't think we need another editor with a pluggable focus. It's not true that you need plugins in a text editor. The editor is plugged into an environment that also contains all the other things you need. It interacts via "open file", "read file", "write file", and "close file" with the other tools. It basically _IS_ the plugin for editing files.
And if you really want an editor containing plugins, there are loads of them out there. You can start with all the IDEs, you can start with Emacs, you can even start with Neovim or Atom. This is not a feature that needs another project.
"zigh" as a pronunciation seems natural to me.
I always say "ksenon", "ksülofown" (ü is a sound between -e and -u and funnily exists in German and Chinese but not English), "ksifoid".
Anyway each hint is helpful. I'll google for that.
Question re: json communication though: is there a space for something like protocol buffers here, or is that a case of YAGNI, or simply ill-suited?
Text editors running on the desktop that are based on web browsers was a big step backwards. It encouraged features/plugins but now my text editor and my chat client each take gigabytes to run!
Would you support plugins in a scripting language like Python?
error: could not find `Cargo.toml` in `/home/xyz/Downloads/xi-editor-0.2.0` or any parent directory
Solution in search of a problem.
I haven't run into an editor performance issue in more than twenty years.
I will gladly trade a hundred text editor performance fixes for one web browser performance fix.
If the issue is the claim that Eclipse/Atom/PyCharm/JetBrains products are fast enough for you, I couldn't disagree more.
If you'd follow either chrome's or firefox's changelog you'd also read about constant progress in optimizing their browser engine and dev tools.
Can you provide an example? Because I've worked on > 1MM line projects in IntelliJ with no performance issues, or at least none related to text editing.
I certainly wouldn't consider any of those toys to be more modern than vim, sublime, emacs, etc. Newer maybe, but not more modern.
So first I have to create a problem for myself by using a "modern" editor, and then instead of fixing the problem in the obvious way (going back to the normal editor that works efficiently) I should wait around for something like this.
> If you'd follow either chrome's or firefox's changelog you'd also read about constant progress in optimizing their browser engine and dev tools.
I.e. it's not obvious from the unchangingly crappy browser experience itself, so we have to convince ourselves by believing the changelog.
I'd love to see this project continue and be a success. I think there is plenty of room for yet another text editor.
Here's where I'm going to be a bit of a party pooper.
Pragmatically speaking, all of the great stuff Raph is talking about doesn't really matter. What Raph cares about are the kinds of things people who build text editors like to geek out on, and again, that's all great.
Imma let you finish, but VSCode and Atom and the entire Electron ecosystem have once again proven that sub-optimal but powerful platforms with low barriers to entry win most of the time.
I'm rarely impressed by software projects (close or open) and I'm even less often impressed by Microsoft but what they've been able to do with VSCode in such a short amount of time, and the VELOCITY with which they are continuing to move means that for all practical purposes I will probably be in an Electron based editor for the foreseeable future.
I'm not saying I don't want Xi to continue to grow and be as great as it can be, I'm just being pragmatic.
Unfortunately that's not true. If only MORE people who build text editors actually DID geek out on such things. Alas, we have 30+ years of editors that got all those things wrong.
>Imma let you finish but VSCode and Atom and the entire Electron ecosystem have once again proven that a sub-optimal but powerful platforms with low barriers to entry win most of the time
Well, they don't win with me. But even if so, I don't see why you can't have an optimal platform AND a big ecosystem. Or how json-rpc (which is what the plugins will need to talk to) is a high "barrier to entry".
The momentum there eclipses... Eclipse, as well as many other closed and open source editors.
If you don't see the Electron ecosystem as a winner today would you say they'll have won if all editors stay at their current momentum of new features and bug fixes?
In the 1980s and 1990s the i386 ISA had tremendous velocity, which is why it still exists whereas DEC Alpha does not. In contrast to the situation with microprocessor ISAs, it is practical to use a text editor without a lot of velocity -- or at least it is practical for me because of the way I use my text editor: basically, I just want something with a clean internal architecture that is as easy for me to learn how to modify as Emacs currently is.
Modifying or customizing VSCode has a low barrier to entry only if you already know web technologies. If you do not, then it has a very high barrier to entry compared to most other text editors.
Based on this video, I feel I'll almost certainly be using Xi for most of my editing in the future.
At least according to HackerRank's "2018 Developer Skills Report"[1], 67% of developers use Vim, with only 4% and 2% using VS and Atom respectively.
Stack Overflow's "Developer Survey Results"[2] from last year does align with your opinion, however. Sooo... ¯\_(ツ)_/¯
[1] http://research.hackerrank.com/developer-skills/2018/#insigh... [2] https://insights.stackoverflow.com/survey/2017#technology-mo...
But for extremely large files, I use EditPad Lite.
What looks nice about Xi is that the next cool editor can be built on top of it and leverage its stability.