What would be the data structures for that? I can only think of trying to replicate something like the HTML DOM, but I have a feeling something like Write for Windows 3.1 used a simpler data structure.
The real complexity is rendering all of Unicode properly, and supporting international fonts, bidi layout, vertical text, etc.
The "styling a range of text" approach is something I thought of, but you still need to somehow associate the text with the range - and vice versa - and it doesn't handle things like inserting images and other kinds of objects, since those aren't text.
You could have a document be a series of "paragraphs", each being a series of "elements" with each "element" being something like "text" (with a style), "image", etc. But then once tables enter the picture, you need to expand paragraphs to be of "table" type and each table cell is itself a self-contained "series of paragraphs" - and then start thinking about nested tables or images in tables!
Generalize that enough to avoid special cases inside special cases and you end up with more of a tree-like structure representing a DOM and less with a linear structure with range-based styling.
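A minimal sketch of that tree-shaped model, just to make it concrete - the node types here (TextRun, Image, Paragraph, Table) are illustrative names of my own, not taken from any particular editor:

```python
# Sketch of a tree-shaped document model: paragraphs hold inline elements,
# table cells hold their own "series of paragraphs", recursively.
from dataclasses import dataclass, field

@dataclass
class TextRun:
    text: str
    style: dict = field(default_factory=dict)  # e.g. {"bold": True}

@dataclass
class Image:
    src: str

@dataclass
class Paragraph:
    elements: list  # TextRun | Image

@dataclass
class TableCell:
    paragraphs: list  # a self-contained series of Paragraphs (or Tables)

@dataclass
class Table:
    rows: list  # list[list[TableCell]] - cells may contain nested Tables

# A document is a list of block-level nodes: Paragraph or Table.
doc = [
    Paragraph([TextRun("Hello "), TextRun("world", {"bold": True})]),
    Table([[TableCell([Paragraph([Image("cat.png")])])]]),
]
```

Because cells recursively contain the same block-level types, nested tables and images-in-tables fall out for free - which is exactly how this converges on a DOM-like tree.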
(Of course, then again, I don't remember Write for Windows 3.1 having tables in the first place :-P but I'm interested in alternative approaches anyway.)
EDIT: one thing I forgot to mention - and the reason I'm curious about non-DOM-based approaches - is selection. With a linear/range-based structure the selection is just one or two indices into the buffer, but with a DOM a selection can start at one node with a node-specific subrange (e.g. a character offset within a text node) and end at another node, with the two endpoints being quite unrelated to each other (i.e. sharing only some distant common ancestor, and not necessarily sitting at the same level of the tree).
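One way to see the difference: a DOM-style selection endpoint is typically a path of child indices from the root plus an offset inside the target node. This is a sketch with hypothetical names, not any real editor's API:

```python
# Sketch of a DOM-style selection: each endpoint is a path from the root
# plus a character offset. With a linear buffer this would be two ints.
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    path: tuple   # child indices from the document root, e.g. (0, 2)
    offset: int   # character offset inside the node at that path

@dataclass(frozen=True)
class Selection:
    anchor: Point
    focus: Point

def common_ancestor(a: Point, b: Point) -> tuple:
    """Longest shared prefix of the two paths - possibly the root ()."""
    prefix = []
    for x, y in zip(a.path, b.path):
        if x != y:
            break
        prefix.append(x)
    return tuple(prefix)

# Endpoints in unrelated subtrees: their only common ancestor is the root.
sel = Selection(Point((0, 2), 5), Point((3, 0, 1), 0))
```

Everything between the two endpoints - walking forward from the anchor, up and over the common ancestor, down to the focus - is selected, which is why DOM selection logic is so much fiddlier than two indices.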
The docs for ProseMirror are a brilliant insight into how many of these editors are designed. ProseMirror maintains its own document model in parallel to the HTML DOM and reconciles one with the other as either changes.
For realtime collaborative rich text editors, take a look at Peritext; they have a brilliant article explaining the CRDT structures used.
Here are the Win32 docs: https://learn.microsoft.com/en-us/windows/win32/controls/ric...
The more I read about this control, the more I learn about its insane feature set! Microsoft continues to make significant improvements to a version that only ships with Microsoft Office - it's not available in a barebones Win7/10/11 install. Read more here: https://devblogs.microsoft.com/math-in-office/using-richedit...
Rich Edit control also supports the Text Object Model, which is very powerful. Read more here: https://learn.microsoft.com/en-us/windows/win32/api/tom/nn-t...
There is a text file describing the reverse engineered file format here:
https://web.archive.org/web/20130831064118/http://msxnet.org...
The TOM stuff was added later when the MSWord people took over ownership.
I worked on Ready,Set,Go! back in the day and also wrote my own styled-text editor for the Mac in the '90s.
One interesting thing about this is that you could use a lazy "adjustor" to remove duplicate or overlapping styles. There was no need to worry about it during typing or selecting: a low-priority task could analyze the runs and fix them up as needed, and no one was any the wiser.
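A minimal sketch of that lazy cleanup pass, assuming a simple run-length representation of (length, style) pairs - the representation is hypothetical, not the one RSG actually used:

```python
# Lazy "adjustor" sketch: a low-priority pass that coalesces adjacent
# style runs with identical styles and drops empty runs left by edits.
def normalize_runs(runs):
    """runs is a list of (length, style) pairs; returns a cleaned copy."""
    out = []
    for length, style in runs:
        if length == 0:
            continue  # drop empty runs
        if out and out[-1][1] == style:
            out[-1] = (out[-1][0] + length, style)  # merge duplicates
        else:
            out.append((length, style))
    return out

# Typing and selecting can leave fragmented runs; the fixup is O(runs).
runs = [(5, "bold"), (0, "plain"), (3, "bold"), (4, "plain")]
```

Since the pass never changes which character has which style, it can run whenever the editor is idle without anyone noticing.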
IMO, the hardest part about writing a word processor back then was font management. You had to construct width tables to be able to compose lines properly. This generally involved a bunch of poorly documented APIs and peering into opaque data structures.
[1]: https://github.com/FXMisc/RichTextFX/tree/master/richtextfx/...
[2]: https://gluonhq.com/presenting-a-new-richtextarea-control/
[3]: https://github.com/gluonhq/rich-text-area/blob/master/sample...
[4]: https://github.com/DaveJarvis/keenwrite
[5]: https://www.youtube.com/watch?v=3QpX70O5S30&list=PLB-WIt1cZY...
[1] https://source.winehq.org/git/wine.git/tree/HEAD:/dlls/riche...
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.48...
The major failing point of the Piece Tree in my benchmarks is substring/query time. An idea I want to try out to speed up my Piece Tree implementation is to distinguish between Concat (metadata) and Leaf (string) nodes, just as a Rope does: store metadata in the internal nodes and Pieces at the leaves.
The reason I hope that will improve substring times is that, in a Piece Tree, the original string is reconstructed through an in-order traversal that visits the tree's internal nodes.
So, if you specify a substring range that starts one character before the root node's text and ends one character after it, you end up traversing to the rightmost node of the left subtree and the leftmost node of the right subtree (two O(log n) operations).
I'm hopeful that the depth to traverse would be shorter if the Pieces were at the leaves (as in a Rope), especially since adjacent pieces wouldn't be O(log n) apart - but my intuition might be wrong, so I want to try it out myself. Have a go at it yourself if the idea interests you.
[0]: https://github.com/microsoft/vscode/tree/main/src/vs/editor/...
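To make the substring cost concrete, here is a minimal piece-table sketch (names and representation are my own, not the linked implementation's): substring() has to walk the pieces linearly, which is exactly the scan a piece *tree* replaces with an O(log n) descent:

```python
# Minimal piece table: the document is a sequence of pieces pointing into
# the original and add buffers; substring() walks pieces left to right.
class PieceTable:
    def __init__(self, original: str):
        self.buffers = {"orig": original, "add": ""}
        # each piece: (buffer key, start offset, length)
        self.pieces = [("orig", 0, len(original))] if original else []

    def substring(self, start: int, end: int) -> str:
        out, pos = [], 0
        for buf, off, length in self.pieces:
            if pos + length <= start:
                pos += length
                continue              # piece entirely before the range
            lo = max(start - pos, 0)  # clip the range to this piece
            hi = min(end - pos, length)
            if lo < hi:
                out.append(self.buffers[buf][off + lo:off + hi])
            pos += length
            if pos >= end:
                break                 # range fully covered
        return "".join(out)

pt = PieceTable("the quick brown fox")
```

After many edits the piece list fragments, so this scan is O(pieces) per query - the motivation for putting the pieces in a balanced tree in the first place.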
Now in the world of V8 and JavaScriptCore, I don't think it's crazy at all. So much work has gone into JS runtime optimization. For heavily concurrent and memory intensive workloads, I can imagine problems though.
This would still only be an O(n) operation. The constant factor might be higher, but the complexity is the same.
I don’t buy the argument that gap buffers “are bad for multiple cursors”. I get the argument on a theoretical level, but real hardware is not theoretical. There are several operations that are theoretically faster with a hashmap or b-tree than with a vector, like insert and delete. But in practice the vector is usually faster in the real world, except for very large inputs[1]. Gap buffers are basically vectors.
Another point about multiple cursors and gap buffers: Chris Wellons' animations show the gap moving back to the first cursor every time a new character is added. But in reality you would just walk the cursors in reverse order each time, saving the trip.
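The reverse-order trick can be sketched like this - a toy gap buffer of my own (growing a full gap is omitted for brevity), not Wellons' code:

```python
# Toy gap buffer: the gap is the unused region buf[start:end]. Moving the
# gap is a memmove; inserting at the gap is O(1).
class GapBuffer:
    def __init__(self, text: str, gap: int = 16):
        self.buf = list(text) + [None] * gap
        self.start = len(text)        # first index of the gap
        self.end = len(text) + gap    # one past the last gap index

    def _move_gap(self, pos: int):
        while self.start > pos:       # shift gap left
            self.start -= 1
            self.end -= 1
            self.buf[self.end] = self.buf[self.start]
        while self.start < pos:       # shift gap right
            self.buf[self.start] = self.buf[self.end]
            self.start += 1
            self.end += 1

    def insert(self, pos: int, ch: str):
        self._move_gap(pos)           # NOTE: growing a full gap omitted
        self.buf[self.start] = ch
        self.start += 1

    def text(self) -> str:
        return "".join(self.buf[:self.start] + self.buf[self.end:])

gb = GapBuffer("abc abc abc")
# Multiple cursors at 0, 4, 8: apply the highest position first, so the
# earlier positions stay valid and the gap never jumps back past them.
for cursor in sorted([0, 4, 8], reverse=True):
    gb.insert(cursor, "*")
```

Walking from the highest cursor down means the gap only ever moves leftward through the buffer, instead of bouncing back to the first cursor after every keystroke.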
I have actually written and benchmarked a very naive and unoptimized gap buffer, and the results showed it was faster than highly optimized rope implementations on all real-world benchmarks[2], including multiple cursors.
That being said, a gap buffer is still probably not the best data structure for a text editor, because it has worse worst-case performance than something like a rope or piece tree. Even though it is faster overall, it's the tail latencies that really matter for interactive programs.
Overall I enjoyed reading the post, I find the topics fascinating and this was well presented.
[1] http://www.goodmath.org/blog/2009/02/18/gap-buffers-or-dont-...
[2] https://github.com/CeleritasCelery/rune/issues/17#issuecomme...
I miss that editor so much that I'm considering writing one some day, but I have no idea how. I could invent things myself, but I'm guessing those things were already invented back when computers were different.
https://github.com/arximboldi/ewig
https://github.com/arximboldi/immer
See the author instantly opening a ~1GB text file with async loading, paging through, copying/pasting, and undoing/redoing in their prototype “ewig” text editor about 27 minutes into their talk here:
https://m.youtube.com/watch?v=sPhpelUfu8Q
It’s backed by a “vector of vectors” data structure called a relaxed radix balanced tree:
https://infoscience.epfl.ch/record/169879/files/RMTrees.pdf
That original paper has seen lots of attention and attempts at performance improvements, such as:
Yes, this isn't encouraged enough. I often serialize my data structures as either JSON or Graphviz DOT files for visualization. It helps save an immense amount of time. You can also use the generated files for regression testing, i.e. diff the actual output with the serialized output and if they're different, then a bug was introduced.
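The DOT-dumping idea can be sketched like this - a hypothetical helper of my own, for a generic (label, children) tree:

```python
# Serialize a (label, [children]) tree to Graphviz DOT for visualization;
# the same output string can be diffed in regression tests.
def to_dot(node, lines=None, counter=None):
    """Returns DOT source when called as to_dot(tree)."""
    if lines is None:                      # top-level call: wrap the graph
        lines, counter = ["digraph g {"], [0]
        to_dot(node, lines, counter)
        lines.append("}")
        return "\n".join(lines)
    my_id = counter[0]                     # assign ids in pre-order
    counter[0] += 1
    label, children = node
    lines.append(f'  n{my_id} [label="{label}"];')
    for child in children:
        child_id = counter[0]
        to_dot(child, lines, counter)
        lines.append(f"  n{my_id} -> n{child_id};")

tree = ("root", [("left", []), ("right", [("leaf", [])])])
dot = to_dot(tree)
```

Feed the output to `dot -Tpng` to see the structure, or commit it as a golden file: if a refactor changes the serialized tree, the diff points straight at the regression.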
It's been a while, but I believe that it represented everything in a tree of n-character chunks (n = 6?). It was probably the first editor that I used that could open files of pretty much any length.
"Idioteque" is a song => by the English rock band Radiohead <=, released on their fourth album, Kid A (2000).
If I want to then edit this text, is there an efficient algorithm for figuring out the start and end index of the highlight for the edited text?
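One common answer, sketched under the assumption that the highlight is stored as plain (start, end) indices: map each index through every edit. An insertion shifts indices at or after the edit position; a deletion clamps indices inside the deleted span. (Collaborative editors do the same thing under the name "position transformation" in OT.)

```python
# Map a character index through an edit so stored highlight ranges stay
# attached to the same text after insertions and deletions.
def map_through_insert(index: int, pos: int, length: int) -> int:
    """`length` characters were inserted at `pos`."""
    return index + length if index >= pos else index

def map_through_delete(index: int, pos: int, length: int) -> int:
    """`length` characters were deleted starting at `pos`."""
    if index <= pos:
        return index
    return pos if index < pos + length else index - length

start, end = 18, 48  # a hypothetical stored highlight span
# user inserts 5 characters at position 3, before the highlight:
start = map_through_insert(start, 3, 5)
end = map_through_insert(end, 3, 5)
```

This is O(edits × highlights) done naively; editors with many ranges keep them in an interval tree (or as tree-relative offsets) so each edit only touches the ranges it overlaps.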
1. The claim that the rope is inefficient for undo/redo is based on a single example of a small edit on short strings. This isn't where ropes shine, admittedly, but they don't need to shine there, because when dealing with such small pieces of data, pretty much anything you do will be faster than the user can perceive. With larger strings, the space allocated for nodes becomes background noise as the strings themselves dominate the size.
2. The chosen solution, the piece table, looks more memory-efficient than the rope at first glance, but that's a surface-level efficiency; the eventually-chosen solution, a piece tree, is far less memory-efficient. It saves memory on the strings at the expense of tree traversals, which the VSCode article addresses with a cache, which... uses more memory. In the author's implementation there's even more memory used, because of a requirement he didn't include in his list: he wants it all to be immutable. Never mind that ropes were immutable from the start...
3. If you have a document with a lot (read: thousands) of very small edits, then the size of small strings might start to matter. So if you're going to optimize for this, optimize it. There are some fairly small optimizations that make the inefficiency concerns completely irrelevant. One is pointer packing: on a 64-bit system, pointers are 64 bits, but in practice the vast majority of systems use 48 or fewer of those bits; as it turns out, there aren't many systems with more than 2^48 bytes = 256 terabytes of RAM. This means the leading 16 bits are 0s. Trivially, this means you can store strings of up to 7 8-bit characters in the pointer itself, using the first 8 bits to signal whether it's a string or a pointer (if they're all 0s, it's a pointer) and the length of the string. All the strings in the inefficiency example fit in a 64-bit integer: "Hello", " ", and "world" are all 7 bytes or fewer, which means you're passing around 64-bit integers, by value, with no allocations necessary. In fact, this means you can either append the " " to "Hello" ("Hello ") or prepend it to "world" (" world") and still stay within 7 bytes in either case. Remember, these are now 64-bit integers being passed around by value: this is far faster than piece trees.
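The packing scheme above can be sketched using Python ints as stand-ins for tagged 64-bit pointers (illustrative only; a real implementation would do this with raw words in C or Rust):

```python
# Small-string packing: the top byte of a 64-bit word holds the string
# length (0 would mean "this is a real pointer"); the low 7 bytes hold
# the characters. Strings of <= 7 bytes need no heap allocation at all.
def pack(s: str) -> int:
    data = s.encode("ascii")
    assert 1 <= len(data) <= 7, "only strings of 1-7 bytes fit inline"
    word = len(data) << 56            # length tag in the top byte
    for i, b in enumerate(data):
        word |= b << (8 * i)          # characters in the low 7 bytes
    return word

def unpack(word: int) -> str:
    length = word >> 56               # 0 would mean: dereference instead
    return bytes((word >> (8 * i)) & 0xFF for i in range(length)).decode("ascii")

w = pack("Hello ")
```

Every value stays below 2^64, so in a systems language the "string" travels in a single register, distinguished from a real pointer by its nonzero top byte.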
4. The author treats undo/redo as a stack, but all the cool editors treat it as a tree. If you make a change, then undo it, then make another change, then undo the second change, is your first change lost? In vim/emacs, the answer is no: you can go back into the tree and find it to reapply. This means that all text is not only immutable but immortal: it has to be kept for the duration of the editing session. This enables a few more optimizations: we no longer need reference counts or garbage collection, since we aren't reclaiming the memory, and we can point into existing strings, since they'll never change. Consider the following string: "The quick brown box jumps over the lazy dog." You may have noticed a typo: "box" should be "fox". This change requires 0 buffer allocations: we keep a pointer to "The quick brown box jumps over the lazy dog." for the original string, a pointer to the same spot for "The quick brown " (with a length), a second pointer to "ox jumps over the lazy dog.", and a packed pointer (integer) for the string "f". This is pretty key, because if you're not freeing any of this memory, you need to make sure you don't allocate more than necessary!
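That zero-allocation edit can be sketched as pieces referencing the immortal original string - the (source, offset, length) representation is illustrative, not a specific editor's:

```python
# The "box" -> "fox" edit as three pieces. Two pieces are just
# (offset, length) views into the original, immortal string; only the
# one-character replacement is new (and would fit in a packed pointer).
original = "The quick brown box jumps over the lazy dog."

pieces = [
    (original, 0, 16),        # "The quick brown "
    ("f", 0, 1),              # the replacement character
    (original, 17, 27),       # "ox jumps over the lazy dog."
]

def materialize(pieces):
    """Only needed for display; the edit itself copies no buffers."""
    return "".join(src[off:off + n] for src, off, n in pieces)
```

The undo entry is just the old piece list, which still points at the same untouched string - exactly why immortality plus structural sharing keeps an undo tree cheap.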
NOTE: I'm not saying that the rope is the better structure here. There may be more requirements that weren't captured in the article which mean that piece trees really are the right answer. All I'm saying is that the article doesn't explore ropes deeply enough to write them off so quickly.