If IDEs weren't confined to a 2D window, what would it look like? Are there any features you can think of that would make AR coding more productive than simply on a monitor?
An AR/VR iOS and macOS app for rendering arbitrary code in 3D space. Terminal-like rendering, glyph by glyph, means perfect control over every mesh and texture.
The iOS demo is fun. You can walk around your code like an art museum, draw lines of executed traces, and perform visual hierarchical search.
The tracing is more complicated. SwiftTrace is a bit tangly, and I'm still trying to think of a way for folks to "drop in" something to their code base to produce a structured set of traces. There's a file-backed array that is used to read compacted run logs, so as long as I can get the format right, it should work for most cases. I even experimented with a syntax rewriter to insert logs!
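To make the "drop in" idea concrete, here's a minimal sketch (a Python stand-in for the Swift version; `TRACE_PATH` and the record fields are invented for illustration, not the app's actual format): a decorator that appends one compact, structured record per call, which a renderer could later replay as an execution trace.

```python
import functools
import json
import time

TRACE_PATH = "run.trace"  # hypothetical output file

def traced(fn):
    """Drop-in decorator: append one structured record per call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter_ns()
        result = fn(*args, **kwargs)
        record = {"fn": fn.__qualname__, "ns": time.perf_counter_ns() - start}
        # One JSON object per line keeps the log append-only and easy to compact.
        with open(TRACE_PATH, "a") as f:
            f.write(json.dumps(record) + "\n")
        return result
    return wrapper

@traced
def add(a, b):
    return a + b

add(2, 3)  # appends a record like {"fn": "add", "ns": ...}
```

The appeal of a line-per-record format is that the file-backed array can seek and read slices of the run without parsing the whole log.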
What languages do you use? What would you like to see supported?
Thanks for asking around and letting me share
It is not obvious to me how an AR interface would make a difference other than more virtual screen real estate. You would still need a way to enter text, build code, run tests, etc. This means a keyboard and pointing device (mouse, trackpad, etc.) are still needed unless something else can do it better.
Granted, for the same cost as the Vision Pro you could get several large, high resolution monitors and have lots of screen to work on.
Text no longer needs to be the primary way of conveying programs. There are practical reasons text works best on screens, but if your coding environment is boundless then there’s no reason to believe you can’t do fancier things like direct manipulation of ASTs pretty easily. Imagine "grabbing" an AST node and attaching it to a different parent, all in space.
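To make "grabbing" concrete, here's a rough sketch using Python's `ast` module of what reparenting a node could mean under the hood (the source snippet and function names are made up):

```python
import ast

src = """
def greet():
    print("hello")

def noop():
    pass
"""

tree = ast.parse(src)
greet, noop = tree.body  # the two function definitions

# "Grab" the print statement and attach it to a different parent.
stmt = greet.body.pop(0)
greet.body.append(ast.Pass())  # keep greet syntactically valid
noop.body = [stmt]             # reparent the node

ast.fix_missing_locations(tree)
code = compile(tree, "<ast>", "exec")
ns = {}
exec(code, ns)
ns["noop"]()  # prints "hello"
```

In a spatial UI, the `pop`/reattach step would be the literal hand gesture; everything else (keeping the old parent valid, recompiling) is bookkeeping the environment could do for you.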
Beyond simple AST manipulation, the Vision Pro will probably enable Dynamicland-esque "programming spaces" where you manipulate objects in your virtual environment to construct programs.
I think it boils down to it being a “programming language”
What we need for AR / VR is “programming gestures”
This way there is no textual syntax, only visual media you manipulate via gestures. This would then get compiled to a binary which can be executed.
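As a toy illustration of gestures-as-syntax (every name here is invented), imagine each gesture mapping to an operation on a value stack, so a sequence of gestures is itself an executable program:

```python
# Toy sketch: a "gesture program" is a sequence of gesture names;
# each maps to an operation on a value stack, and running the
# sequence plays the role of compiling/executing it.
GESTURES = {
    "pinch": lambda stack: stack.append(1),                          # create a unit value
    "swipe": lambda stack: stack.append(stack.pop() + stack.pop()),  # combine two values
}

def run(program):
    stack = []
    for gesture in program:
        GESTURES[gesture](stack)
    return stack[-1]

result = run(["pinch", "pinch", "swipe"])  # -> 2
```

A real system would map gestures to richer operations than stack arithmetic, but the point stands: once gestures have defined semantics, "no syntax" still compiles down to something executable.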
I was expressing this to a friend who was involved in VR in the 80s (VPL Research) and was simultaneously elated and disheartened to learn that they had the same idea! Googling around for it now, I suppose he was telling me about "Body Electric" or "Bounce", which looks like any other 2D data flow language [0]. Maybe it was just ahead of its time. A patent about it [1] describes the problem of wires going everywhere and the need to give the user the option to hide any set of connections. I'd want to accomplish this by representing the whole connectome in 4D space, then shifting the projection into 3D to hide and reveal a subset of connections. Further visual filtering could be performed with depth-of-field focus and fog effects, controlling all these parameters to isolate the subset of the system you want to inspect.
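A minimal sketch of the 4D-to-3D idea (my own toy model, not from the patent): give each node a fourth coordinate, and only show connections whose endpoints fall inside a "focus slab" along w.

```python
# Toy model: nodes live at 4D coordinates; a slab around a chosen
# w value decides what survives the projection into 3D.

def project(nodes, w_focus, w_width):
    """Return 3D positions for nodes whose w lies in the focus slab."""
    visible = {}
    for name, (x, y, z, w) in nodes.items():
        if abs(w - w_focus) <= w_width:
            visible[name] = (x, y, z)
    return visible

nodes = {
    "parser": (0.0, 0.0, 0.0, 0.0),
    "lexer":  (1.0, 0.0, 0.0, 0.0),
    "logger": (0.0, 1.0, 0.0, 2.0),  # a "distant" subsystem
}
edges = [("parser", "lexer"), ("parser", "logger")]

shown = project(nodes, w_focus=0.0, w_width=0.5)
# An edge is drawn only if both endpoints survived the projection.
visible_edges = [(a, b) for a, b in edges if a in shown and b in shown]
```

Here the logger wiring disappears until you shift `w_focus` toward it; depth-of-field and fog would be further per-node attenuation on top of this hard cutoff.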
[0] http://www.art.net/~hopkins/Don/lang/bounce/SpaceSeedCircuit...
[1] https://patents.google.com/patent/US5588104 (bonus, figure 3 shows the dataglove as just a hand plugging into the PC)
How do you deal with references? Like defining a variable and using it later in multiple places?
But that setup definitely doesn't fit in a backpack. So the idea of duplicating this inside a spatial computing environment is really exciting, even though everything released so far doesn't quite cut it.
But aside from the portability angle, another way to leverage the possibilities would be to make the OS, probably using applied statistics/trained-model techniques, really good at helping move the 2D planes through space (basically, re-arranging the windows or "screens"). Four monitors is really all I can get on my desk. In theory I could ceiling-mount a row of three more above the main left-center-right row, but IRL that would be way too much work for too little benefit.
But in a virtual, spatial-compute environment, all kinds of things might be worth doing. Maybe I have 40 or 50 2D planes for editors, debuggers, design schematics, and a team of flying robot ninjas with jetpacks who can instantly move them in and out of position.
The main rationale for a multi-monitor setup like mine is to "see everything at once". I do know how to press Alt-Tab and use virtual desktops, and I understand some people even prefer that workflow.
But for me the ability to keep the main work front and center, but everything in my field of view and glanceable, eliminates hundreds of instances of what would otherwise be window/desktop switching, every day.
If I had Jedi powers, would I just fling a few more monitors to hover in the corner of my room near the ceiling? Sure I would. Why do I need btop taking up space on my main screens? Or my chat thing that I will only use a couple times?
The only reason is that I only have these monitors, right on my desk. In theory I could like, wall-mount more displays for those things I don't really care about, but the effort and time investment to do that is totally not worth it.
But in theory a spatial compute environment could go hog-wild putting less relevant info farther away, but still in view, using space in the room that we just can't practically make use of today. All my screens are 60cm or so from my face.
I suspect we will see novel ideas around how to position these 2D planes, and use applied statistics/trained-model techniques to make the OS better at helping manage them (in a useful, but perhaps also delightful, way) before we transcend the 2D-plane "window" metaphor.
I also don't see us avoiding the keyboard, for programming or other text-input-heavy tasks. The mouse, though... I mean, we'll see how the eye tracking works once they really ship it. I could see the mouse becoming obsolete on computers that are 20mm away from your eyeballs and know exactly what you are looking at.
If you get an error, automatically search for the answer and propose the change.
If you add a new flow uncovered by tests, propose the test.
Generally, have panes that adapt to what you are doing, and tightly couple them to it.
I could imagine looking at different zoom levels of a code file, folder, or architecture, and working primarily on abstractions, approving / rejecting the resulting proposed edits.
Strategic coding more akin to a game like Supreme Commander or Planetary Annihilation.
Using it to present stacks of information (version history, undo/redo chain)
Using it to render background information that doesn't need to be swapped into the foreground to be useful - the architecture/module that the code you're working in serves, the remote services that fulfill certain commands, the test coverage available to you in this module.
Nodes would include class/object/method nodes w/ code blocks. So, an important AR/VR UX feature would be the ability to collapse/dive-into nodes & groups of nodes much like code-outliners in 2D IDEs.
Another awesome feature would be the ability to affix dials/gauges and other displays to the outside of nodes & node groups that would provide indicators of the unit's state: How "full" is a collection node, how often/frequently was this node invoked, the health of the node (errors, exceptions, slow-execution times), etc.
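A sketch of what one such gauge might track (the fields and thresholds here are made up for illustration): fullness of a collection node, call counts, and an error-rate-based health color the AR display could render on the node's surface.

```python
from dataclasses import dataclass

@dataclass
class NodeGauge:
    """Toy model of per-node indicators for an AR code-graph display."""
    capacity: int
    items: int = 0
    calls: int = 0
    errors: int = 0

    def record_call(self, failed: bool = False):
        self.calls += 1
        if failed:
            self.errors += 1

    @property
    def fullness(self) -> float:
        """How "full" the collection node is, 0.0 to 1.0."""
        return self.items / self.capacity if self.capacity else 0.0

    @property
    def health(self) -> str:
        """Arbitrary threshold: over 10% errors renders the node red."""
        if self.calls and self.errors / self.calls > 0.1:
            return "red"
        return "green"

g = NodeGauge(capacity=10, items=4)
g.record_call()
g.record_call(failed=True)  # 1 error in 2 calls -> "red"
```

Slow-execution-time indicators would slot in the same way: accumulate timings per node, then map a percentile onto a dial position.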
But in all seriousness, you can have a tree of small code windows connected by various dependencies. So it would be much easier to see a piece of code and its uses/definitions, and thus understand a codebase's architecture.
By the way, if they manage to pull something like this off and are first to market, I guess Magic Leap, HoloLens, and whatever Meta is cooking up (if they are still doing something in that space) will very likely be pretty much dead.
One obvious thing is much more space. I.e. unlimited number of extra monitors, or entities which work as such.
nobody invests in a closed platform. i expect jetbrains to come up with something marvelous for ar/vr, but it will run on the upcoming version of Microsoft or HP glasses. you know, the only ones today that work just like an external monitor, without a locked-in ecosystem like apple's or facebook's.
the silly apps and games and such will net millions tho.
With tools like Remote Development extensions, VS Code already works superbly from a desktop or laptop, editing locally but using a remote machine (your own or some cloud thing) for the filesystem, git, compilation, dependency management, the development server if it's a web/network app, etc.
It also works pretty great on the iPad, other than the tiny screen (which yeah, is a dealbreaker IMHO, but wouldn't be on a many-screens device like Vision Pro).
It also works pretty fabulously in a web browser with that same setup (e.g. Github Codespaces, but you can roll your own, too).
I doubt we will see a great IDE that does everything locally on the device. But a great IDE client is eminently doable; it's basically already done.
The IDE is one application type where the tooling around offloading all the compute, and keeping just the UI local, has flourished. VS Code was the first to really make it just as good as, or better than, developing 100% locally. JetBrains is right behind them. Others are doing it too.
This was the only way iPadOS became usable for development. I don't expect there to be any good native IDEs for visionOS, either.
But if you can deal with requiring a network connection, and using VS Code, you will be able to use Vision Pro to develop on a remote Linux machine (or Mac or Windows, if that is your thing).
Whether JetBrains will make an IDE client for it, enable third parties to make that, or not support it at all remains to be seen, I guess.