One specific thing Utopia addresses, for me, is the need for the code and the running thing to be treated as one single interactable artifact, not two separate things.
Today we treat it as a one-way compile step: there's no way to sync the in-memory changes we make to the DOM in DevTools back to the actual code.
The fact that Utopia allows the two things to be treated as one is a huge step towards making webdev more enjoyable.
And they're in good company: SwiftUI's editor is very similar in this regard, using the code as the main thing but surrounding it with all kinds of interactable components that make writing code simpler, with cool visual autocomplete widgets & visual aids.
Before, with direct DOM manipulation, building something like this was impossible, but with the React paradigm it seems natural to have this sync between code and visuals.
Kudos to the team for pulling this off.
This is something I thought a lot about and really hope to see more in programming. Everything - every object, every concept, every program - has multiple representations, and in order to understand that thing you need to see it in multiple representations. But code is one representation, static text. Having other representations (like the laid-out website), especially that you can not just view but also edit, makes your project a lot easier to understand.
I have added Utopia to my curated list of startup tools[1] under 'Visual Programming | Low code | No code'. (Yep, it's the only heading where I also had to add alternatives.)
[1]framer.com
The Smalltalk way.
Honestly, this isn't the reason. The reason for the nocode movement is that businesses are sick of constantly trying to find technology talent for big salaries, and want to DIY. I cast no judgement on whether that's going to be a disaster or not, or for whom.
I also wouldn't underestimate the push factor from the creative folks who are technical enough to "read" code / reason in abstract patterns, but can't write it. Fun definitely comes into that, as does a learning curve that's very shallow and near-infinite.
I've grown numb to the friction, but I think this Utopia demo just reset my pain tolerance. Going back to manually mapping designs from Figma mocks to React components feels so dated. Almost like those PSD2HTML websites of the early 2000s.
The market absolutely needs a product like this.
As I'm designing an app, I don't want to just click & drag visually to get some eyeballed spacing or size value. I want a design system with consistent spacing rules. Design tools enable this somewhat, but token standardization is pushing it much further. For example, there's already a hub/API for distributing design tokens to other services: I've seen a live demo of Specify[2] pushing token changes from Figma to a GitHub PR, or creating a JIRA task, during a design-token talk by a member of the W3C tokens group.
And it's not just about design -> code. If these tokens are standardized, design tools themselves (among others) could have a standardized way of transferring designs.
Heck, even Utopia could embrace this at some point to provide you an alternative view/workflow for your already standardized design tokens.
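For anyone unfamiliar, a standardized token file (roughly following the W3C Design Tokens Community Group draft; treat the exact keys as illustrative) looks something like this:

```json
{
  "spacing": {
    "sm": { "$value": "8px", "$type": "dimension" },
    "md": { "$value": "16px", "$type": "dimension" }
  },
  "color": {
    "brand": { "$value": "#0055ff", "$type": "color" }
  }
}
```

The point is that any tool, design app, codegen pipeline, or ticketing integration can consume the same file.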
Another issue I have with the UI is that a lot of elements are greyed out even though they remain interactive. The convention is that greyed-out UI elements are not interactive given the current state of the application (wrong mode, unmet prerequisites, etc.). This application seems to violate that, and I'm still baffled as to what greying out means in this app. If elements aren't disabled, why are they greyed out?
And... about bloody time IMHO. There's a reason tools like FrontPage and Flash were so popular - those tools made the internet available to millions of people who aren't into tedious coding.
We're also trying our best to preserve intent (e.g. if you resize a flexbox element, we don't set a width on it but default to flexBasis, unless there's already a flexGrow on it).
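The rule described above can be caricatured as a tiny helper (a toy illustration of the idea only, not Utopia's actual code):

```javascript
// Toy sketch of the intent-preserving resize rule described above
// (illustrative only — not Utopia's actual implementation).
// When resizing a flex child along the main axis:
//   - if the element already has flexGrow, leave sizing to flex;
//   - otherwise set flexBasis rather than a hard width.
function resizeFlexChild(style, newMainAxisSize) {
  if (style.flexGrow != null) {
    // The element is set to grow; a fixed size would fight the layout,
    // so keep the style as-is.
    return { ...style };
  }
  // Preserve intent: prefer flexBasis over width so the element still
  // participates in flex layout.
  const { width, ...rest } = style;
  return { ...rest, flexBasis: newMainAxisSize };
}

console.log(resizeFlexChild({ width: 100 }, 150)); // { flexBasis: 150 }
console.log(resizeFlexChild({ flexGrow: 1 }, 150)); // { flexGrow: 1 }
```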
Technology stacks appear and go right ahead reinventing dependency managers and repositories; IDEs with autocompletion, refactoring & WYSIWYG editors; strong type systems; comments and schema validation for configuration file formats (JSON)...
Nice work though, looks really promising!
Edit: it's strange that the lively-kernel.org approach (the Self/Smalltalk approach) never seemed to gain any real traction. Instead we get these odd half-measures like source maps, bundled JS and whatnot.
Well, any small progress towards the '80s in the '20s is good, I guess...
It outputs React code as well. But the approach is more similar to Figma. You can create components with variants etc.
It can automatically sync changes to your local repo or create a pull request on GitHub.
If this works well, it will be amazing and I'll almost definitely use it for my next webapp.
We started with inline styles because typically that's the starting point when designing / prototyping. A lot of the features on our near-term roadmap are about abstracting those: "take this element with styles and make it a component", "refactor this manually styled thing with the best matching layout component", etc.
Our design philosophy here is to help you do what you'd do in code (heck, you can even do it in code!), but not make broad assumptions about what that is or should be. Inline styles, helper components, utility css classes, styledComponents all have their place, and it's your job as a designer / engineer to decide on where to use what. What we can do is make those changes as fast as possible, while always showing you the resulting code. No black box, no "smart" decisions.
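As a toy illustration of the "take this element with styles and make it a component" refactor mentioned above (the function name and output shape are invented, not Utopia's actual mechanism):

```javascript
// Toy sketch of extracting an inline-styled element into a named
// component — a string-level illustration of the refactor, not how a
// real tool (which would work on the AST) does it.
function extractComponent(name, tag, style) {
  const styleLiteral = JSON.stringify(style);
  return [
    `const ${name} = (props) => (`,
    `  <${tag} style={${styleLiteral}}>{props.children}</${tag}>`,
    `);`,
  ].join('\n');
}

console.log(extractComponent('Card', 'div', { padding: 16 }));
// const Card = (props) => (
//   <div style={{"padding":16}}>{props.children}</div>
// );
```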
Have you thought about supporting multiple use-cases for each component? So, for example, the component might be showing a list. I want to test (and visually design) the use-case when the list is empty, then switch and visually design when the list has one element, then when the list has a lot of elements.
Also it would be great if, in those cases, you could define not only the data but also the surrounding canvas and browser properties: test & design for mobile, laptop, dark background, old browsers, changed styles, etc.
These other libraries offer a great deal more support, such as targeting children, pseudo-classes (e.g., :hover) and pseudo-elements.
Any plans/timeline on adding support for other CSS-in-JS libraries?
Either way, this is really awesome. Huge congrats on the release. It reminds me of one of my fav talks, which I can't find right now, but it's a guy who codes up a Mario clone while toggling back and forth between the game and the code, and making changes in the game updates the source code. If anyone knows what I'm talking about, please reply here.
The critical piece almost all current-gen UI editing tools are missing is that there needs to be a sync between changes in the code and changes in the design. John Otander and Chris Biscardi built a complex system based on Babel to manage syncing visual changes to the code.
I wonder how Utopia solved this! How do you "move" the visual changes made in the editor to the code?
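I don't know Utopia's internals, but the round-trip idea can be caricatured with a toy source-rewriting function (a real implementation would parse the file with something like Babel and operate on the AST; the `data-testid` keying here is invented for illustration):

```javascript
// Toy caricature of syncing a visual edit back to source code.
// A production tool would parse the source to an AST, find the JSX
// node, and print the file back out; here we just do a targeted
// string replacement keyed on a known attribute. Extremely fragile —
// illustration only.
function syncWidthToSource(source, testId, newWidth) {
  const pattern = new RegExp(
    `(data-testid="${testId}"[^>]*width:\\s*)\\d+`
  );
  return source.replace(pattern, `$1${newWidth}`);
}

const before = '<div data-testid="hero" style={{ width: 200 }} />';
console.log(syncWidthToSource(before, 'hero', 320));
// <div data-testid="hero" style={{ width: 320 }} />
```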
You guys seem to be addressing the right pain point in the right way.
I will definitely give this a go soon
> I saw some visual builders for React, downloaded/tried a couple but they had pretty big learning curves of their own so I didn't bother taking it any further.
This was our experience with existing low-code or code-generating design tools as well – you actually have to put a lot of time into learning how to use them, time you could spend on learning actual React.
One of our guiding principles was that people who use Utopia should not need to learn anything that is proprietary to our platform.
React will be around for a while because of its popularity. It has nothing to do with JSX. Actually, Vue, especially Vue 3 with the Composition API, is much easier to work with than React. You could use JSX with Vue as well. React has quite a few leaky abstractions, which make it harder to master than Vue or Svelte. React seems easy, but it is not.
A bit of self-promotion: I've been building something similar but without the visual editing, integrating directly in your VS Code editor (and soon IntelliJ/WebStorm too). See https://reactpreview.com for more info.
Direct link to the latest beta for Visual Studio Code for HN friends: https://reactpreview.com/download/releases/reactpreview-vsco...
(it's very early days, this beta isn't even a week old, so expect to find bugs)
How would you hydrate the app with data while you dev? Feels like adding some Storybook-like features to test components could be useful.
This is precisely what the scenes are for in the storyboard.js file - it allows you to create multiple scenes that render your components with different data / props.
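The idea can be sketched framework-free (a toy illustration; Utopia's real storyboard file uses JSX scenes, and the names below are made up):

```javascript
// Toy, framework-free sketch of the "scenes" idea: render the same
// component with different props to design each use-case.
// (A real storyboard.js uses JSX — this is just the concept.)
const List = ({ items }) =>
  items.length === 0
    ? '<p>No items yet</p>'
    : `<ul>${items.map((i) => `<li>${i}</li>`).join('')}</ul>`;

const scenes = [
  { name: 'empty', props: { items: [] } },
  { name: 'one item', props: { items: ['Apples'] } },
  { name: 'many items', props: { items: ['Apples', 'Pears', 'Plums'] } },
];

// Each scene "hydrates" the component with its own data.
for (const scene of scenes) {
  console.log(`${scene.name}: ${List(scene.props)}`);
}
```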
Never heard the word "hydrate" in this context before but it sounds very… appropriate :)
I think it is: https://github.com/concrete-utopia/utopia#build-editor-vscod...
https://github.com/artf/grapesjs
Later, however, I discovered craft.js, which is basically a framework for creating systems similar to this (page/component editors). Craft.js was inspired by grapesjs, but is made specifically for React.
https://github.com/prevwong/craft.js
Of course craft.js only solves the UI-editor part, not the code parsing/generation part. Babel is an obvious choice for code generation/manipulation, but I found its imperative approach unnecessarily complicated, so I built react-ast to enable declarative and composable code generation using React.
https://github.com/clayrisser/react-ast
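I haven't verified react-ast's exact API, but the declarative-vs-imperative contrast can be sketched with a toy renderer that turns a plain element tree into code (invented names, not react-ast's API):

```javascript
// Toy sketch of declarative code generation: describe the desired
// output as a tree of plain objects and render it to source text,
// instead of imperatively mutating an AST node by node.
function renderNode(node, indent = '') {
  if (typeof node === 'string') return indent + node;
  const children = (node.children || [])
    .map((c) => renderNode(c, indent + '  '))
    .join('\n');
  return children
    ? `${indent}<${node.tag}>\n${children}\n${indent}</${node.tag}>`
    : `${indent}<${node.tag} />`;
}

const tree = {
  tag: 'div',
  children: [{ tag: 'span', children: ['Hello'] }],
};
console.log(renderNode(tree));
// <div>
//   <span>
//     Hello
//   </span>
// </div>
```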
The part I had not figured out was using the code as the source of truth for the editor and syncing it back. I definitely thought about it a lot, but postponed solving it since I had more pressing problems.
So I'm very curious about this project. How does it work? And how does it stack up against a technology like craft.js? I noticed it's not using craft.js, so I'm guessing the developers rolled their own equivalent.
I feel like the thing that would really make this over-the-top powerful is deep integration with a component library, either a company's in-house one or an open-source library. It would allow for super fast UI prototyping that would also serve as scaffolding for the full-featured product.
I imagine I will use Utopia and then tailor the code afterwards with a code-editor to add my own idiosyncrasies. Still pretty darn efficient to design UIs this way honestly. I'm pretty stoked!
https://www.youtube.com/watch?v=ouzAPLaFO7I [280 North Atlas]
- Objective-J (JavaScript equivalent of Objective-C)
- Cappuccino (Objective-J equivalent of Cocoa, from what I remember)
- Atlas (equivalent of InterfaceBuilder)
- Slides (equivalent of Keynote)
The UI builder serialized to the nib equivalent, so code wasn't the source of truth, but it was definitely not a no-code approach (code methods were exposed in their GUI builder, and you could consume the nib artifact from your code). Lastly, thanks for making this. It looks really cool.
I find it similar to Hadron App[0], which generates modern HTML and CSS. I was very excited about that tool. Unfortunately, the blog[1] and changelog[2] went silent last year.
- Extendable component library and property panel so it can support custom component libraries and styling systems
- Native integration with vscode (maybe as an extension)
- Tailwind style editor and theme-ui style editor
- Full TypeScript support
We are working on something similar at Builder.io that utilizes jsx-lite to allow visual editing right alongside code. Feel free to check it out here: https://jsx-lite.builder.io/?outputTab=react
It seems that any "logic" (conditionals, iterators, etc.) in the components will prevent the component (and its children) from being inspectable.
Is the intent to keep it "no-code" in this respect (i.e. effectively a visual JSX editor), or will you be adding the ability to inspect more "dynamic" components to make it useful for app development?
If you do have specific cases that you come across and are interested in following the development or just want to let us know of them, please do file an issue on our GitHub repo.
I've been working on a React/JSX wrapper for OpenJSCAD (a solid CAD library), where the "viewer" (that shows the rendered model) offers simple interactivity that is decoupled from the code. It would be amazing if that could be extended to allow for editing the code when you interact with parts of the model in the viewer (as you would do in traditional CAD programs).
Nice work anyways.
Failed to load resource: net::ERR_CERT_INVALID 4b99d2cfb81c64aed531.css:1
Failed to load resource: net::ERR_CERT_INVALID cdn.utopia.app/_next/static/chunks/webpack-50bee04d1dc61f8adf5b.js:1
etc. Same in Chrome and Safari.
Sometimes just the "I click on something, it opens in the code editor" is a huge time saver compared to "hunt for the element in devtools, search in VSCode for what file that component / instance lives in"
How long did it take you to arrive at this version?
Thanks! :)
Doesn't really look ready or supported out of the box for M1 MacBooks, unsurprisingly. Probably going to install this on my other machine to try it out instead.
[0] https://github.com/concrete-utopia/utopia#troubleshooting
Preview would have a dynamic ID that maps to the source JSX.
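One way such a mapping could work (a guess — the uid attribute and lookup table here are invented for illustration):

```javascript
// Toy sketch of mapping a preview element's generated uid back to its
// source JSX location. The uid scheme and table are invented.
const uidToSource = new Map([
  ['aaa', { file: 'src/App.js', line: 12, column: 6 }],
  ['bbb', { file: 'src/Card.js', line: 3, column: 4 }],
]);

// On click in the preview, read the element's uid and jump to source.
function sourceForUid(uid) {
  const loc = uidToSource.get(uid);
  if (!loc) throw new Error(`unknown uid: ${uid}`);
  return `${loc.file}:${loc.line}:${loc.column}`;
}

console.log(sourceForUid('bbb')); // src/Card.js:3:4
```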