I am a backend developer who from time to time needs to write some frontend code (ReactJS). From my limited experience, most of the work on the UI side boils down to fetching data from the server, handling multiple views in a single-page application, and writing component-specific logic (which is usually well encapsulated in reusable React components). In the majority of cases I don't even need to write new React components; I reuse ones that someone has already written and just integrate them with glue code that passes data in and handles events.
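To make "glue code" concrete, here is a minimal sketch of the kind of integration work I mean: mapping a server response onto the props of an existing component. The component name `UserList` and the response shape are made up for illustration.

```javascript
// Hypothetical glue code: adapt a GraphQL-style response to the props
// expected by a reusable <UserList> component. No real API is assumed here.
function toUserListProps(response) {
  return {
    // map server records to the { id, label } items the component renders
    items: response.data.users.map(u => ({ id: u.id, label: u.name })),
    // event handler wired up by the integrator, not by the component author
    onSelect: id => console.log("selected user", id),
  };
}
```

This is the sort of repetitive, mechanical mapping that seems like a natural target for automation.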
I wonder why nobody has tried (successfully) to automate these tasks with a graphical editor. My idea would be:
1. In the GUI, we would select an area of the screen where a new React component should be added (picked from a list of already available components). For the selected component, we would write a GraphQL query telling it how to fetch the data it should render.
2. Based on this, codegen would produce the JavaScript and HTML. At this stage I assume we would already have a fully functional application (able to fetch data from the server via GraphQL).
3. Then a Computer Vision/AI component would generate random CSS for the generated page. Based on screenshots and CV analysis, we could run something like gradient descent to converge on CSS that makes the page look very similar to what the user specified in the GUI.
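For step 1, the GUI's output could be plain data: a component reference, the selected region, and the query that feeds it. Everything in this sketch (names, fields, shape) is a made-up assumption about what such a tool might emit, not an existing format.

```javascript
// Hypothetical output of the GUI from step 1. Codegen (step 2) would
// consume a list of these placements and emit the actual React glue code.
const placement = {
  component: "UserList",                              // chosen from the palette
  region: { x: 0, y: 80, width: 320, height: 400 },   // area selected on screen
  query: `
    query Users {
      users { id name email }
    }
  `,                                                  // how the component fetches data
  propMapping: { items: "users" },                    // query result -> component prop
};
```

A declarative spec like this would keep the editor decoupled from the generated code.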
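The "something like gradient descent" in step 3 could be as simple as random local search (hill climbing): nudge CSS parameters, re-render, screenshot, and keep the candidate whose screenshot looks closest to the user's mockup. The sketch below replaces the render-and-screenshot step with a toy `score` function; the parameter vector and target values are invented for the example.

```javascript
// Hill-climbing sketch for step 3: minimize a visual-difference score over
// numeric CSS parameters. In a real system, `score` would render the page,
// take a screenshot, and compare it to the user's mockup; here it is mocked.
function optimizeCss(score, init, steps = 300) {
  let best = init;
  let bestScore = score(best);
  for (let i = 0; i < steps; i++) {
    // random nudge of every parameter by up to +/-1
    const candidate = best.map(v => v + (Math.random() - 0.5) * 2);
    const s = score(candidate);
    if (s < bestScore) {          // accept only improvements
      best = candidate;
      bestScore = s;
    }
  }
  return best;
}

// Toy example: the "mockup" wants fontSize 16px and padding 12px.
const target = [16, 12];
const score = css => css.reduce((acc, v, i) => acc + (v - target[i]) ** 2, 0);
const result = optimizeCss(score, [10, 30]);
```

Whether such a search converges on anything a designer would accept is exactly the open question I would like feedback on.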
I would like to get feedback from experienced frontend developers on why there is hardly any automation in writing web UIs. Of course, for complicated parts, developers would still need to write React components that receive their data via GraphQL.