When you do a project from scratch, if you work on it long enough, you end up wishing you had started differently, and you refactor pieces of it. While using a framework, I sometimes have moments where I suddenly grasp the underlying reasons and advantages of doing things a certain way, but that comes once you become more of a power user, not at the start, and only if you put in the effort to question things. And other times the framework is just bad and you have to switch...
I don't think the moat of "future developers won't understand the codebase" exists anymore.
This works well for devs who write their codebase using React, etc., and also for the ones rolling their own JavaScript (which I personally prefer).
I found myself in that situation with both foreign languages and with programming languages / frameworks - understanding is much easier than creating something good. You can of course revert to a poorer vocabulary / simpler constructions (in both cases), but an "expert" speaker/writer will get a better result. For many cases the delta can be ignored, for some cases it matters.
What’s the quality like? I’d expect it to be riddled with subtly wrong explanations. Is Claude really that much better than older models (eg. GPT-4)?
Edit: Oops, just saw your other comment saying you’d verified it manually.
Yes, but have you fully verified that the generated documentation matches the code? This is like me saying I used Claude to generate a year-long workout plan. And that is lovely. But the generated thing needs to match what you wanted it for. And for that, you need verification. For all you know, half of your document is not only nonsense, but it's not obvious that it's nonsense until you run the relevant code and see the mismatch.
I wasn't able to get into your 'startup ideas' site.
Signing in with google led to internal server error, and signing in with a password, I never received the verification email.
Thought I would let you know. Can't wait to get those sweet startup ideas....!
But ya, I hate when people say they don't like "magic." It's not magic, it's programming.
Yes, it's not magic as in Merlin or Penn and Teller. But it is magic in the aforementioned sense, which is also what people complain about.
In my experience, among the personality types of programmers, both laborers and artists are opposed to reading guides: the laborers out of laziness, and the artists out of a high susceptibility to boredom, since most guides are not written to the intellectually engaging level of SICP.
Craftsmen are naturally the type to read the guide through.
Of course, if you spend enough time in the field you end up just reading the docs, more or less, because everybody ends up adopting craftsman habits over time.
Meanwhile in JavaScript land: Node, Deno, Bun, TypeScript, JSX, all the browser implementations which may or may not support certain features, polyfills, transpiling, YOLOOOOO
I've found that taking the throwaway approach a bit further down the line pays big dividends; delaying full commitment to a particular path until you have a lot more information tends to work better.
My experience with it is that functional components always grow and end up with a lot of useEffect calls. Those useEffects make components extremely brittle and hard to reason about. Essentially it's very hard to know what parts of your code are going to run, and when.
I'm sure someone will argue: just refactor your components to be small, avoid useEffect as much as possible. I try! But I can't control for other engineers. And in my experience, nobody wants to refactor large components, because they're too hard to reason about! And the automated IDE tools aren't really built to handle refactoring these things well, so either you ask AI to do it or it's kind of clunky by hand. (WebStorm is better than VSCode at this, but neither is great.)
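To make the usual refactor concrete, here is a minimal sketch (all names hypothetical, and the React parts shown as comments so it stands alone): a useEffect that only copies derived data into state can usually be replaced by deriving the value during render, which removes the "what runs when" question entirely.

```javascript
// Anti-pattern sketch (React, as comments for contrast):
//   const [fullName, setFullName] = useState("");
//   useEffect(() => { setFullName(first + " " + last); }, [first, last]);
// Which render triggers this, and when? Hard to say once effects chain.

// Refactor: derive the value during render with a plain function.
// No effect, no extra state, no ordering to reason about.
function deriveFullName(first, last) {
  return [first, last].filter(Boolean).join(" ");
}

console.log(deriveFullName("Ada", "Lovelace")); // "Ada Lovelace"
console.log(deriveFullName("Ada", ""));         // "Ada"
```

The pure function is trivially testable on its own, which is part of why this refactor keeps getting recommended.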
The other big problem is that it's just not very efficient. I don't know why people think the virtual DOM is a performance boost. It's a performance hack to get around this being a really inefficient model. Yes, I know computers are fast, but they'd be a lot faster if we were writing with better abstractions.
Re: unrefactorable, large components, you probably want to break these down into smaller pieces. This talk ("Composition is all you need") is an excellent guide on the topic: https://youtu.be/4KvbVq3Eg5w?si=1esmAtrJthois1uf
Re: performance, people overstate the performance overhead of VDOM. Badly performing React applications are virtually always due to bad implementations. React Scan is an excellent tool for tracking down components that need optimizing: https://react-scan.com/
Re: getting other people on the team to write good code, this is the biggest struggle IMO. Frontend is hard because there's a lot of bad ways to solve a problem, and co-workers will insist that their changes work so why invest more time into building things correctly. I've only found success here by first writing an entire feature with good patterns and pointing to it as reference for other teams. People are more willing to make changes if they find precedent in the codebase.
I dunno, AI tools love adding not only useEffect but also unnecessary useMemo.
> I don't know why people think the virtual DOM is a performance boost.
It was advertised as one of the advantages when React was new: thanks to the diffing, browsers would only need to render the parts that changed instead of shoving a whole subtree into the page and then having to re-render all of it (remember, this came out in the era of jQuery and Mustache.js generating strings of HTML from templates instead of doing targeted updates).
That said, none of that is specific to the VDOM, and I think a lot of the impression that "VDOM = go fast" comes from very early marketing that was later removed. I think also people understand that the VDOM is a lightweight, quick-to-generate version of the DOM, and then assume that the VDOM therefore makes things fast, but forget about (or don't understand) the patching part of React, which is also necessary if you've got a VDOM and which is slow.
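A toy version makes both points above visible (hypothetical node shape and patch format, nothing like React's actual internals): diffing two trees reports only the parts that changed, but the diff-and-patch pass itself is extra work that a targeted update would never do.

```javascript
// Build a VDOM-style node: a plain object; children may be strings (text).
function h(type, props = {}, children = []) {
  return { type, props, children };
}

// Diff two trees, returning a list of patch operations for the changed parts.
function diff(oldNode, newNode, path = "root") {
  if (oldNode === undefined) return [{ op: "create", path, node: newNode }];
  if (newNode === undefined) return [{ op: "remove", path }];
  // Text nodes: plain strings, patched only if unequal.
  if (typeof oldNode === "string" || typeof newNode === "string") {
    return oldNode === newNode ? [] : [{ op: "text", path, text: newNode }];
  }
  if (oldNode.type !== newNode.type) {
    return [{ op: "replace", path, node: newNode }];
  }
  const patches = [];
  if (JSON.stringify(oldNode.props) !== JSON.stringify(newNode.props)) {
    patches.push({ op: "props", path, props: newNode.props });
  }
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], `${path}/${i}`));
  }
  return patches;
}

const before = h("ul", {}, [h("li", {}, ["one"]), h("li", {}, ["two"])]);
const after  = h("ul", {}, [h("li", {}, ["one"]), h("li", {}, ["2"])]);
console.log(diff(before, after));
// Only the one changed text node gets a patch:
// [ { op: 'text', path: 'root/1/0', text: '2' } ]
```

Note the diff had to walk the whole tree to find that one change; that traversal is the overhead the "VDOM = fast" framing glosses over.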
Electricity is magic. TCP is magic. Browsers are hall-of-mirrors magic. You’ll never understand 1% of what Chromium does, and yet we all ship code on top of it every day without reading the source.
Drawing the line at React or LLMs feels arbitrary. The world keeps moving up the abstraction ladder because that's how progress works; we stand on layers we don't fully understand so we can build the next ones. And yes, LLM outputs are probabilistic, but that's how random CSS rendering bugs felt to me before React took care of them.
The cost isn’t magic; the cost is using magic you don’t document or operationalize.
The key feature of magic is that it breaks the normal rules of the universe as you're meant to understand it. Encapsulation or abstraction therefore isn't, on its own, magical. Magic variables are magic because they break the rules of how variables normally work. Functional components/hooks are magic because they're a freaky DSL written in JS syntax where your code makes absolutely no sense taken as regular JS. Type hint and doctype based programming in Python is super magical because type hints aren't supposed to affect behavior.
Hmm, they aren't if you have a degree.
> Browsers are hall-of-mirrors magic
More like Chromium, with billions of LoC of C++, is magic. I think browsers shouldn't be that complex.
It’s quite magical.
how is browser formed. how curl get internent
But it is not quite the case. The hand coded solution may be quicker than AI at reaching the business goal.
If there is an elegant, crafted solution that stays in prod for 10 years and just works, it is better than an initially quicker AI-coded solution that needs more maintenance and demands a team to maintain it.
If AI (and especially bad operators of AI) codes you a city tower when you need a shed, the tower works and looks great, but now you have $500k/yr in maintenance.
Anything that can be automated can be automated poorly, but we accept that trained operators can use looms effectively.
Programming is famously non-linear. Small teams build billion-dollar companies thanks to tech choices that avoid the need to scale up headcount.
Yes you need marketing, strategy, investment, sales etc. But on the engineering side, good choices mean big savings and scalability with few people.
The loom doesn't have these choices. There is no "make a billion t-shirts a day" setting for a well-configured loom.
Now AI might end up either side of this. It may be too sloppy to compete with very smart engineers, or it may become so good that like chess no one can beat it. At that point let it do everything and run the company.
https://pomb.us/build-your-own-react/
Certain frameworks were so useful they arguably caused an explosion in productivity. Rails seems like one. React might be, too.
const element = document.createElement("h1");
element.innerHTML = "Hello";
element.setAttribute("title", "foo");
const container = document.getElementById("root");
container.appendChild(element);
I now have even less interest in ever touching a React codebase, and will henceforth consider the usage of React a code smell at best. Maybe nobody needs React, I’m not a fan. But a trivial stateless injection of DOM content is no argument at all.
<h1 title=foo>Hello</h1>
I have even less interest in touching any of your codebases! If you've only been in a world with React & co, you will probably have a more difficult time understanding the point they're contrasting against.
(I'm not even saying that they're right)
It's just such a different concept/vibe/whatever compared to modern frontend development. Brad Frost is another notable person in this overall space who's written about the changes in the field over the years.
In other words, why is one particular abstraction (e.g. JavaScript, or the web browser) OK, but another abstraction (e.g. React) not? This attitude doesn't make sense to me.
As far as x86, the 8086 (1978) through the Pentium (1993) used microcode. The Pentium Pro (1995) introduced an out-of-order, speculative architecture with micro-ops instead of microcode. Micro-ops are kind of like microcode, but different. With microcode, the CPU executes an instruction by sequentially running a microcode routine, made up of strange micro-instructions. With micro-ops, an instruction is broken up into "RISC-like" micro-ops, which are tossed into the out-of-order engine, which runs the micro-ops in whatever order it wants, sorting things out at the end so you get the right answer. Thus, micro-ops provide a whole new layer of abstraction, since you don't know what the processor is doing.
My personal view is that if you're running C code on a non-superscalar processor, the abstractions are fairly transparent; the CPU is doing what you tell it to. But once you get to C++ or a processor with speculative execution, one loses sight of what's really going on under the abstractions.
I've read the React source, and some of V8. Imagine how you'd implement hooks; you're probably not too far off. It's messier than you'd hope, but that's kind of the point of an abstraction anyway. It's really not magic, and I really dislike that term when all you're doing is building on something that is pretty easy to read and understand. V8, on the other hand, is much harder, although I will say I found the code better organised and explained than React's.
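In that spirit, here is a minimal sketch of how a useState-style hook can work (this is not React's actual implementation; all names are illustrative): state lives in an array outside the component, indexed by call order, which is exactly why hooks must be called in the same order on every render.

```javascript
// Hook state lives outside the component, keyed by call order.
let states = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;                       // position = call order
  if (states[i] === undefined) states[i] = initial;
  const setState = (v) => { states[i] = v; };
  return [states[i], setState];
}

function render(component) {
  cursor = 0;                               // reset the index before each render
  return component();
}

// A hypothetical component using the hook:
function Counter() {
  const [count, setCount] = useState(0);
  return { view: `count: ${count}`, inc: () => setCount(count + 1) };
}

let ui = render(Counter);
console.log(ui.view);                       // "count: 0"
ui.inc();
ui = render(Counter);
console.log(ui.view);                       // "count: 1"
```

Once you see the cursor reset, the "freaky DSL" rules (no hooks inside conditionals or loops) stop looking like magic and start looking like a consequence of the data structure.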
If this is true, why have more than one abstraction?
Yeah, JavaScript is an illusion (to be exact, a concept). But it’s the one that we accept as fundamental. People need fundamentals to rely upon.
I’d rather make comparative statements, like “JavaScript is more fundamental than React,” which is obviously true. And then we can all just find the level of abstraction that works for us, instead of fighting over what technology is “fundamental.”
Sure you can, why can't you? Even if it's deprecated in 20 years, you can still run it and use it, fork it even to expand upon it, because it's still JS at the end of the day, which based on your earlier statement you can code for life with.
But it does seem that a culture of complexity is more pervasive lately. Things that could have been a simple gist or a config change become a whole program that pulls in tens of dependencies from who knows who.
The big thing here is that the transformations maintain the clearly and rigorously defined semantics such that even if an engineer can't say precisely what code is being emitted, they can say with total confidence what the output of that code will be.
Don't get me wrong, I don't think you need or should need a degree to program, but if your standard of what abstractions you should trust is "all of them, it's perfectly fine to use a bunch of random stuff from anywhere that you haven't the first clue how it works or who made it" then I don't trust you to build stuff for me
Sure, obviously, we will not understand every single little thing down to the tiniest atoms of our universe. There are philosophical assumptions underlying everything, and you can question them (quite validly!) if you so please.
However, there are plenty of intermediate mental models (or explicit contracts, like assembly, ELF, etc.) to open up, both in "engineering" land and "theory" land, if you so choose.
Part of good engineering as well is deciding exactly when the boundary of "don't cares" and "cares" are, and how you allow people to easily navigate the abstraction hierarchy.
That is my impression of what people mean when they don't like "magic".
In that post, the blanks reference a compiler’s autovectorizer. But you know what they could also reference? An aggressively opaque and undocumented, very complex CPU or GPU microarchitecture. (Cf. https://purplesyringa.moe/blog/why-performance-optimization-....)
LLMs are vastly more complicated and unlike compilers we didn't get a long, slow ramp-up in complexity, but it seems possible we'll eventually develop better intuition and rules of thumb to separate appropriate usage from inappropriate.
When I design hardware interfaces one of my main rules is that user agency should be maximized where needed. That requires the manufacturer to trust (or better: ensure) that the user has a meaningful mental model of the device they are using. Your interface then has to honor this mental model at all times.
So build a hammer and show the user how to use it effectively, don't build a SmartNailPuncher3000 that may or may not work depending if the user is holding it right and has selected the wrong mode by touching the wrong part.
A couple of megabytes of JavaScript is not the "big bloated" application in 2026 that it was in 1990.
Most of us have phones in our pockets capable of 500Mbps.
The payload of an single page app is trivial compared to the bandwidth available to our devices.
I'd much rather optimise for engineer ergonomics than shave a couple of milliseconds off the initial page load.
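As a back-of-envelope check on that claim (transfer time only; real-world latency, slower mobile links, and parse/compile time are deliberately not counted here):

```javascript
// Transfer time for a hypothetical 2 MiB bundle on a 500 Mbps link.
const bundleBytes = 2 * 1024 * 1024;   // 2 MiB payload
const linkBitsPerSec = 500 * 1e6;      // 500 Mbps
const seconds = (bundleBytes * 8) / linkBitsPerSec;
console.log(`${(seconds * 1000).toFixed(1)} ms`); // "33.6 ms"
```

So on that connection the raw download really is tens of milliseconds; the counterargument is that parse/execute cost and slower networks don't show up in this arithmetic.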
The idea that React is inherently slow is totally ignorant. I'm sympathetic to the argument that many apps built with React are slow (though I've not seen data to back this up), or that you as a developer don't enjoy writing React, but it's a perfectly fine choice for writing performant web UI if you're even remotely competent at frontend development.
Autovectorization is not a programming model. This still rings true day after day.
[0] This includes, for example, int main(), which is a hook for a framework. C does a bunch of stuff in __start (e.g. on Linux; I don't know what the entry point is elsewhere) that you honestly don't want to write yourself every. single. time. For every single OS.
Granted, there are limits to how deep one should need to go in understanding their ecosystem of abstractions to produce meaningful work on a viable timescale. What effect does it have on the trade to, on the other hand, have no limit to the upward growth of the stack of tomes of magical frameworks and abstractions?
Simple: if it's magic, you don't have to do the hard work of understanding how it works in order to use it. Just use the right incantation and you're done. Sounds great as long as you don't think about the fact that not understanding how it works is actually a bug, not a feature.
That's such a wrong way of thinking. There is simply a limit on how much a single person can know and understand. You have to specialize otherwise you won't make any progress. Not having to understand how everything works is a feature, not a bug.
You not having to know the chemical structure of gasoline in order to drive to work in the morning is a good thing.
It's about layers of abstraction, the need to understand them, modify them, know what is leaking etc.
I think people sometimes substitute magic when they mean "I suddenly need to learn a lower layer I assumed was much less complex ". I don't think anyone is calling the linux kernal magic. Everyone assumes it's complex.
Another use of "magic" is when you find yourself debugging a lower layer because the abstraction breaks in some way. If it's highly abstracted and the inner loop gives you few starting points ( while (???) pickUpWorkFromAnywhere() ), it can feel kafkaesque.
I sleep just fine not knowing how much software I use exactly works. It's the layers closest to application code that I wish were more friendly to the casual debugger.
It seems common with regard to dependency injection frameworks. Do you need them for your code to be testable? No, even if it helps. Do you need them for your code to be modular? You don't, and do you really need modularity in your project? Reusability? Loose coupling?
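For illustration (all names hypothetical), plain constructor injection gets you the testability without any framework: pass dependencies in explicitly and wire them once at the entry point.

```javascript
// Plain constructor injection: no container, no annotations.
class ReportService {
  constructor(clock, store) {        // dependencies are passed in explicitly
    this.clock = clock;
    this.store = store;
  }
  dailyReport() {
    return `${this.store.count()} items as of ${this.clock.today()}`;
  }
}

// In production, you would wire real implementations at the entry point:
//   const svc = new ReportService(realClock, dbStore);

// In tests, hand in fakes; no DI framework is needed for this.
const svc = new ReportService(
  { today: () => "2026-01-01" },
  { count: () => 3 }
);
console.log(svc.dailyReport());      // "3 items as of 2026-01-01"
```

Frameworks automate the wiring when the object graph gets large, but the testability itself comes from the pattern, not the framework.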
React, which is just functions to make DOM trees and render them, is a framework? There is a reason hundreds of actual frameworks exist to add structure around using these functions.
At this point, he should stop using any high-level language! Java/Python are just big frameworks calling his bytecode, what magical frameworks!
calling a framework necessarily magic is the weird thing.
I don't see it: neither the notion that other people's code is to be avoided for its own sake, nor that depending on LLM-generated code is somehow analogous to depending on React.
> I don’t like using code that I haven’t written and understood myself.
Why stop with code? Why not refine beach sand to grow your own silicon crystal to make your own processor wafers?
Division of labor is unavoidable. An individual human being cannot accomplish all that much.
> If you’re not writing in binary, you don’t get to complain about an extra layer of abstraction making you uncomfortable.
This already demonstrates a common misconception in the field. The physical computer is incidental to computer science and software engineering per se. It is an important incidental tool, but conceptually, it is incidental. Binary is not some "base reality" for computation, nor do physical computers even realize binary in any objective sense. Abstractions are not over something "lower level" and "more real". They are the language of the domain, and we may simulate them using other languages. In this case, physical computer architectures provide assembly languages as languages in which we may simulate our abstractions.
Heck, even physical hardware like "processors" are abstractions; objectively, you cannot really say that a particular physical unit is objectively a processor. The physical unit simulates a processor model, its operations correspond to an abstract model, but it is not identical with the model.
> My control freakery is not typical. It’s also not a very commercial or pragmatic attitude.
No kidding. It's irrational. It's one thing to want to implement some range of technology yourself to get a better understanding of the governing principles, but it's another thing to suffer from a weird compulsion to implement everything yourself in practice... which he obviously isn't doing.
> Abstractions often really do speed up production, but you pay the price in maintenance later on.
What? I don't know what this means. Good abstractions allow us to better maintain code. Maintaining something that hasn't been structured into appropriate abstractions is a nightmare.
> What? I don't know what this means. Good abstractions allow us to better maintain code. Maintaining something that hasn't been structured into appropriate abstractions is a nightmare.
100% agree with this. Name it well, maintain it in one place ... profit.
It's the not abstracting up front that can catch you: the countless times I have been asked to add feature X, but told it is a one-off/PoC. Which sometimes even means it might not get the full TDD/IoC/feature-flag treatment (which aren't always available depending on the client's stack).
Then, months later, I get asked to create an entire application or feature set on top of that. Abstracting that one-off up into a method/function/class tags and bags it: it is now named and better documented, can be visible in the IDE, called from anywhere, and looped over if need be.
There is obviously a limit to where the abstraction juice isn't worth the squeeze, but otherwise, it just adds superpowers as time goes on.
No it’s not. They will get shown a collection of pixels, a bunch of which will occupy coordinates (in terms of an abstraction that holds the following promise) such that if the mouse cursor (which is yet another abstraction) matches those coordinates, a routine derived from a script language (give me an A!) will be executed mutating the DOM (give me a B!) which is built on top of more abstractions than it would take to give me the remaining S.T.R.A.C.T.I.O.N. three times over. Three might be incorrect, just trying to abstract away so that I don’t end up dumping every book on computers in this comment.
Ignorance at a not so fine level. Reads like “I’ve established myself confidently in the R.A.C. band, therefore anything that comes after is yucky yucky”.
This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.