(Did I mention that import maps only gained support in Firefox 3 months ago?)
People think they are smart and come up with ideas nobody thought of before. "This seems so simple, why don't people use this?" I'm sorry, that's not the case here. I have been doing web development as a hobby and professionally for well over a decade and have seen too much of this.
Sigh...the entire bloody point of the author is to use this for light projects, to scale the sophistication of the setup gracefully. With very obvious benefits: easy to understand, not linked to any setup so it works forever, interoperable and transferable.
Typescript isn't universally important for development. It has its advantages for large projects with many developers, but it's in no way essential. Most web projects don't use Typescript, and the very idea that it's a must-have is only a few years old and quite opinionated.
Likewise, Babel isn't important either. You don't need to use some futuristic JS feature on your simple project, you can just stick to well supported ones, thus not needing Babel.
"I have been doing web development as a hobby and professionally for well over a decade and have seen too much of this."
I've been doing web development since 1996, so that makes 27 years, if such an obnoxious statement should make anyone's opinion more important. If you need to piss on the idea of somebody scaling a web architecture proportionally to actual needs, you should reconsider who the "smart" one is.
The other side of it is it's not really clear what the author is gaining here. They're still reliant on NPM package structures, and they're still pulling in third party dependencies. They're just doing it in an overly complicated way.
In contrast, this is what I'd do to get to the same point the author does in this example: (I'm going off the top of my head here, and I don't have a computer to check this, but it should just work.)
mkdir myproject/ && cd myproject/
npm init # although I think just doing echo '{}' > package.json is enough here
npm install -D vite
npm install solid-js
# write some actual code
$EDITOR index.html
npx vite
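For completeness, the index.html in that sketch can stay tiny — Vite resolves the bare "solid-js" specifier during dev. (The snippet below is a hypothetical illustration, not the author's code.)

```html
<!doctype html>
<html>
  <body>
    <div id="app"></div>
    <script type="module">
      // Vite rewrites this bare specifier to the installed package
      import { createSignal } from "solid-js";
      const [count] = createSignal(0);
      document.getElementById("app").textContent = `count: ${count()}`;
    </script>
  </body>
</html>
```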
And now you've got basically everything that the author has, but you didn't need to write your own shell script to download modules, you don't need to do all the import mapping yourself, and it's much easier to graduate from this simple setup if you ever do need to.

I would say even many moderately complex sites/apps have no need for [shiny new feature X]. These days new JS features tend more towards nice-to-have than they do essential. Back in the days when jQuery was ubiquitous it was a different story, but things have improved quite a bit.
https://deno.land/manual@v1.30.3/basics/import_maps
Maybe add aleph too (which is similar to nextjs)
Deno won't require nearly as much tooling as nodejs, but it still has tooling for the cases you need it.
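For reference, a Deno import map is just a JSON file you point the runtime at with `deno run --import-map=import_map.json main.ts` (the package URL below is illustrative):

```json
{
  "imports": {
    "lodash": "https://esm.sh/lodash@4.17.21"
  }
}
```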
I like having a build step for code that I write. I build C, I build Go, I build C#. I don’t ship debug builds of those things.
I don’t know why I’d want to forego all of the niceties that a modern TS build brings and also ship what is essentially a debug build of JS to the browser.
Sure, for a really simple web page, I’ll just slap a script together. But for anything real, hell yes, give me a good build mechanism.
With JS, it seems like many of the negatives are much more sticky and refuse to go away. Not sure why. Perhaps it’s related to how that ecosystem has a tendency towards cyclical reinvention of the upper layers while foundational/structural issues get considerably less attention, and so they get swept under the rug more often than they’re fixed, leading them to rear their heads every so often to remind everybody of their existence.
I do mind that they install 2000 packages, though.
Nor does their message come across as "why don't people do this" to me.
> Until recently though, I wasn’t sure how to make this step feel incremental.
Seems like they genuinely just found a neat way to do a thing and wanted to share it.
"Sigh, another car user. What are you going to do when you need to transport 50 humans at the same time?"
I think you're projecting your thoughts about some other thing on to this article... this one is a "this is how I like to do things and I found a new thing that helps do it that way" post.
The tone of the article is really not like what you're suggesting.
I'll give credit to the post about the fragility of certain build tools long-term in the npm ecosystem, but I don't think that's a reason to shrug off build tools period. We just need better, more stable ones (and I think we're starting to finally get there).
Maybe the reasons are not as strong as it seems?
You're not really comparing web development 25 years ago with today, are you? Back then the most complex front end was a page with two forms. People also wrote operating systems in ed, but I don't think anyone would consider giving up their IDE/editor for ed just because complex software was built with it in the past.
So, in your opinion, how do things improve then? People see how painful some tools are, so they try to improve them. Sometimes it works, sometimes it doesn't.
The author says that he starts with a single html file and splits it or incrementally adds stuff when needed.
That course of action has proven to be a really bad idea on every non trivial web project I worked on.
Mostly for teamwork and maintainability reasons.
E.g. there is no clear project structure, so the next dev won't understand things and will do them differently. And welcome to chaotic legacy-code hell. It's like PHP all over again, but in JS.
I usually work for customers who, to some extent, know what they want. I choose an appropriate tech stack for that use case. The team can use that stack's conventions to develop the project. There is no need to grow slowly; everything can be done in parallel, and thanks to the conventions it all sticks together in the end.
This sort of work flow has a tendency to just pile incredible amounts of code in files thousands of lines long. That is going to be unmaintainable in no time.
Everyone claims they don't do that but they all lie (yes you too).
Also everyone has a chance to end up maintaining the resulting amorphous blob (could be you again).
This lacks tooling like hot reload, spell checkers, linting and so on. You will need that anyway, so why not start with it?
Also you need tooling for deployment, like config files, tests, CI and whatnot...
Lack of documentation. Since this follows your personal style, your colleagues have to learn it from you. A framework provides documentation for its conventions; working like this, the docs need to be written by the devs, and although they may claim otherwise, they usually don't.
There are a whole bunch of devs who work alone or in (very) small teams. For this type of work it’s really more of a hindrance to have an imposed structure and tooling.
We want to get to your goals as efficiently as possible. Minimal abstractions, guidelines, tooling, indirection, magic, surprises and general overhead are in order. We don’t want to struggle with questions like “how to do this in X”. We already know how to do it, so we just do it.
As time goes on and LOC get merged we find ways to add sensible structure and compress our code.
In my case, setting up a React or Angular project is going to be more work than just writing the whole thing in vanilla JavaScript. This has the added benefit that I actually understand what's going on.
The modern JavaScript frameworks are great for large projects, but it feels like learning Django in order to make an API that goes "pong" when you do a get request. I get why this happens. Vue looked great 8 years ago, small, simple and easy to get started with. Then it grew to support more and more use cases, and larger projects and now we need a new tiny framework. Or we can accept that JavaScript has come a long way and that many of us might not need a framework.
Btw, this direct module import approach means you’ll be shipping a lot of unused code to users.
Edit: If you only need a tiny amount of progressive enhancement on top of static HTML: forget about this import map and ad hoc npm script business, use petite-vue (single script, 6KB, add a script tag like good old jQuery).
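For anyone who hasn't seen it, the whole petite-vue setup really is jQuery-grade — a sketch from memory, so treat the details as approximate:

```html
<script src="https://unpkg.com/petite-vue" defer init></script>
<div v-scope="{ count: 0 }">
  <button @click="count++">You clicked {{ count }} times</button>
</div>
```

The `init` attribute auto-mounts every `v-scope` block, so there's no build step and no mount call.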
And you still need to reimplement a bunch of stuff if you don't use oss packages. And the next maintainer will curse your family for generations because of your own custom framework.
https://gitlab.com/jmattheij/pianojacq/-/tree/master/js
This project will likely never be finished, there are always nice new things to add or requests from people, there is no commercial pressure because it is a hobby project and I don't have a boss to answer to. And even if such refactoring operations take me two weeks or more (this one I did while I was mostly just working on a laptop without access to a keyboard so it was sometimes tricky to ensure that nothing broke) in the end it is worth it to me because I am also paying the price for maintaining the code and if it is messy then I would stop working on it. Project dependencies are the absolute minimum that I could get away with.
The project moves forward in fits and starts, sometimes I work on it for weeks on end and sometimes it is dormant for months. In a commercial setting or in a much larger team I don't think this approach would work.
So much this!
Current project I'm on, I own the entire front end for a major modernisation (AKA rewrite) of a legacy application and we are working in 4 week "sprints". I'm giving myself two days every month just to refactor the code I wrote that month.
How you plan to structure a project never works out, you always find better ways as you go and as the objective and priorities change (they always do).
On another tangent, before pivoting into software development I used to be a mechanical/industrial engineer. The parallels between coding and CAD are enormous. With CAD you also need to be spending 10% of your time "refactoring" your model. It's almost exactly the same from a maintainability perspective and leaving the model with good hygiene for the next person to work on.
While I loathe files thousands of lines long, I also loathe the endless chasing of things from file to file that tends to be the result of applying big frameworks to tiny projects.
Thinking outside the node-vue-kitchen sink sort of development box is a good idea. Maybe you don't have a grand idea, maybe you just have a kinda neat idea. Punch that thing out fast and light. You can always grow it bigger, but maybe you don't need to.
Instead of taking the path of starting with DOM manipulation and then going to a framework as necessary, I've kept really trying to make raw web components work, but kept finding that I wanted just a little bit more.
I managed to get the more I wanted -- sensible template interpolation with event binding -- boiled down to a tag function in 481 bytes / 12 lines of (dense) source code, which I feel is small enough that you can copy/paste it around and not feel too bad about it. It's here if anyone cares to look: https://github.com/dchester/yhtml
function $E(tag, props, children) {
  // create the element, assign properties (incl. event handlers), append children
  const el = document.createElement(tag)
  Object.assign(el, props)
  el.append(...children)
  return el
}

Usage:

const button = $E('button', {onclick: () => alert('click!')}, [
  $E('img', {src: '/assets/icon-lightbulb.png'}, []),
  'Ding!',
])
(With thanks to 'goranmoomin: https://news.ycombinator.com/item?id=23590750)

parent.append(<p>Hello</p>);
(Mine has slightly more functionality, such as allowing styles to be passed as an object, auto-concatenating arrays of strings for class names, etc.)
That function is now one of my must-haves in new Django side projects; I usually overuse it before finally moving to a JSX-based UI library. It's great for simple DOM manipulation, and for me it seems to create a semi-simple migration path in case the code gets complex.
You get better "natural" tree-shaking from that library of lots of small little ESM modules, you don't need to rely on maintainers building their own minified bundles.
The obvious trade-off, of course, is that HTTP 1.0 wasn't well optimized for lots of little files, and even HTTP 1.1 servers haven't always been configured well for connection reuse and pipelining. Bundling is still sometimes useful for a number of reasons (whether or not minification matters, or compile-time tree-shaking makes a noticeable difference over "natural" runtime tree-shaking). Of course, all the browsers that support import maps and script type="module" also support HTTP 2, and most support HTTP 3, and those trade-off dynamics shift again with those protocols in play (the old "rules" that you must bundle for performance stop being "rules", and everything becomes a lot more complex and needs "on the ground" performance eyeballs).
Our team (at a Fortune 500 co) is pretty agile and forward-looking (compared to corporate IT), but we're still not allowed to use anything from NPM-like repos without permission, and even then it has to get a code review and be pulled into our local repo. Personally, I agree with the CTO's belief that going from NPM directly to production is sort of insane.
Note that this blog right now converts markdown posts into html at runtime, which is definitely not an efficient way of doing things. But considering that I have just a handful of people landing there, I really haven't tried to update the approach.
I wonder whether you could turn this into a static renderer by skipping the `after_render` as a separate step that only happens in the browser. You could even rename it `hydrate` if you want to use fancy modern JS lingo. The benefit would be to show content on initial page load without JS enabled as well as letting crawlers index your site (some still don't run JS).
One trick I've used is to `npm install` my dependencies as normal but bundle them with esbuild, while also generating a typescript definition file. This gives me one file I can import in vanilla JS but also lets me check my JS with typescript.
So when I got back I couldn't believe how bad the experience became versus SSR frameworks, and gladly focused on backend and devops stuff instead.
It is refreshing to see this starting to happen.
Like frogs in a slowly boiling pot, most full time js devs have no idea the shit they put up with. IMHO modern js frameworks suffer from complexity rot; they are so over engineered it is a sad joke.
A neat way to organize that mess is to just use docker for tools and use maybe a simple Makefile to call those tools. Build your docker container once, push it to a docker registry and use it many times. Also nice for CI/CD.
I did that for my website which is a simple static site that I generate with some bash scripts, pandoc, and other tools from markdown files. The scripts run in Docker. I don't use node.js for this but it doesn't matter. It's just another set of tools. The only tools I need on a laptop are docker and make to build this website.
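A sketch of that layout (the image name, tag, and build script are placeholders):

```makefile
# Tools live in a pinned image; the laptop only needs docker + make.
IMAGE := registry.example.com/site-tools:1.0

.PHONY: site
site:
	docker run --rm -v "$(PWD)":/work -w /work $(IMAGE) ./build.sh
```

Pinning the tag means a rebuild years later runs the exact same pandoc and friends.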
The issue with npm is that it pretends that everything happens inside of npm/node.js. Just npm this or that. It's a package manager. It's a build tool. And it's a way to run tools that you install via the package manager. And then almost as an afterthought there are a few run-time dependencies that it minifies into your actual application as part of some convoluted bundling toolchain (typically installed via npm).
The actual size of those run-time dependencies is surprisingly small for most applications. And of course, with modern web browsers and HTTP 2 & 3, loading them as one blob of minified JS is not necessarily even the best way to pull in dependencies anyway. You can load a lot of this stuff asynchronously these days.
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/sc...
(But it is fairly large: 53KB raw, 15KB gzipped, 32KB minified, 11KB minified+gzipped. It’s providing a lot of likely-unnecessary functionality. I’d prefer a stripped-down and reworked polyfill that can also be lazily-loaded, controlled by a snippet of at most a few hundred bytes that you can drop into the document, only loading the polyfill in the uncommon case that it’s needed—like how five years ago as part of modernising some of the code of Fastmail’s webmail, I had it fetch and execute core-js before loading the rest iff !Object.values (choosing that as a convenient baseline—see https://www.fastmail.com/blog/using-the-modern-web-platform/... for more details), so that the cost to new browsers of supporting old browsers was a single trivial branch, and maybe fifty bytes in added payload. I dislike polyfills slowing down browsers that don’t need them, by adding undue weight or extra requests.)
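The pattern described there is easy to sketch; `/polyfills.js` and `/app.js` are placeholder URLs, and `Object.values` is just a convenient baseline feature to branch on:

```javascript
// Decide which scripts to load, in order, based on one baseline feature.
// Modern browsers pay only for this trivial branch, not the polyfill bytes.
function scriptsToLoad(hasBaseline) {
  return hasBaseline ? ["/app.js"] : ["/polyfills.js", "/app.js"];
}

// Browser side: inject each script sequentially (placeholder URLs).
function boot() {
  const queue = scriptsToLoad(typeof Object.values === "function");
  const next = () => {
    const src = queue.shift();
    if (!src) return;
    const s = document.createElement("script");
    s.src = src;
    s.onload = next; // preserves order: polyfill finishes before the app loads
    document.head.append(s);
  };
  next();
}
```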
This is the sort of thing to do for lightweight projects that have a little bit of code. But SPAs are necessarily not that.
UPD:
- Preact template: https://github.com/wizzard0/naked-preact
- node/browser TypeScript loader: https://github.com/wizzard0/t348-loader
UPD you recognize a software developer a mile away by his awesome estimation skills >_> published the typescript loader too!
FROM node:19.1.0
WORKDIR /project/vite_project
RUN npm init -y
RUN npm install react react-dom
RUN npm install -g esbuild
RUN npm install vite
EXPOSE 8081
CMD ["npm","run","dev"]
The react install isn't normally there if I start light - but it shows that the path to throwing in a framework is smooth. Typically combined with:

#!/usr/bin/env bash
docker build -t vite_play -f Dockerfile.vite emp/
Obviously there is something to a bundler step happening in the background, but it is fast enough (and implicit) that it doesn't get in the way of rapid prototyping.

I don't really trust polyfills, though.
<script src="mod.js"> </script>
just how you include (or included in the past) jQuery or MathJax. I manage my namespaces like:

function Zebras() {
  let l;  // local, never exported
  let e;  // private, exposed via setter and getter
  const gete = () => e;
  const sete = (x) => { e = x; };
  function lfunc() { console.log("local function"); }
  function efunc() { console.log("exported function"); }
  // exports:
  this.efunc = efunc;
  this.gete = gete;
  this.sete = sete;
}
and include it:

<script src="mod.js"></script>
<script>
  const zr = new Zebras();
  zr.sete(4);
  console.log(zr.gete()); // 4
  zr.efunc();
  // zr.lfunc() <- not working: lfunc is private to the closure
</script>
You can export objects or functions trivially. I don't think you can expose primitives this way.

I think this method has been working for ~20 years, and will still work 20 years from now, when you can't even get your bundlers to run. But if you have to change things frequently, ES modules are probably easier to automate, test, and handle with bundlers.
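For comparison, the ES-module version of the same namespace would be (a sketch):

```javascript
// mod.mjs -- the same Zebras namespace as a plain ES module
let e; // module-private state, like the closure variable above

export const gete = () => e;
export const sete = (x) => { e = x; };
export function efunc() { console.log("exported function"); }

function lfunc() { console.log("local function"); } // not exported
```

Consumers get the same privacy guarantees, but tooling (bundlers, test runners, editors) understands the imports statically.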
mkdir -p myproject/{public,src} && cd myproject
touch src/index.js public/index.html
npm init -y && npm i -D react-scripts
# add to package.json scripts:
# "start": "react-scripts start",
# "build": "react-scripts build",
npm start
Now you've got a hot-reloading development server that can handle (and transpile) ES6 / TypeScript, import JS/CSS with 'import', etc. If you want more control over webpack you can 'eject' to be able to edit the webpack configs. Note that you don't need to use React to use react-scripts. I just want to get up to speed fast, without doing a 4hr webpack deep dive.

Hot reloading definitely saves me some extra RSI.
Esbuild + a script in package.json and you’re done for modern front end. Maybe run a tailwind watcher.
> When I need to add a dependency, I invoke download-package and then declare it in the import map. I like this setup because it feels like I’m only using the bits of the NodeJS ecosystem
That's using node/npm, but with extra steps...
Anyway, I kinda like the importmap thing; it would have been better if he hadn't referenced solidjs, but that's life.
edit: In the same vein; cross-linking to "Writing JavaScript without a build system (jvns.ca)" https://news.ycombinator.com/item?id=34825676
but bold move cotton
nice to see there is a polyfill
I’ll think about it for my next side project, but I’ll probably just do something I can add to my React + Node portfolio on github because recruiters and employers won’t know about this until 2027
"Ideally, I’d just grab the framework files, import them from my JavaScript, and then carry on with my file:// URL."
Does this work with file:// URLs at all?
I see in the article that paths in import maps start with /. So, the author must be using a local server, otherwise browsers will block loading the modules by CORS policy. Is there something I'm not getting here?
I mean, you can literally just download the files to a folder on your computer. It's not like it stops working just because it's not hosted on a javascript cdn
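For context, the import-map setup in question looks roughly like this (paths hypothetical), and `/`-rooted specifiers do indeed imply serving over HTTP from some local static server rather than file://:

```html
<script type="importmap">
{
  "imports": {
    "solid-js": "/node_modules/solid-js/dist/solid.js"
  }
}
</script>
<script type="module">
  import { createSignal } from "solid-js"; // resolved via the map above
</script>
```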
We have also tried this approach and I still think that it might be the future. But apart from using it on tiny projects and experiments, the DX is just subpar compared to any modern vite based “build-full” stack.
Haven't used NodeJS in a while and my current projects have no build step. But I'd like to hear some opinions on what specific threshold of complexity would lead one to add NodeJS.
"The code you provided is a shell script, commonly used in Unix-based operating systems. It uses the "set" command to set shell options, "-eo pipefail" means that the script will exit immediately if any command fails and that it will propagate any error code through a pipeline."
"The script then declares four local variables using the "local" command, which are used to construct the name and location of a file that will be downloaded using the "curl" command. The script creates a directory using "mkdir -p" and then downloads a file from a remote URL and saves it to a local file using the ">" operator."
Wouldn't this require node_modules to be publicly accessible?
I'm not strong on the frontend, which is why I'm genuinely curious.