/             2.7 kB
main.css      2.5 kB
favicon.png   1.8 kB
--------------------
Total         7.0 kB
Not bad, I think! I generate the blog listing on the home page (as well as the rest of my website) with my own static site generator, written in Common Lisp [2]. On a limited number of mathematical posts [3], I use KaTeX with client-side rendering. On such pages, KaTeX adds a whopping 347.5 kB!

katex.min.css              23.6 kB
katex.min.js              277.0 kB
auto-render.min.js          3.7 kB
KaTeX_Main-Regular.woff2   26.5 kB
KaTeX_Main-Italic.woff2    16.7 kB
----------------------------------
Total Additional          347.5 kB
Perhaps I should consider KaTeX server-side rendering someday! This has been a little passion project of mine since my university dorm room days. All of the HTML content, the common HTML template (for a consistent layout across pages), and the CSS are entirely handwritten. Also, I tend to be conservative about what I include on each page, which helps keep them small.

You could try replacing KaTeX with MathML: https://w3c.github.io/mathml-core/
I would love to use MathML, not written directly but generated automatically from LaTeX, since I find LaTeX much easier to work with than MathML. While I am writing a mathematical post, I'd much rather write LaTeX (which is almost muscle memory for me) than MathML (which tends to get deeply nested and tedious to write). However, the last time I checked, the rendering quality of MathML was quite uneven across browsers, in terms of both aesthetics and accuracy.
For example, if you check the default demo at https://mk12.github.io/web-math-demo/ you'd notice that the contour integral sign has a much larger circle in the MathML rendering (with most default browser fonts), which is quite inconsistent with how contour integrals actually appear in print.
Even if I fix that problem by loading custom web fonts, there are numerous other edge cases in MathML (spacing within subscripts, sizing within nested subscripts, etc.) that need fixing. At that point, I might as well use full KaTeX. A viable alternative is to have KaTeX or MathJax generate the HTML and CSS on the server and send that to the client; that's what I meant by server-side rendering in my earlier comment.
Why can't this be precomputed into HTML and CSS?
It can be. But like I mentioned earlier, my personal website is a hobby project I've been running since my university days. It's built with Common Lisp (CL), which is part of the fun for me. It's not just about the end result, but also about enjoying the process.
While precomputing HTML and CSS is definitely a viable approach, I've been reluctant to introduce Node or other tooling outside the CL ecosystem into this project. I wouldn't hesitate to add this extra tooling to any other project, but here I do. I like to keep the stack simple here, since this website is not just a utility; it is also my small creative playground, and I want to enjoy whatever I do here.
There is such a thing as over-optimization. All this SEO-driven performance chasing is really only worthwhile if the thing you're building gets click-through traffic in the millions of views.
It's a bit like worrying about the aerodynamics of a rowboat when you're the only one in it, and you're lost at sea, and you've got to fish for food and make sure the boat doesn't spring any leaks.
Yes, in the abstract, it's a worthwhile pursuit. But when you factor in the ratio of resources required to gain received, it is hardly ever a wise use of your energy.
ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
A web search suggests CDNs are now at 30 packets for the initial window, so you get 45 kB there.

Any reference for this?
https://news.ycombinator.com/item?id=3632765
https://web.archive.org/web/20120603070423/http://blog.benst...
The range means the MTU varies from reasonable, where you can argue that an IW of anywhere from 1 to 30 packets is good, to ridiculously small, where the IW is similarly absurd.
We would probably be better off if consumers on >1 Gbps links got higher MTUs; then an IW of 10-30 could be reasonable everywhere. MTU inside cloud providers is already higher (AWS uses 9001), so it is very possible.
A single REST request is only truly a single packet if the request and response are both < 1400 bytes. Any more than that and your “single” request is now multiple packets in each direction. Any one of them may need a retry, and they all need to arrive before the UI can update.
For practical experiments, try Chrome DevTools in 3G mode with some packet loss; you can see even “small” optimizations improving UI responsiveness dramatically.
This is one of the most compelling reasons to make APIs and UIs as small as possible.
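As a rough sketch of that packet math (assuming ~1400 usable bytes per packet after headers; the real MSS varies by path):

```python
import math

MSS = 1400  # assumed usable payload bytes per packet (typical 1500 MTU minus headers)

def packets_needed(payload_bytes: int) -> int:
    """Packets required to carry a payload, ignoring retransmits."""
    return max(1, math.ceil(payload_bytes / MSS))

# A tiny JSON response fits in one packet; a 50 kB one needs dozens,
# and all of them must arrive before the UI can update.
for size in (800, 14_000, 50_000):
    print(f"{size} bytes -> {packets_needed(size)} packet(s)")
```

Every extra packet is another chance for loss and a retry, which is exactly what the 3G-with-packet-loss experiment makes visible.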
What are you doing with the extra 500kB for me, the user?
> 90% of the time I'm interested in text. For most of the remainder, vector graphics would suffice.
14 kB is a lot of text and graphics for a page. What is the other 500 for?
It's fair to prefer text-only pages, but the "and graphics" is quite unrealistic in my opinion.
The modern web crossed the Rubicon on 14 kB websites a long time ago.
> ... analysis [by Cloudflare] suggests that the throttling [by Russian ISPs] allows Internet users to load only the first 16 KB of any web asset, rendering most web navigation impossible.
This is why almost all applications and websites are slow and terrible these days.
Performance isn’t seen as sexy, for reasons I don’t understand. Devs will be agog about how McMaster-Carr manages to make a usable and incredibly fast site, but they don’t put that same energy back into their own work.
People like responsive applications - you can’t tell me you’ve never seen a non-tech person frustratingly tapping their screen repeatedly because something is slow.
To add to this, performance bloat is often 'death by a thousand cuts', i.e. there isn't just one thing that makes it slow; it's the cumulative combination of many individual choices, where each choice doesn't make much difference on its own, but the cumulative effect does.
That is, if you have 100 code changes, each adding 'just' 10 milliseconds, suddenly you are a second slower, and yet fixing any one problem has a minimal effect.
The actual reason is almost always some business bullshit. Advertising trackers, analytics etc. No amount of trying to shave kilobytes off a response can save you if your boss demands you integrate code from a hundred “data partners” and auto play a marketing video.
Blaming bad web performance on programmers not going for the last 1% of optimization is like blaming climate change on Starbucks not using paper straws. More about virtue signaling than addressing the actual problem.
> This is why almost all applications and websites are slow and terrible these days.
But no, there are way more things broken on the web than just a lack of optimization.
But if Evan Wallace didn't obsess over performance when building Figma, it wouldn't be what it is today. Sometimes, performance is a feature.
Ahh, if only. Have you seen applications developed by large corporations lately? :)
:)
Performance matters.
We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.
Performance does in fact matter and all other things equal, a fast product is more pleasurable than a slow one.
Thankfully some people like the folks at Figma took the risk and proved the point.
Even if we're innovating on hard technical problems (which most of us are not), performance still matters.
So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.
Debian Slim is < 30 MB. Alpine, if you can live with musl, is 5 MB. The problem comes from people not understanding what containers are, and how they’re built; they then unknowingly (or uncaringly) add in dozens of layers without any attempt at reducing or flattening.
Similarly, K8s is of course just a container orchestration platform, but since it’s so easy to add to, people do so without knowing what they’re doing, and you wind up with 20 network hops to get out of the cluster.
FWIW I optimised the heck out of my personal homepage and got 100/100 for all Lighthouse scores. Which I had not previously thought possible LOL
Built in Rails too!
It's absolutely worth optimising your site though. It just is such a pleasing experience when a page loads without any perceptible lag!
Our initial page load is far bigger than 17.2 KB; it's about 120 KB of HTML, CSS, and JS. The big secret is eliminating all extra HTTP requests, and only evaluating JS code that needs to run for things "above the fold" (lazy-evaluating any script that functions below the fold, as it scrolls into view).

We lazy-load everything we can, only when it's needed. Defer any script that can be deferred. Load all JS and CSS in-line where possible. Use 'facade' icons instead of loading the 3rd-party chat widget at page load, etc. Delay loading tracking widgets if possible. The system was already built on an SSR back-end, so SSR is also a big plus here.

We even score perfect 100s with full-page hi-res video backgrounds playing at page load above the fold, but getting there was a pretty big lift, and it only works with Vimeo videos, as YouTube has become a giant pain in the ass for that.
The Google Lighthouse results tell you everything you need to know to get to 100 scores. It took a whole rewrite of our codebase to get there; the old code was never going to be refactorable. It took a whole new way of looking at the problem, using the Lighthouse results as our guide. We went from customers complaining about page speeds to being far ahead of our competition in page speed scores. And for our clients, page speed makes a big difference when it factors into SEO rankings (it's somewhat debatable whether page speed affects SEO, but that debate doesn't help with an angry client staring at a bad page speed score).
It was more a quick comment to promote Rails, as it can get dismissed as not something to build a fast website in :-)
1. There is math for how long it takes to send even one packet over a satellite connection (~1600 ms). It's a weak argument for the 14 kB rule, since there is no comparison with a larger website; 10 packets won't necessarily take 16 seconds.
2. There is a mention that images on a web page are included in this 14 kB rule. In what case are images inlined into a page's initial load? If this is a special case and 99.9% of images don't follow it, it should at the very least be mentioned.
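On point 1, the slow-start arithmetic can be sketched with a toy model (assuming an initial window of 10 segments, ~1460-byte segments, and a window that doubles each round trip; real stacks are messier):

```python
import math

def round_trips(payload_bytes: int, rtt_ms: float = 1600.0,
                init_cwnd: int = 10, mss: int = 1460) -> tuple[int, float]:
    """Round trips (and total time) to deliver a payload under
    idealized TCP slow start: the window doubles every RTT."""
    segments = math.ceil(payload_bytes / mss)
    cwnd, sent, trips = init_cwnd, 0, 0
    while sent < segments:
        sent += cwnd   # one window's worth per round trip
        cwnd *= 2      # slow start: window doubles
        trips += 1
    return trips, trips * rtt_ms

print(round_trips(14_000))   # (1, 1600.0)  fits in the first window
print(round_trips(140_000))  # (4, 6400.0)  10x the data, only 4x the time
```

So ten times the data costs four round trips rather than ten, which is why 10 packets won't necessarily take 16 seconds.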
Low resolution thumbnails that are blurred via CSS filters over which the real images fade in once downloaded. Done properly it usually only adds a few hundred bytes per image for above the fold images.
I don’t know if many bloggers do that, though. I do on my blog and it’s probably a feature on most blogging platforms (like Wordpress or Medium) but it’s more of a commercial frontend hyperoptimization that nudges conversions half a percentage point or so.
Just because everything else is bad, doesn't invalidate the idea that you should do better. Today's internet can feel painfully slow even on a 1Gbps connection because of this; websites were actually faster in the early 2000s, during the transition to ADSL, as they still had to cater to dial-up users and were very light as a result.
I have done the hyper optimised, inline resource, no blocking script, hand minimised JS, 14kb website thing before and the problem with doing it the "hard" way is it traps you in a design and architecture.
When your requirements change all the minimalistic choices that seemed so efficient and web-native start turning into technical debt. Everyone fantasises about "no frameworks" until the project is no longer a toy.
Whereas the isomorphic JS frameworks let you have your cake and eat it: you can start with something that spits out compiled pages and optimise it to get performant _enough_, but you can fall back to thick client JavaScript if necessary.
EDIT: some replies missed my point. I am not claiming this particular optimization is the holy grail, only that I'd have liked the added benefit of reduced energy consumption to be mentioned.
I am all for efficiency, but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
It's a small thing, but as you say internet video is relatively heavy.
To reduce my AI footprint I use the udm=14 trick[1] to kill AI in Google search. It generally gives better results too.
For general web browsing the best single tip is running uBlock Origin. If you can master medium[2] or hard mode (which will require un-breaking/whitelisting sites) it saves more bandwidth and has better privacy.[3]
To go all-out on bandwidth conservation, LocalCDN[4] and CleanURLs[5] are good. "Set it and forget it," improves privacy and load times, and saves a bit of energy.
Sorry this got long. Cheers
[0] https://greasyfork.org/whichen/scripts/23661-youtube-hd
[1] https://arstechnica.com/gadgets/2024/05/google-searchs-udm14...
[2] https://old.reddit.com/r/uBlockOrigin/comments/1j5tktg/ubloc...
Is it? My front end engineer spending 90 minutes cutting dependencies out of the site isn’t going to deny YouTube the opportunity to improve their streaming algorithms.
Is it really? I was surprised to see that surfing newspaper websites or Facebook produces more traffic per time than Netflix or Youtube. Of course there's a lot of embedded video in ads and it could maybe count as streaming video.
Like how industrial manufacturers are the biggest carbon emitters and, compared to them, I'm just a drop in the ocean. But that doesn't mean I don't also have a responsibility to recycle, because the cumulative effect of everyone like me recycling quickly becomes massive.
Similarly, if every web host did their bit with static content, you’d still see a big reduction at a global scale.
And you’re right, it shouldn’t be the end of the story. However, that doesn’t mean it’s a wasted effort or an irrelevant optimisation.
A nice side effect of these choices is that I only spend a small part of my pay. Never had a credit card, never had debt, just saved my money until I had enough that the purchase was no big deal.
I don't really have an issue with people who say that their drop does not matter so why should they worry, but I don't understand it; it seems like they just needlessly complicate their lives. Not too long ago my neighbor was bragging about how effective all the money he spent on energy-efficient windows, insulation, etc. was; he saved loads of money that winter. Yet his heating bill was still nearly three times what mine was, despite the wood stove offsetting his bill and despite my house being almost the same size, barely insulated, and having 70-year-old windows. I just put on a sweater instead of turning up the heat.
Edit: Sorry about that sentence, not quite awake yet and doubt I will be awake enough to fix it before editing window closes.
If we really want to fix the places with bigger impact, we need to change this approach in the first place.
I've tried to really cut down my website as well to make it fairly minimal. And when I upload stuff to YouTube, I never use 4K, only 1080P. I think 4K and 8K video should not even exist.
A lot of people talk about adding XYZ megawatts of solar to the grid. But imagine how nice it could be if we regularly had efforts to use LESS power.
I miss the days when websites were very small in the days of 56K modems. I think there is some happy medium somewhere and we've gone way past it.
Calculate how much electricity you personally consume in total browsing the Internet for a year. Multiply that by 10 to be safe.
Then compare that number to how much energy it takes to produce a single hamburger.
Do the calculation yourself if you do not believe me.
On average, we developers can make a bigger difference by choosing to eat salad one day instead of optimizing our websites for a week.
Since a main argument is seemingly that AI is worse, let's remember that AI is querying these huge pages as well.
Also, the 14 kB size is less than 1% of the current average mobile website payload.
I myself have installed one single package, and it installed 196,171 files in my home directory.
If that isn't gratuitous bloat, then I don't know what is.
Creating an average hamburger requires an input of 2-6 kWh of energy, from start to finish. At 15¢ USD/kWh, this gives us an upper limit of about 90¢ of electricity.
The average 14 kB web page takes about 0.000002 kWh to serve. You would need to serve that web page about 1 to 3 million times to create the same energy demands as a single hamburger. A 14 MB web page, which would be a pretty heavy JavaScript app these days, would need only about 1,000 to 3,000.
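Checking that arithmetic with the figures assumed above (2-6 kWh per burger, 0.000002 kWh per 14 kB page view, and energy scaling linearly with page weight):

```python
BURGER_KWH = (2.0, 6.0)      # assumed energy input to produce one hamburger
PAGE_KWH_14KB = 0.000002     # assumed energy to serve one 14 kB page

def views_per_burger(page_kwh: float) -> tuple[int, int]:
    """Page views whose serving energy matches one burger's input."""
    return round(BURGER_KWH[0] / page_kwh), round(BURGER_KWH[1] / page_kwh)

print(views_per_burger(PAGE_KWH_14KB))         # (1000000, 3000000) for a 14 kB page
print(views_per_burger(PAGE_KWH_14KB * 1000))  # (1000, 3000) for a 14 MB page
```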
I think those are pretty good ways to use the energy.
Then multiply that by the number of daily visitors.
Without "hamburgers" (food in general) we die; reducing the amount of useless content on websites doesn't really hurt anyone.
For a user's access to a random web page anywhere, assuming it's not on a CDN near the user, you're looking at ~10 routers/networks involved in the connection. Did you take that into account?
(In addition to what justmarc said about accounting for the whole network: between feeding them and the indirect effects of their contribution to climate change, I suspect you're being generous about the cost of a burger.)
Instead we should be looking to nuclear power solutions for our energy needs, and not waste time with reducing website size if it's purely a function of environmental impact.
And no, reducing resource use to the minimum in the name of sustainability does not scale down the same way it scales up. You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
It's never clear to me whether people who push this line are doing so because they're bitter and want to punish other humans, or because they hate themselves. Either way, it evinces a system of thought that has already relegated humankind to the dustbin of history. If, in the long run, that's what happens, you're right and everyone else is wrong. Congratulations. It will make little difference in that case to you if the rest of us move on for a few hundred years to colonize the planets and revive the biosphere. Comfort yourself with the knowledge that this will all end in 10 or 20 thousand years, and the world will go back to being a hot hive of insects and reptiles. But what glory we wrought in our time.
Whataboutism. https://en.m.wikipedia.org/wiki/Whataboutism
> You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
Strawmanning. https://en.m.wikipedia.org/wiki/Straw_man
Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.
I know it's not the exact topic, but sometimes I think we don't need the fastest response time so much as a consistent response time. Like every single page within the site fully rendering in exactly 1 s. Nothing more, nothing less.
But this sort of goes against my no / minimal JS front end rendering philosophy.
A 14kb page can load much faster than a 15kb page - https://news.ycombinator.com/item?id=32587740 - Aug 2022 (343 comments)
HTTP/3 uses UDP rather than TCP, so TCP slow start should not apply at all.
One other advantage of QUIC is that you avoid some latency from the three-way handshake that is used in almost any TCP implementation. Although technically you can already send data in the first SYN packet, the three-way handshake is necessary to avoid confusion in some edge cases (like a previous TCP connection using the same source and destination ports).
Very relevant. A lot of websites need 5 to 30 seconds or more to load.
Doesn't this sort of undo the entire point of the article?
If the idea was to serve the entire web page in the first roundtrip, wouldn't you have lost the moment TLS is used? Not only does the TLS handshake send lots of stuff (including the certificate) that will likely get you over the 14kb boundary before you even get the chance to send a byte of your actual content - but the handshake also includes multiple request/response exchanges between client and server, so it would require additional roundtrips even if it stayed below the 14kb boundary.
So the article's advice only holds for unencrypted plain-TCP connections, which no one wants to use today.
The advice might be useful again if you use QUIC/HTTP3, because that one ditches both TLS and TCP and provides the features from both in its own thing. But then, you'd have to look up first how congestion control and bandwidth estimation works in HTTP3 and if 14kb is still the right threshold.
And TLS handshakes aren't that big, even with certificates... Although you do want to use ECC certs if you can, the keys are much smaller. The client handshake should fit in 1-2 packets, the server handshake should fit in 2-3 packets. But more importantly, the client request can only be sent after receiving the whole server handshake, so the congestion window will be refreshed. You could probably calculate how much larger the congestion window is likely to be, and give yourself a larger allowance, since TLS will have expanded your congestion window.
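A back-of-the-envelope version of that allowance (assuming an initial window of 10 segments of ~1460 bytes and, optimistically, a full doubling per handshake round trip; real growth depends on how many handshake segments are actually sent and acknowledged):

```python
def cwnd_after_handshake(handshake_rtts: int, init_cwnd: int = 10,
                         mss: int = 1460) -> int:
    """Rough congestion window (in bytes) once the TLS handshake completes,
    assuming the window doubles every round trip of the handshake."""
    return init_cwnd * (2 ** handshake_rtts) * mss

print(cwnd_after_handshake(1))  # 29200 -> ~29 kB after a 1-RTT (TLS 1.3) handshake
print(cwnd_after_handshake(2))  # 58400 -> ~58 kB after a 2-RTT (TLS 1.2) handshake
```

So the first-flight allowance after TLS is plausibly 2-4x the classic ~14 kB, though only to the extent the handshake actually grew the window.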
OTOH, the important concept is that early throughput is limited by latency and congestion control, and it takes many round trips to hit connection limits.
One way to apply that: if you double your page weight but at the same time add many more service locations and smarter traffic direction, page load times can stay about the same.
The HTTPS negotiation is going to consume the initial roundtrips which should start increasing the size of the window
Modern CDNs start with larger initial windows and also pace the packets onto the network to reduce the chances of congesting
There’s also a question of how relevant the 14 kB rule has ever been… HTML renders progressively, so as long as there’s some meaningful content in the early packets, overall size is less important.
I swear I am not just trying to be a dick here. If I didn't think it had great content I wouldn't have commented. But I feel like I'm reading a LinkedIn post. Please join some of those sentences up into paragraphs!
Would love it if someone kept a list.
Hopefully you'll find some of them aesthetically pleasing
How about a single image? I suppose a lot of people (visitors and webmasters) like to have an image or two on the page.
Questioning my premise is like saying it's perfectly logical to plan your kitchen around eating MREs because "they aren't obsolete". Today's satellite internet isn't yesterday's satellite internet.
The quality of connection is so much better, and since you can get a Starlink Mini with a 50 GB plan for very little money, it's already in the zone where just one worker could grab his own, bring it on the rig, use it in his free time, and share it.
Starlink terminals aren't "infrastructure". Campers often toss one on their roof without even leaving the vehicle. Easier than moving a chair. So, as I said, the geostationary legacy system immediately becomes entirely obsolete other than for redundancy, and is kinda irrelevant for uses like browsing the web.
> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!
If I’m selling to cash cows in America or Europe it’s not an issue at all.
As long as you have >10 Mbps download across 90% of users, I think it's better to think about making money. Besides, if you don't know that lazy loading exists in 2025, fire yourself lol.
https://www.mcmaster.com/ was found last year to be doing some real magic to make it load as fast as possible, even on the crappiest computers.
- rural location
- roommate or sibling torrenting the shared connection into the ground
- driving around on a road with spotty coverage
- places with poor cellular coverage (some building styles are absolutely hell on cellular as well)
Using my own server software I was able to produce a complex single-page app that resembled an operating system GUI and achieve full state restoration in as little as 80 ms from the page request on localhost, according to the Chrome performance tab.
You are correct in that TCP packets are processed within the kernel of modern operating systems.
Edit for clarity:
This is a web-server-only algorithm. It is not associated with any other kind of TCP traffic. Judging from the downvotes, some people found this challenging.