Just a second. I'm late for golf!
0118999881999119725 ...3
And yes, I have that memorised!
data:text/html,<h1>My%20small%20website</h1><p>Look,%20it's%20real!</p>
You can use a data uri generator to base64-encode it, if you want.
Advantages of smolsite:
- Zip might let you fit a bit more than a data uri
- Some JS APIs would work on a smolsite url, but wouldn't work in a data uri
data:text/html,<html contenteditable>
I keep it on my bookmarks toolbar.

data:text/html,<html contenteditable><body style="margin: 10vh auto; max-width: 720px; font-family: system-ui"><h1>New Note
I added some basic styles so I can screenshare :D
Also in most browsers, CTRL + B, I and U work in contenteditable divs.
In before someone writes a smolsite to install a service worker on the domain that sends analytics for all other smolsites to their own server
[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_...
https://wgx.github.io/anypage/?eyJoMSI6IkhlbGxvIEhOISIsImgyI...
Every Show HN post I've seen was interesting. Motivated me to start my own projects and polish them so I can show them here. It's a really good feeling when someone else submits your work and you get to talk about it.
You may only "Show HN" something you've made yourself.
Eventually you'd hit the URL size limit, of course, but maybe we add a layer on top for curators to bundle sets of URLs together to produce larger texts. Maybe add some LLM magic to generate the bundles.
You'd end up with a library that has, not just every book ever written, but every book that could ever be written.
[Just kidding, of course: I know this is like saying that Notepad already has every book in existence--you just have to type them in.]
As usual, no idea is unique--it's all about who executes first!
To get around paying our website vendor (think locked-down hosted CMS) for an overpriced event calendar module, I coded a public page that would build a calendar using a base64-encoded basic JSON "events" schema embedded in a "data-events" attribute. Staff would use a non-public page that would pull the existing events data from the public page to prepopulate the calendar builder, which they could then use to edit the calendar and spit out a new code snippet to put on the public page. And so on.
It basically worked! But I think they eventually did just fork over the money for the calendar add-on.
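A minimal sketch of what the encode/decode round trip could look like. The actual events schema isn't shown above, so the field names here are made up for illustration:

```javascript
// Decode the base64-encoded JSON payload stored in the public page's
// "data-events" attribute (field names assumed, not from the original).
function decodeEvents(encoded) {
  // atob yields a binary string; the escape/decodeURIComponent round
  // trip keeps non-ASCII event titles intact.
  return JSON.parse(decodeURIComponent(escape(atob(encoded))));
}

// Produce a fresh attribute value from the edited events.
function encodeEvents(events) {
  return btoa(unescape(encodeURIComponent(JSON.stringify(events))));
}

// Round trip: the builder page reads data-events, edits, re-emits it.
const events = [{ date: "2024-06-01", title: "Open house" }];
const attr = encodeEvents(events); // value for data-events="..."
console.log(decodeEvents(attr)[0].title); // "Open house"
```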
https://sonnet.io/projects#:~:text=Laconic!%20(a%20Twitter%2...
I remember seeing this for the first time on HN with urlpages, inspired me to build my own version of these
and my website _is_ already in a zip hehe https://redbean.dev
It works quite well, but I'll need to update the syntax highlighting soon, since at least Gleam's is out of date (boy, that language moves fast), and sometimes brotli-wasm throws a memory allocation error for some reason. I guess that's one cool thing WASM brought to the table: memory handling issues.
Two other people and I have literally kept this page alive for many years - the GitHub repo says 2017.
* https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
1. You could achieve this on a static server, like GitHub pages
2. You could make the page editable and auto generate new URLs as the page gets edited.
See:
https://developer.mozilla.org/en-US/docs/Web/API/Compression...
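The CompressionStream API linked above makes idea #2 fairly direct. A rough sketch (the function names are mine, not from any library), compressing the page content into a URL-safe string that could live in the fragment:

```javascript
// Compress HTML with the built-in CompressionStream and base64-encode
// it for use in a URL fragment; DecompressionStream reverses it on load.
async function compressToFragment(html) {
  const stream = new Blob([html]).stream()
    .pipeThrough(new CompressionStream("gzip"));
  const bytes = new Uint8Array(await new Response(stream).arrayBuffer());
  // Spreading into fromCharCode is fine for small pages; chunk for big ones.
  const b64 = btoa(String.fromCharCode(...bytes));
  return b64.replaceAll("+", "-").replaceAll("/", "_"); // URL-safe base64
}

async function decompressFromFragment(fragment) {
  const b64 = fragment.replaceAll("-", "+").replaceAll("_", "/");
  const bytes = Uint8Array.from(atob(b64), c => c.charCodeAt(0));
  const stream = new Blob([bytes]).stream()
    .pipeThrough(new DecompressionStream("gzip"));
  return new Response(stream).text();
}
```

An editable page could then listen for input events and rewrite location.hash with the freshly compressed content, which covers point 2.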
Then someone tried fitting King Lear in there (which worked).
Then it turned out that, until that day (though not for long afterwards), URLs were not counted toward the character limit in Mastodon toots.
For a day or two, until it got fixed, that toot froze quite a few Mastodon clients unlucky enough to open it. Not sure why; I'm guessing accidentally quadratic algorithms that weren't counting on URLs multiple kilobytes in length.
I like embedding external resources such as bitmap images in SVG in CSS in HTML so that a document is truly portable and can be sent by email or messenger services. So I don't need a URL. The whole document has to be shared, not just a link to it.
I also found the favicon can be encoded in this way.
I don't do scripts, but a lot of fun can be had with HTML when you start doing unusual things with it. For CSS I also use my own units, just for fun. So no pixels, ems or points, but something else... CSS variables make this possible. I like to use full semantic elements and have minimal classes, styling the elements. This should confuse the front end developer more used to the 'painting by numbers' approach of typical website frontend work.
I just work from the HTML specs and go my own way. There is something I am working on that 'needs' this stuff. I see HTML as a creative medium and I wanted to solve problems such as internal document navigation - rather than hundreds of web pages, compile the lot into one.
The PWA takes an entirely different approach to what I am trying to do. I like the PWA approach but I want one file that can be moved or emailed, to be available offline.
I found that making all the images inline worked for me. I got best results with webp rather than avif but don't care about losing the size benefits with base64 encoding - once zipped those compress nicely.
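The inlining itself is simple; here's a sketch of producing the data: URI for an image. (The size loss mentioned above is the roughly 33% growth from base64, which zipping largely claws back.)

```javascript
// Turn raw image bytes into a data: URI for inlining into HTML/CSS/SVG.
// Base64 adds roughly a third in size, which compression mostly recovers.
function toDataUri(bytes, mime) {
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  return `data:${mime};base64,${btoa(binary)}`;
}

// e.g. <img src="${toDataUri(webpBytes, "image/webp")}">
```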
Has the advantages of being centralized (the site can be shut down, nuking all URLs) and decentralized (requires tech skills to set up, a site cannot be updated without changing its URL, etc.). Adding tinyurl to this as suggested in another comment takes it to the next level!
The fun part: any report will include the content in full, so any report will itself be reportable.
echo "https://smolsite.zip/`cat somesite.zip | base64 --wrap 0`"
can become echo "https://smolsite.zip/`base64 --wrap 0 < somesite.zip`"
Really awesome project nonetheless!

Search "Pepsi can" on Google Images, and for some results, right click > Copy Image Address gives "data:image/jpeg;base64,/.../" instead of the website's image URL. Presumably to limit server cost / make the browser render? It's not all sites; for more common sites (Walmart, for example) it gives the correct image URL.
Pepsi can image from:
[1] https://crescentmarket.shop/ data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2...//9k=
[2] But when you click through: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQwXbLG...
[3] Paperpunchplus.com shows the correct image URL https://www.paperpunchplus.com/https://cdn3.evostore.io/prod...
When you load that results page, you'd be reaching out to ~100+ different domains that will respond and render the images at different rates (and some will fail to load at all). Base64-encoding lets you shove binary content into caches like Redis, retrieval and embedding of which would be preferable to hotlinking to a slow site. Then most of the page gets rendered at the same time client-side.
What ultimately stopped me is that on a site of this type you can't really include links to other sites made the same way, because your URL length is going to balloon.
https://github.com/lpereira/lwan - presume this is the web server library you're referring to? Very cool.
The bad news is it required firefox nightly and honestly I'd be surprised if it even still works because Mozilla laid off the people who were working on libdweb.
(this was so predictable)
Thinking of reasons not to do this, though: it's effectively impossible to moderate the content, at least not without building a database of all the content you don't want to host.
The deal made years ago in law in the US (and followed around the world) is that websites are not liable for the user generated content that they make available, as long as they remove it if requested for legitimate reasons. These two components go hand in hand. If a website is unable to remove content, it's effectively liable for that content. This basically breaks the web as we know it today.
In which case it would be also your hosting provider.
[0] https://stackoverflow.com/questions/417142/what-is-the-maxim...
I called it the Twitter CDN™ [1]
Here's Pong and The Epic of Gilgamesh, in a Tweet: https://twitter.com/rafalpast/status/1316836397903474688
Here's the editor: https://laconic.sonnet.io/editor.html
And here's the article explaining how it works: https://laconic-mu.vercel.app/index.html?c=eJylWOuS27YV%2Fu%...
[1] all thanks to the fact that Twitter applies a different char limit to URLs. We wouldn't want to lose those delicious tracking params, would we?
- <https://RTEdge.net/> output
- <https://RTEdge.net/?> playground
For more information: https://efn.kr
---
There is also service worker support which deploys as a Cloudflare Worker!
See <https://sw.rt.ht/?> -> https://sw.rt.ht
i.e.
https://smolsite.zip/#UEsDBB...

I've already built a website that reads zip files client-side with JS here: https://madacol.github.io/ozempic-dicom-viewer/ . It will read the zip file and search for MRI/CT scan images to display
Where I have doubts is how to reference external files from the main `index.html` file. I know you can load files as blobs and get a URL (I did that in my website above), but I am not sure if that will work as references in the <head>
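One workaround I'd try, assuming the HTML is available as text before the browser parses it: create a blob: URL per zip entry, then rewrite the references in the HTML string itself, so <link>/<script> tags in the <head> already point at blob: URLs by the time the document loads. A sketch (zip reading omitted, helper name is made up):

```javascript
// Rewrite relative src/href references in an HTML string to the
// blob: URLs created for the corresponding zip entries.
function rewriteReferences(html, urlByPath) {
  return html.replace(/\b(src|href)="([^"]+)"/g, (match, attr, path) =>
    urlByPath[path] ? `${attr}="${urlByPath[path]}"` : match);
}

// In the browser, the map comes from URL.createObjectURL, e.g.:
//   urlByPath["style.css"] =
//     URL.createObjectURL(new Blob([cssBytes], { type: "text/css" }));
```

Since the rewrite happens on the raw text, it works for head elements too, which a post-parse DOM walk can miss (stylesheets start fetching immediately).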
https://smolsite.zip/UEsDBBQAAgAIAPtxJlepozjzcAAAAIgAAAAKAAA...
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/if...
Of course, you can just link directly to the URL the iframe points to. In the case of your video, you can simply visit https://smolsite.zip/s/57fe209faf4e6c0316ad32d7eeb792dfb571e... to have it autoplay while getting rid of the footer. I'm not sure how long the /s/ URL stays around before getting garbage collected by the server, but I bet you could regenerate it by sending a GET request to https://smolsite.zip/UEsDBBQAAgAIAPtxJlepozjzcAAAAIgAAAAKAAA... again.