I'm excited to share something I've been working on: a way to set up and launch websites directly through the domain control panel. It allows anyone with a domain name to create, publish, and edit basic websites from that panel, without a traditional hosting provider or any coding knowledge. This narrows the gap for non-technical people who want to publish simple personal and small-business websites.
This is also the first TWA (triweb application) on the triweb platform (https://triweb.com) we are working on. Triweb currently has limited functionality, and this app is mostly just a showcase of how TWAs and triweb containers work. We have an exciting lineup of upcoming features and a unique, simple vision of the decentralized web without the overhyped web3 technologies. We hope that one day triweb will become a standard platform for local-first, browser-based decentralized web applications.
I put small web pages into DNS TXT RRs. Thus, one only needed to make a DNS request over UDP to retrieve the web page, instead of a DNS request and then an HTTP request over TCP. tinydns allows one to put any data into RRs, including control characters. RRs can therefore contain MIME headers. Then I put dnscache in front of it. The result was a highly resilient website with compact pages.
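The lookup described above boils down to a single UDP round-trip. As a rough sketch (not the commenter's actual setup), here is how a raw DNS TXT query packet can be built by hand in Python, following the RFC 1035 wire format; sending it to a resolver on UDP port 53 and parsing the answer are the obvious next steps:

```python
import struct

def build_txt_query(domain: str, txn_id: int = 0x1234) -> bytes:
    """Build a DNS query packet asking for the TXT records of `domain`."""
    # Header: ID, flags (RD=1, recursion desired), QDCOUNT=1, AN/NS/AR=0
    header = struct.pack(">HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; the name ends with a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in domain.split(".")) + b"\x00"
    # QTYPE=16 (TXT), QCLASS=1 (IN)
    question = qname + struct.pack(">HH", 16, 1)
    return header + question
```

In practice you would send this with a plain UDP socket and read back a single datagram containing the TXT data, which is what makes the "one request instead of DNS plus HTTP" point work.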
Doesn't this make the triweb relay server the central authority?
The triweb platform is still a work in progress, and there is a lot left to be done, released, and documented. The Banner app is mostly an early demo of how TWAs are built and how they may be deployed to domains, which I thought might be interesting to HN.
I'm no networking expert, but "without the need for web servers" isn't really correct because the DNS is a type of server, right?
First, because of the distributed nature of the DNS, clients do not fetch this data directly. Instead, resolvers and DNS forwarders do the work and cache it.
Second, displaying this data or converting it to HTML requires a special website or a dedicated client, at which point that dedicated component should use a different storage than DNS, whatever that may be (p2p, etc.).
Third, the impact on the DNS ecosystem is problematic here. I will say something controversial: DNS was designed with one goal, for you or your program to connect "to a string" that you or the string's owner control and manage, as opposed to connecting to a network address (IP) that you usually do not own, that can change at any time, and that is meaningless and hard to remember. Over the years, this basic principle was extended a bit with the introduction of some policies stored in the DNS that help the domain owner set up boundaries, like SPF or CAA records.
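For illustration, policy records of that kind look like this in a typical zone file (the domain and issuer below are placeholders):

```
example.com.  3600  IN  TXT  "v=spf1 mx -all"
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
```

Both are metadata about the domain itself, which is the distinction being drawn here: they configure boundaries rather than carry content.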
Storing website data there is not something I would recommend. Again, you can argue that this "client" only prints TXT data to users who would not normally be able to access it (the average browser user), but clearly it goes far beyond this, encouraging storing domain content (data) in the DNS.
Fourth, security. Very few domains implement DNSSEC or DNSCurve, and clients do not use DoH en masse. A MITM attack is a serious threat to the client rendering this data.
Fifth, if this gains wide adoption (I doubt it), resolvers will stop caching TXT records as a policy to save resources for crucial data, and the crucial data is A (address) records.
Because the purpose of the DNS, viewed at a high level, is to make a connection between two programs or users possible. I know that's a simplification, but this is the core idea. Sure, it has reverse DNS mappings to assign a string to a network address, but 99.99% of the time the DNS's job is to make connect() possible. So you connect() to Amazon using DNS, Amazon contacts a payment processor using DNS, they connect() to Visa, which connect()s to your bank, and every connection in this payment-processing chain uses DNS to make it possible and reliable.
So the way things work today, as I see it, is this:
1. We have a physical layer -> a wire
2. We have a data link layer -> Ethernet; ARP or NDP are common here
3. We have a network layer -> currently, two networks are widely used: IPv4 and IPv6
4. We have a transport layer -> TCP, UDP, and QUIC
7. We have the actual data, encrypted or not
So where does the DNS fit in this model? People say it is built on top of UDP, so it must be Layer 7, right? But because of the core function of the DNS protocol, I would put DNS at Layer 3, the same way I would put ARP and NDP at Layer 2 even though NDP is built into IPv6. I would place them there solely by the function they provide.
The sole purpose of the DNS is to make a connect() possible without using direct network addresses; the client uses "a string" as the destination instead. For me this makes it a Layer 3 protocol, because it is a helper protocol used to establish a connection.
Once you have a connection, on top of it you use other layers to transfer data reliably.
Your solution puts Layer 7 data into a Layer 3 component by using TXT records. And if TXT records had never been invented, you could still encode binary data in AAAA records, 16 bytes at a time, and hack a custom "client" to process arbitrary data in 16-byte chunks.
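As a toy sketch of that AAAA-smuggling idea (an illustration of the point, not an endorsement), each IPv6 address carries exactly 16 bytes of payload; the helper names below are my own:

```python
import ipaddress

def encode_to_aaaa(data: bytes) -> list[str]:
    """Pack arbitrary bytes into IPv6 address strings, 16 bytes per address."""
    # Pad with zero bytes up to a multiple of 16 (one address per chunk)
    padded = data + b"\x00" * (-len(data) % 16)
    return [str(ipaddress.IPv6Address(padded[i:i + 16]))
            for i in range(0, len(padded), 16)]

def decode_from_aaaa(addrs: list[str]) -> bytes:
    """Reassemble the payload; note this naive padding eats trailing NUL bytes."""
    return b"".join(ipaddress.IPv6Address(a).packed
                    for a in addrs).rstrip(b"\x00")
```

The fact that this is even possible is the point of the argument: the record type permitting a hack does not make it the right place for content.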
And while clever, this is not the right tool to store layer 7 data.
It is like sitting on an island with a hammer. You need to cut down a tree, and you say: well, I can use the hammer to cut the tree, or I can try with bare hands. What should I do? While I agree the hammer has a higher chance of succeeding, I would advise finding a sharp rock instead. Find or invent a better tool for the job.
The main disadvantage is that, due to the UI/UX of domain control panels, managing anything more than a few paragraphs of text gets messy really quickly. But that could actually be an advantage, as the DNS is not particularly well suited to serving large amounts of data, and the app itself is meant for small and simple websites.
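For context on those size constraints: TXT record data is carried as a sequence of character-strings of at most 255 bytes each (RFC 1035), so any page stored this way has to be chunked. A minimal sketch, with a helper name of my own choosing:

```python
def to_txt_strings(payload: bytes, max_len: int = 255) -> list[bytes]:
    """Split a payload into TXT character-strings of at most 255 bytes each."""
    return [payload[i:i + max_len] for i in range(0, len(payload), max_len)]
```

Even with chunking, responses that exceed the UDP message size fall back to TCP or require EDNS, which is part of why the DNS works best for small payloads.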