It all went wrong at NAT; we were supposed to be peers, not producers and consumers.
It's not really simple. It's actually complex, and hardening a general-purpose computer enough that non-geek homeowners can't screw it up (social engineering, security-warning fatigue, etc.) is possibly an unsolvable problem.
If RSA, whose very business competency is security, can be hacked[1], the homeowner running Dovecot/Sendmail/Qmail/etc. on Linux or in a Linux container has no chance at all. If uber-techno-geek Mark Russinovich can get infected with a rootkit[2], the average homeowner has no chance.
One could supposedly burn an embedded chip with email software (a dedicated "email server appliance") that can't be hacked -- but that also means it can't be updated. Email technology evolves (cleartext email --> SSL email --> next tech is ???). Also, if a vulnerability is discovered, the homeowner has to buy a new email appliance. And if you make an email server on an FPGA that can be flashed with new firmware, you've re-opened an attack vector for social engineering.
>Re-enable mail servers to be run from home connections,
If you're talking about technical issues such as ISPs opening up SMTP traffic on port 25 for residential internet connections, that's not really the problem. The real issue is the social dynamic of trust, which is undermined by bad actors and spam. Analyzing it through the lens of "technology" disguises the true problem. The puzzle of "trust" happens in a layer above SMTP/25.
[1]http://bits.blogs.nytimes.com/2011/04/02/the-rsa-hack-how-th...
[2]https://blogs.technet.microsoft.com/markrussinovich/2005/10/...
It's actually straightforward based on lessons learned in high-assurance security: research and field-proven stuff that actually stopped pentesters vs common stuff that doesn't. Apple already gets far with iPhones and app stores despite having way less security than I'd advocate. Their Macs showed the auto-configuration, easy everything, and user experience that need to be in secure computers. That's just through whitelisting, quality control for apps, and sandboxing. Weak but fairly effective.
So, what does something better take? It takes POLA all through the architecture, for one. The Orange Book showed that no app should be able to write to the most critical files of the system except specific, administrative ones. Secrets should be compartmentalized in partitions, with code that uses them without leaks, especially covert channels. Turaya et al showed you could do that in a single partition most of the time. OpenVMS showed that a transactional, versioned filesystem lets you dodge all kinds of bullets from broken installs, accidentally deleted files, and so on. Time Machine shows how to make another process, backups, as easy as possible. Combined with the above, that becomes safe in a way apps can't subvert.
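The versioned-filesystem idea is easy to illustrate. Here's a minimal in-memory sketch (nothing like OpenVMS's actual on-disk format, just the concept): every write creates a new version, so a botched install or accidental overwrite is always one rollback away.

```python
class VersionedStore:
    """Toy versioned store: writes never destroy old data."""

    def __init__(self):
        self._versions = {}  # name -> list of contents, oldest first

    def write(self, name, contents):
        """Append a new version; returns its 1-based version number."""
        self._versions.setdefault(name, []).append(contents)
        return len(self._versions[name])

    def read(self, name, version=None):
        """Read the latest version, or a specific 1-based version."""
        history = self._versions[name]
        return history[-1] if version is None else history[version - 1]

store = VersionedStore()
store.write("config.sys", "good settings")
store.write("config.sys", "botched upgrade")
print(store.read("config.sys"))     # latest: 'botched upgrade'
print(store.read("config.sys", 1))  # rollback target: 'good settings'
```

A real implementation would add transactions and garbage collection of old versions, but even this shape shows why "accidentally deleting stuff" stops being fatal.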
With the above plus whitelisting, apps are neither likely to try to do damage nor able to damage critical files. The next step is mapping the user's intent to the computer securely. CapDesk, a capability-secure desktop from Combex, shows hints of how to do that with its PowerBox feature. It's basically a file dialog that, in the background, gives the app permission for just what the user clicks on. The OS or GUI manages it, so the app can't do crap with it. Trustworthy GUIs like the Nitpicker GUI in Genode prevent spoofing of this or other windows, plus screen scraping. So, they're trusted dialogs for key actions that people get used to, and they allow fine-grained permissions to be inferred.
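The PowerBox pattern can be sketched in a few lines. This is a toy model, not Combex's actual API: a trusted shell owns the file dialog, and the app never sees paths or an open filesystem, only a capability for the one file the user picked.

```python
# Toy powerbox: the trusted shell mediates file access; the untrusted
# app only ever holds a capability for the file the user selected.

class ReadCapability:
    """Read-only handle: the app can read this file, and nothing else."""
    def __init__(self, contents):
        self._contents = contents

    def read(self):
        return self._contents

class PowerBox:
    """Stands in for the OS/GUI-owned file dialog."""
    def __init__(self, filesystem):
        self._fs = filesystem  # path -> contents; the app never sees this

    def ask_user_for_file(self):
        chosen = "/home/user/report.txt"  # in reality, the user's click decides
        return ReadCapability(self._fs[chosen])

def untrusted_app(cap):
    # The app's entire authority is this one capability.
    return cap.read()

fs = {"/home/user/report.txt": b"quarterly numbers", "/etc/shadow": b"secrets"}
cap = PowerBox(fs).ask_user_for_file()
print(untrusted_app(cap))  # only the chosen file; /etc/shadow is unreachable
```

The point is structural: the app never gets ambient authority to open arbitrary paths, so "permission" is just which object references it was handed.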
The next step is the apps themselves. They're going to try all kinds of tricky stuff. The Android permission model is a start on dealing with this. One could, as Common Criteria did, create profiles for specific types of apps with just the permissions that make sense for that type. They wouldn't be mandatory, but profiles or certification against them could help users instantly determine whether an app's security package is reasonable. Certification would be easy if it were a permission list, use of a safe language, avoidance of features marked risky (e.g. macros, ActiveX, or JavaScript), and so on. Any app using stuff like that leverages the capability model with extra compartmentalization. All apps in this model are written in a language using GC or safe, manual memory management. Any JITs are integrated with the isolation and/or capability schemes. All of that is enforced, a la CHERI or SAFE, down to the CPU, with the IO/MMU transparently managing tags on incoming or outgoing data.
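A profile-based certification check like the one described could be as simple as diffing an app's declared permission list against a per-app-type allowance. The profile contents and permission names below are made up for illustration:

```python
# Hypothetical Common-Criteria-style profiles: each app type gets a
# fixed permission budget; certification is a subset check.

PROFILES = {
    "text_editor": {"read_user_files", "write_user_files"},
    "mail_client": {"network", "read_user_files", "notifications"},
}

def certify(app_type, requested):
    """Return (passes, excess_permissions) for a declared permission set."""
    allowed = PROFILES[app_type]
    excess = requested - allowed
    return (not excess, excess)

ok, excess = certify("text_editor", {"read_user_files", "network"})
print(ok, excess)  # False {'network'} -- an editor wanting network access fails
```

Users never read the list; they just see "certified as a text editor," which is the instant-determination property the comment is after.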
It's hard for me to imagine easy hacks on systems that combine the above principles. It will sound hard to some readers until they remember that existing mainframe, desktop, and UNIX architectures are much more complicated than a Rust or Go app against a straightforward API with a permission list and/or security policy. Or a set of OS components coded similarly. A side effect is that extensions and maintenance become easier once you have an OS in a type-safe, memory-safe, concurrency-safe, modular language with isolation as the default coding strategy.
Note: This covers security at the software and firmware level. Attacks on transistors or RF through software are outside of scope as R&D is ongoing.
> if a vulnerability is discovered, the homeowner has to buy a new email appliance.
that's the update
You remind me of that episode of Star Trek when the crew is trying to figure out how to stop an asteroid impact and Q goes "It's simple! Just change the gravitational constant of the universe".
The dark web shows that there is a (small, but real) market for an anonymous, decentralized web. I don't know anyone who prefers to navigate the web in a way that is stored on NSA/advertiser servers forever.
Which implies that the reason there is not widespread adoption of a more decentralized, anonymous web is thus more of a product question.
Mail servers, perhaps. SMTP servers, not a chance in hell. SMTP simply wasn't designed for a world with spammers, phishers, and others who abuse email. If we're going to enable decentralization, we need to build in security and accountability from the start rather than relying on the naive protocols from the dawn of the internet. Note that you can have security and accountability while still maintaining anonymity. As an example, proof-of-work systems can mitigate the risk of bulk mail.
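A Hashcash-style proof-of-work stamp is easy to sketch. This is a toy stamp format, not the real Hashcash header: the sender brute-forces a nonce until the hash of recipient+nonce has enough leading zero bits, while the receiver verifies with a single hash. Legitimate senders pay a fraction of a second per message; bulk mailers pay it millions of times.

```python
import hashlib
from itertools import count

DIFFICULTY_BITS = 16  # sender burns ~2**16 hashes on average per stamp

def mint_stamp(recipient):
    """Find a nonce so sha256(recipient:nonce) has DIFFICULTY_BITS leading zero bits."""
    for nonce in count():
        stamp = f"{recipient}:{nonce}"
        digest = hashlib.sha256(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0:
            return stamp

def verify_stamp(stamp, recipient):
    """Cheap check: one hash, plus binding the stamp to this recipient."""
    if not stamp.startswith(recipient + ":"):
        return False
    digest = hashlib.sha256(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

stamp = mint_stamp("alice@example.org")
print(verify_stamp(stamp, "alice@example.org"))  # True
```

A production scheme would also bind a date and salt into the stamp and track spent stamps to prevent reuse, as Hashcash does.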
But if you're trying to replace SMTP, you're traveling a well-trodden path that no one has successfully traversed as of yet. I wish whoever goes down that path the best of luck, since we need a good, modern, decentralized replacement for SMTP. But I wouldn't bet on anyone, even someone like TBL, being able to succeed at it.
Though I'm not sure if this was just a market response to customer preferences or if it was a contributing factor to digital consumerism.
Verizon ran a campaign recently where they made their FiOS service symmetric (1:1 upload/download), claiming that it was due to customer demand. But it happened right around when the FCC was debating net neutrality, so I always assumed that Verizon hoped they could fool a few people into thinking that's what net neutrality was all about.
Also, running a home email server isn't usually that helpful. Even if you do that, you're going to have to use relays to get your message where you want it to go (unless the topology of internet email changes dramatically), so you might as well cut out the step of having your own server and just send the mail directly to the first relay you'd be using anyway.
I agree with the need for what you suggested, though. It's also within reach of current technology, with legacy compatibility for the most part. The CHERI team has demonstrated that nicely with a FreeBSD port to a capability architecture that also supports safe C apps. See the main paper and "Beyond the PDP-11" for details.
It sounds like the path forward is going to rely on improving IPv6 adoption as a first step. We aren't likely to get rid of NAT without it. Consumer pressure could be placed on ISPs and web hosts to push things forward.
If our systems were safer, and could be relied on as a resource to share resources with others, which we want to share, we wouldn't need to broker the process out to others.
It's only because OS vendors fell asleep at the wheel - dazed, perhaps, by the potential to shift the problem out to end-developers - and stopped building good OS features.
Software sucks because business models must frequently be addressed before customers' needs are. Of course this is a simplification of the process, but no company continues writing software if its business model for that software fails and it runs out of money, or if it's threatened with shutdown for not complying with the government.
To "reinvent" the "web" (or what I call the Intercloud), business models must be removed from the equation. New models of work storage and exchange must be created to allow developers to write code for the people who need it. When a user relies on a feature, there should always be a clear path for them to a) continue using that feature for as long as they see fit and b) enter into contractual agreements with developers to develop new features they need. This should be possible without a corporation or business model getting in the way.
It also implies all the software down the stack is reinvented in the same way to support this new methodology. Deployments/installs, for example, will need to be done differently moving forward.
This is obviously bad news for the "startup" scene, but good news for humanity. Things are getting complicated and clearly don't scale well doing it the old way. It's time for a change.
> To "reinvent" the "web" (or what I call the Intercloud), business models must be removed from the equation.
This goes into my "and everyone gets a pony" set of solutions.
You may also want to check out IPFS and Sandstorm for other examples on this topic.
Not saying people shouldn't be paid, but how we currently pay for software is broken and creates not just wrong incentives, but completely backwards incentives where going directly against the interests of the users is the most profitable path.
Isn't all of this already possible to do with the Web as we know it, and people just don't do it that way due to general lack of interest on both consumer and provider sides?
Issues creep up when you need to support dynamic resources, which rely on:
a) user input
b) a stateful server
With b), the users in a) will feel more comfortable sharing data when there's trust built up in b). This seems to go directly against the legacy web, since centralization was its solution to the trust problem.
With new technologies like Bitcoin showing that trust can be based in mathematics, and Ethereum showing that interactions can be based around mathematical rules, we definitely have the technological raw power to build a scalable, trustable, non-censorable alternative to the www.
I do hope these new technologies are not too hampered by how ubiquitous and entrenched the www is.
The distributed network idea has been around for a long time, and ideally it's how the web will go.
But it's much more of a technical and social challenge than today's server-based web.
To win users it has to be significantly better than what's available today - not just another way to do the same things, but with a few extra complications and unreliabilities.
I think IPFS is already sufficient for that, and indeed someone has already built a chat application.
https://github.com/haadcode/orbit
The underlying IPFS event database implementation:
https://github.com/haadcode/orbit-db
I think that there's something related to subscribing to changes without any central point of failure that's not currently possible with the IPFS implementations, but I also think that this is planned.
But we need something like a Decentralized Firebase. And that is exactly what we're working on at https://github.com/amark/gun . I met Tim Berners-Lee last year at the Extensible Summit and chatted for a bit, and it is pretty incredible to know that he is rock solid in his principles and values. That takes a pretty level head.
20 years is still a very short timespan, and we should probably admit that for huge societal changes like this we simply need more experimentation and more time. The rapid changes in technology sometimes lead people to the misconception that everything else would move similarly fast, but our human world is still slow. I'd rather see more people just try out different options on the web as it is and then have the most successful win rather than putting a bunch of clever people in a room and plan for everybody else.
Satoshi (whether one person, or a group) did it with Bitcoin. The concept is now ingrained into every architect's toolbox. While Bitcoin itself won't revolutionize the web, it is a huge step forward in how to think of decentralization in an environment where it is too costly (technically, financially) to effectively decentralize.
Perhaps the next step will be combining said decentralization with anonymity.
Then in 50 years, maybe we'll all have our own anonymity-preserving "cloud boxes" that follow us wherever we live, just like routers do now (save the anonymity and storage).
Maybe it's because the emphasis of the article is put on Berners-Lee, thereby creating a notion of authority, or because the group meeting in a church-like environment looks and sounds very much like design by committee, but the article didn't really convey that path to me.
Urbit looks really promising in that aspect
I can't see that friction is the problem. It's easy to implement payment on the web nowadays. The issue is getting people to want to actually pay, rather than going and looking elsewhere for the same thing for free.
The challenge with the web is as is noted at the end, not one of technology, but one of society. Making a better web means making how people work with each other through the web, what we expect of it, and how we want it to work better.
Unfortunately, time and time again we've seen people prefer free, lousy web content to even a ludicrously small payment for something really good.
How you fix that (getting people to value what they get from the web and to be willing to pay for it), I don't know. Spotify, Netflix, The FT et al have shown that it's possible to get people to pay, but I can't imagine even the majority of the web going that way for now. Hopefully that changes in the future.
I find it hard to accept that they constrained the concept to payoff = $monthly_amount / $number_of_clicks. From both sides, I would prefer to be able to set and see prices, not arbitrary equal results for all the sites I click. That has kept me from clicking in most cases, because I wouldn't want an equal amount to go to some 1-paragraph blogpost as to a 1-hour podcast, even though I'd want to contribute to both.
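The objection is easy to make concrete. With a flat monthly budget split evenly across clicks, a one-paragraph post and an hour-long podcast earn the same; weighting by time spent (one hypothetical alternative, with made-up numbers) reflects attention instead:

```python
MONTHLY_BUDGET = 5.00  # dollars

# (site, visits, minutes spent) -- illustrative numbers only
visits = [("one-paragraph-blogpost", 1, 1), ("hour-long-podcast", 1, 60)]

# Flat split: payoff = budget / number_of_clicks, as described above.
clicks = sum(v for _, v, _ in visits)
flat = {site: MONTHLY_BUDGET * v / clicks for site, v, _ in visits}

# Time-weighted split: payoff proportional to attention spent.
total_minutes = sum(m for _, _, m in visits)
weighted = {site: MONTHLY_BUDGET * m / total_minutes for site, _, m in visits}

print(flat)      # both get $2.50
print(weighted)  # blogpost ~$0.08, podcast ~$4.92
```

Time-on-page is a crude attention proxy and gameable in its own way, but it at least stops a paragraph and a podcast from being paid identically.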
Which is why something like Brave is so exciting.
No decentralized technology is immune to this herd behavior. Even Bitcoin is probably doomed to become dominated and effectively controlled by some popular mining pool or online wallet provider. This will not change until new ethics of decentralization forms in the society. And that will happen eventually, but it could take a lot of time.
Traditionally, access to the WWW had a pretty high threshold of combined factors: cost of hardware and communications, time and effort to understand how things worked, and a still pretty basic set of sites. Berners-Lee is wrong in that the Web used to be 'more open' in a general social sense - it was AOL chat rooms before Snapchat, GeoCities before Facebook, and on and on. Large organizations like Dropbox have made it competitive, and a more intelligent decision than trying to set up a personal server and jumping through all sorts of intellectual and digital hoops to make it work.
Or, in other words, the easier something is to use, the more idiots are going to get their hands on it and use it. The Web is simply a reflection of the human species. Both stunningly beautiful and tragically ugly, it's certainly evidence to me that Utopia is, fundamentally, irrational and likely impossible without exclusion and selection bias.
The network effect is just too strong, and most people don't care enough about privacy and openness to switch to different networks.
Given the nature of network effects, centralized services will always be more valuable to users than decentralized services.
Also I've read that distributed networks (or "meshes"?) are much harder to get working well than centralized ones. I don't know much about them, though. We might need to wait for another battery breakthrough, too, if my phone will be doubling as a server. I guess we would also need more blockchains, sharding and encryption like with Tor, and a greater comfort with eventual consistency.
“The web is already decentralized,” Mr. Berners-Lee said. “The problem is the dominance of one search engine, one big social network, one Twitter for microblogging. We don’t have a technology problem, we have a social problem.”
One that can, perhaps, be solved by more technology.
This is a very confused article. It's a social problem! But we're going to solve it with technology! I'm sure Tim Berners-Lee has a great understanding of the situation, but since it didn't come across in the article, let's try to build our own description of the problem here in the comments. To do this we'll go through the most interesting projects in the "fix the web" space and steal their key insights.
# Camlistore - All Your Data Should Be in One Place
I probably have important data in two dozen different places. Google, FB, Dropbox, Reddit, GitHub, Mint, Stack Exchange, Amazon, etc. This is crazy!
All my personal data should go into a personal data store. I'm not sure how we'll ever approach a sane system without this step. Camlistore is all about making that data store.
More info here:
https://camlistore.org/doc/overview
"Camlistore is your personal storage system for life."
# Urbit - Everyone Should Have a Name
Right now only techies own their names. We do it in two ways -- the total ownership way where we make a private key and identify ourselves with it, and the "technically renting but basically ownership" way where we buy a domain. You can reach me at <myname>@<mydomain> today, tomorrow, and probably for the rest of my life.
Most non-techies get by with Gmail and a FB page. This isn't the worst, but it's not ideal.
And for every different service we use we get a different name. I don't want 20 names! I want to use my name! (Or sometimes one of my pseudonyms, which Urbit has first-class support for).
In Urbit everyone has a name[1]. Even better, this name maps to their computer, so if I know my friend's name I can connect to their computer -- the foundation of getting an actual peer-to-peer network back from the current mess.
[1] Connected to a private key and human readable! But often silly, eg: ~gumdob-tumlub
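Names like ~gumdob-tumlub are just a pronounceable encoding of key-derived bytes. A toy version of the idea (not Urbit's actual @p syllable tables) maps each byte of a public-key fingerprint to a consonant-vowel-consonant syllable:

```python
import hashlib

# Toy phonemic naming: each fingerprint byte becomes a CVC syllable,
# so the same key always yields the same human-pronounceable handle.
CONSONANTS = "bdfglmnprstz"  # 12 options
VOWELS = "aeiou"             # 5 options

def byte_to_syllable(b):
    """Deterministically map one byte (0-255) to a 3-letter syllable."""
    c1 = CONSONANTS[b % 12]
    v = VOWELS[(b // 12) % 5]
    c2 = CONSONANTS[(b // 60) % 12]
    return c1 + v + c2

def name_from_key(pubkey):
    """Derive a ~xxxyyy-zzzwww style name from a public key."""
    fingerprint = hashlib.sha256(pubkey).digest()[:4]
    syl = [byte_to_syllable(b) for b in fingerprint]
    return "~" + syl[0] + syl[1] + "-" + syl[2] + syl[3]

print(name_from_key(b"some public key bytes"))  # stable, silly-sounding handle
```

Four bytes is far too few for real collision resistance; the sketch only shows why the names are both human-readable and cryptographically bound to a key.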
# Sandstorm - Everyone Needs a Server
Servers are necessary to be real internet citizens. I think this is basically self-explanatory. If your entire internet presence disappears when you close your laptop lid you're basically beyond helping, and will always need some kind of walled-garden to watch out for you.
The problem is that Linux servers are a pain to host. With Sandstorm you can set up a server with one click. You can install apps with one click. This is . . . basically such an obviously good idea it's hard to find more to say about it.
If there are more interesting projects in this space please mention them, I'm going back to coding:)
EDIT: I wasn't really sure what to write for a conclusion, but now I've thought of one: The web evolved, what we get next will be _built_. This is very exciting.