Why don't we do the following for HTML6: introduce two profiles, one for sites and one for apps?
- Sites: HTML + CSS, with JavaScript, if at all, only for presentation purposes (like DHTML over a decade ago). Can be viewed with a radically stripped-down web browser: all you need is the layout engine plus components for display and networking. No WebGL, no sound API, and no shenanigans like ambient light sensors or vibration (wtf!). Think of Google's AMP.
- Apps: The whole package that is offered nowadays. We can even go past this and rethink the division between web and native apps. Why can't a web app use sockets? Why can't a native app use the HTML layout engine or live in a tab? Google is planning to blur the gap between web and native with their new "instant apps".
Whichever side you're on, browser vs. native, the sheer frequency of this discussion proves that at least a clear distinction IS needed. Continuing the status quo of web/browser/standards bloat cannot be good.
Somebody really needs to set down some global rules of thumb. I think that when your webpage starts needing sidebars and subwindows and popups and notifications (yes, looking at you, Facebook) then at that point it should just be a native app.
Let the web and its browsers focus on "sites" and "pages" and let the OS do "apps." After that it's up to the operating systems to make discovering and accessing apps as easy as typing in a website's address.
As a user, I want to sign in just once on each of my devices (iCloud/Apple ID lets me do that), and just type in an app's name (say Cmd+Space and "facebook") and then start using it right away as the OS begins downloading it incrementally, just as a browser does a website, except with full access to the OS's features, efficient use of my hardware and battery, and instant access to all my data without a separate login.
[1] https://news.ycombinator.com/item?id=11735770
One problem is that some interactive news articles can make very impressive use of WebGL, like the interactive climbing map that accompanied an article about the Dawn Wall freeclimbing record: http://www.nytimes.com/interactive/2015/01/09/sports/the-daw...
Like Lynx?
In terms of sites versus apps, I think the browser vendors are responsible for making this happen. Adobe AIR was the closest effort I saw of marrying web technologies with apps.
Sadly, AIR never took off, and whilst I appreciate the intention, it left a huge looming question: what do we actually do with all this new web technology?
One answer I came up with in recent years was what you suggested: partitioning off the people who want to work with these technologies and keeping them separate from the text+image+CSS web we've all grown to love.
Similar to how the demo-scene was an offshoot of game development...
So we're mostly lacking the site-only aspect. To some extent that can be achieved with addons that strip or block certain APIs.
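To make the addon idea concrete, here's a minimal sketch of the stripping half. Everything here is hypothetical: `fakeWindow` is a stand-in for the page's real `window`, and a real extension would run code like this in the page context before any site script executes.

```javascript
// APIs a "site profile" might disallow (illustrative list, not a standard):
const BLOCKED_APIS = ['vibrate', 'getBattery', 'requestMIDIAccess'];

// Remove the listed APIs from a window-like object so that ordinary
// feature detection ('vibrate' in navigator) fails cleanly.
function stripApis(globalLike, names) {
  for (const name of names) {
    if (name in globalLike.navigator) {
      delete globalLike.navigator[name];
    }
  }
  return globalLike;
}

// Stand-in for a page's window:
const fakeWindow = { navigator: { vibrate: () => true, userAgent: 'demo' } };
stripApis(fakeWindow, BLOCKED_APIS);
console.log('vibrate' in fakeWindow.navigator); // false
console.log(fakeWindow.navigator.userAgent);    // 'demo' (untouched)
```

The point is that a "sites" profile doesn't require a new browser; a content script deleting properties before page scripts run gets most of the way there.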
It doesn't cost anything to keep these features around, so why kill them off? Code is cheap; it doesn't cost you, the user, anything to have the features in your browser...
It is a nice study showing how far these niche new features have reached, but that shouldn't affect how we work on the new ones (except perhaps by improving their security models).
There are some good reasons to want this, in fact.
>It doesn't cost anything to keep these features around, why kill them off?
This is also why the features list of a basic Windows or Office install is miles long. Once developed, there's no cost other than maintenance of those features (updates, security, etc).
I think it's hard to argue against complexity in software. The ultra-complex usually wins for rational market reasons.
I wouldn't necessarily agree that the conclusion is to just rip stuff out of the Web platform, though (although there is plenty of stuff I'd love to drop). Rather, we need to implement the features in a secure way. This isn't rocket science. Notice, as usual, that the majority of these security issues are straightforward memory safety issues† in C++: e.g. https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=firefox+svg
† Food for thought for those claiming that "modern C++" solves these problems.
Around a billion.
1% would mean 10 million websites are using each possible feature. Okay, it's not quite that even, and a lot of popular sites (like many news sites) stick to the most basic features, but all these browser features are there for the other various cases. Like aforementioned web apps. Various tech demos. Sites with very specific use cases.
In other words, it's because the internet is massive and varied.
99% of the world doesn't use Calculus. Maybe we should stop teaching it in school.
Lame article.
It reminds me of Sturgeon's law https://en.wikipedia.org/wiki/Sturgeon's_law
If 99% of everything is crap, you can bet a sizeable wager that the 1% of wheat among the chaff is using HTML5 and JavaScript APIs.
Also, if a content silo counts as a 'top website', then it is an outlier and shouldn't be included. Facebook is a walled garden and not indexable; Facebook is parasitic to the web. Twitter and Google are not the web either.
The hard part of any design is knowing what features can be removed, not added. HTML5 and JavaScript are fluff for most websites.
PS: HN even uses <table id="hnmain"... oh the horror.
Yeah but HN can be progressively enhanced and it's also one of the few exceptions to the rule.
<table id="hnmain"..
Indeed. Worth noting the small ecosystem of HN redesigns, which all leverage HTML5+JS in some way :)
All sites should be like an escalator: the escalator still works when the power is cut off, just to the inconvenience of its users.
I mean, a similar study would probably find "top 10,000 apps don't use 80% of OS features." Just because not a lot of people use a feature doesn't mean it shouldn't exist.
Can someone give me a good, non-gimmicky example of what this would be used for?
Most speculatively, imaging the environment with compressive imaging[1]. One might be able to flash some patterns on the screen and look at light sensor output to take a picture.
Giving web browsers access to sensors on our devices is sort of scary.
[0] http://arstechnica.com/tech-policy/2015/11/beware-of-ads-tha... [1] http://arxiv.org/pdf/1305.7181.pdf
If we stopped adding new capabilities to JavaScript even as new sensors and whatnot go mainstream, we'd start needing Flash all over again.
Not saying it's the most important case ever, but it would be important if you needed it
A great example that comes to mind is the DRM component that Netflix requires to deliver HTML5 instead of that Silverlight... thing. I cannot think of any other site that has needed it, or at least any other site I visited before it was available that suffered for it.
And still, I consider it a requirement. That feature that I require for exactly one site.
In other words, it's how often these popular blocking extensions prevent the JS APIs from firing.
It's not blocking SVG; it's blocking (mostly fingerprinting) JS libraries from running SVG JS methods.
So it's not that extensions block SVG directly; it's that AdBlock Plus and Ghostery block a bunch of libraries, and those libraries use the SVG methods to fingerprint (and do other stuff).
Obviously, we could prune the execution of some of the JS, which is backwards compatible to the early '90s, and some of the HTML, which descends from IBM's GML of the 1960s.
My super high-level first-pass optimization rec for the W3C: if a feature has existed for 3 years and a random sampling of 100,000 websites shows usage of less than 1%, it is automatically deprecated. If usage is above 1% but below 5%, it is automatically phased out of the spec within 2 years.
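The proposed rule is mechanical enough to state as code. This is purely a sketch of the comment's own thresholds (1%, 5%, 3 years); it is not any real W3C process.

```javascript
// Decide a feature's fate from its age and its usage fraction across a
// sampled set of sites, per the rule of thumb above.
function deprecationStatus(usageFraction, ageYears) {
  if (ageYears < 3) return 'keep';               // too young to judge
  if (usageFraction < 0.01) return 'deprecated'; // under 1%: drop it
  if (usageFraction < 0.05) return 'phase-out';  // 1-5%: 2-year phase-out
  return 'keep';
}

console.log(deprecationStatus(0.004, 5)); // 'deprecated'
console.log(deprecationStatus(0.03, 5));  // 'phase-out'
console.log(deprecationStatus(0.2, 5));   // 'keep'
```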
[0] NaN === NaN returns false? 0.0001 + 0.0002 !== 0.0003? Weird.
NaN means "no idea what we've got here". When you get to say that phrase, would you expect it to be used for exactly one thing and one thing only? To me "I have no idea" means the possibilities are endless (infinite). We can only compare for equality when we know what we are looking at, otherwise it's just "status unknown". Yes, it might be equality - the chances are infinitely small though.
If you take the argument a step further and say "but I got the two NaNs that I compare doing the exact same operation, so even if I don't know what I've got from a mathematical point of view whatever it is it should be equal". In that case you are not actually comparing the NaNs but the path(s) that got you there.
I must say I find the whole NaN, null, 0 vs. undefined thing interesting on so many levels. There is a world of difference between knowing you've got nothing (null or 0) and not knowing what you've got at all.
"null": I have no bank account. "0": my balance is 0. "undefined" or "NaN": I lost my memory after yesterdays binge drinking and don't know who I am and if I got a bank account or not. Knowledge vs. no knowledge.
I cannot support this.
> ... even though fairly close to Gecko-based browsers like Mozilla Firefox in the way it works, is based on a different layout engine and offers a different set of features. It aims to provide close adherence to official web standards and specifications in its implementation (with minimal compromise), and purposefully excludes a number of features to strike a balance between general use, performance, and technical advancements on the Web.
[0] https://thestack.com/security/2016/02/03/chromodo-browser-di...