https://builders.mozilla.community/ https://mozillabuilders.slack.com/
Every couple months I get an urge to give up on tech and start subsistence farming in the middle of nowhere.
The founder and CTO then proceeded to berate me in the channel for using a blocker. He had made a lot of money over the years from writing code that helped ads figure out what content to target for a user.
I still use an ad blocker, but have also added the Disable JavaScript extension to all my browsers, just in case they try to detect that I'm blocking things.
- Smart devices - buying an appliance with a closed-source, embedded device that relies on Internet connectivity and the solvency of its manufacturer in order to operate it, patch it, secure it, and maintain it is the antithesis of what this planet needs right now. When the CA root certificates installed in your no-name smart TV expire and the OEM doesn't exist/doesn't care to provide you a firmware update, these devices will become less than worthless and most likely landfill. We need industry to adopt an open framework for smart devices that helps prolong their lifespan, e.g. a public Linux repo for updates to the underlying OS - the OEMs can deploy their own user interface, but end-users should be able to pick and choose if they wish (and most won't).
- Trust - in particular X.509 certificates. A lot of progress has been made in making trust via digital certificates the default rather than a paranoid exception, with a large portion of the web being delivered over HTTPS, RPKI for BGP currently being deployed in large operators, and DNSSEC showing some (admittedly slow) signs of adoption. What is still a major problem in this area is the complexity of certificate management and renewal. The work Let's Encrypt and the EFF (certbot) have done in automating this process is fantastic, but these are still a long way from mainstream usage.
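To make the renewal pain concrete, here's a minimal sketch (Python stdlib only) of the kind of expiry check an automated renewer runs on a schedule; the 30-day window mirrors certbot's default, but treat the constant and function name as illustrative:

```python
import ssl
import time

RENEW_WINDOW_DAYS = 30  # certbot's default: renew within 30 days of expiry


def renewal_due(not_after, now=None):
    """Return True if a certificate expires within RENEW_WINDOW_DAYS.

    `not_after` is the string form found in ssl.getpeercert()['notAfter'],
    e.g. 'Jan  5 09:34:43 2030 GMT'.
    """
    expiry = ssl.cert_time_to_seconds(not_after)
    now = time.time() if now is None else now
    return (expiry - now) < RENEW_WINDOW_DAYS * 86400
```

A cron job wrapping a check like this (plus the ACME dance to actually reissue) is essentially what certbot automates; the point of the comment stands - nobody should have to write this themselves.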
Someone wise said that tools that make writing easier turn bad writers into worse writers.
Goodhart's law means that nearly all content is broken. It is judged by its view count and Google rank, so all it is good for is getting clicks and ranking well.
So much effort, so much money, ploughed into creating really, really awful content which hides away the 10% that isn't crap (Sturgeon's law?).
Moreover, the internet makes distribution nearly free and allows a nearly unlimited number of people to access information and digital media from all over the world, but lengthy (70 years or more) copyright terms make it illegal to do so in many cases. Instead, thousands or millions of person-hours are spent on the impossible task of trying to make bits behave like physical objects in order to satisfy legal and business requirements. When an organization such as the Internet Archive tries to make a digital library whose collection isn't bound by the constraints of physical libraries, they are sued by publishers for copyright infringement and are potentially liable for $150k in statutory damages per occurrence.
Static web, and run as an app web.
I kinda have this by running with JavaScript off (which solves all sorts of other issues, though much of the web that could work without it doesn't). When I need to 'run as app' I allow JS for that one visit.
This is probably just me being picky though.
I feel the web is too centralised, with half a dozen or so platforms essentially being gatekeepers of content on the web.
People link out on their sites much less than in the past, in the belief that outbound links raise the chance of being penalised by Google and hurting their rankings. Last I looked, search engines were typically responsible for delivering around 50% of visitors to a site, and Google has a near monopoly in many countries.
Wikipedia, while great, imposes a less-than-obvious set of rules and regulations on anyone adding data to it. It typically tends to rank first on all major search engines for any query.
Social media have become moral compasses in what is OK and what is not OK to talk about.
A more diversified web moving away from these 'decision makers' IMO would make it a healthier place.
The "Let the output of your program be the input of mine." philosophy was really good. Web applications do the exact opposite. All of the interesting shit is missing.
Keeping parts of a thing in separate places can be necessary at times, but it usually is not. There is the layout of an article website and there are articles in it. The articles with their videos and images are as separate from the layout as browsers are from web pages (perhaps even more so).
The article can be a separate file. Like an iframe, but as inline text. No <head>, no <script>, no <style>; if it has any of those, they are ignored.
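A sketch of that inclusion rule using Python's stdlib HTML parser: copy the fragment through unchanged, but drop <head>, <script> and <style> along with everything inside them (a toy to show the idea, not a hardened sanitizer):

```python
from html.parser import HTMLParser

IGNORED = {"head", "script", "style"}


class ArticleIncluder(HTMLParser):
    """Pass an article fragment through, ignoring head/script/style."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.depth = 0  # > 0 while inside an ignored element

    def handle_starttag(self, tag, attrs):
        if tag in IGNORED:
            self.depth += 1
        elif self.depth == 0:
            self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if tag in IGNORED:
            self.depth = max(0, self.depth - 1)
        elif self.depth == 0:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self.depth == 0:
            self.out.append(data)


def include_article(fragment):
    p = ArticleIncluder()
    p.feed(fragment)
    return "".join(p.out)
```

So `include_article('<script>evil()</script><p>Hello</p>')` yields just `<p>Hello</p>` - the included file keeps its text and markup, and everything a page shouldn't inject is silently dropped.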
The other perspective also works. The data for a physical object could be a single file: one or more images, videos, docs, the metadata. Say a product: its price and description can be put inside the image file and be made accessible as a JS object/JSON. Standards can be created, adopted and popularized. If the picture has one or more cars, the metadata can have all the usual properties of a car with standardized key names. This seems chaotic and undesirable for large shopping platforms, but if an object has a description, contact information and a price, it can be sold. Setting up a small store could be as simple as dropping images into a page. One would want to parse out some of the metadata of course, but there can be much more data available.
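One low-tech way to sketch "the product data lives inside the image file": append a JSON payload after the image's end-of-stream marker, which viewers ignore but a script can recover. The marker string and key names here are made up for illustration; a real standard would more likely use PNG tEXt chunks or EXIF fields:

```python
import json

MARKER = b"\n----PRODUCT-JSON----\n"  # invented delimiter, not a standard


def attach_metadata(image_bytes, metadata):
    """Append a JSON payload after the image data.

    Decoders stop at the image's own end-of-stream marker, so the
    trailing bytes are invisible to viewers but trivially recoverable.
    """
    return image_bytes + MARKER + json.dumps(metadata).encode("utf-8")


def read_metadata(blob):
    """Recover the JSON payload, or None if the file has none."""
    _, _, payload = blob.partition(MARKER)
    return json.loads(payload) if payload else None
```

With agreed-upon key names ("type": "car", "price", "contact", ...) a page could render a storefront from nothing but the dropped-in image files.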
It makes reasoning about the website much more like we reason about the real world.
Denying the search provider a page view might work. Have them return xml. Use the OS, browser or custom theme. Build a search extension platform on top of it, sit back and watch what people come up with. (I would filter out websites that need more than x requests besides images and video, have more than y kb js, z kb css, make big news websites into a check box in the side menu, I can think of 30 extensions right now)
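As a sketch of what "have them return XML" could enable: an invented results schema (nothing standard) plus one of those filter extensions, keeping only results for lightweight pages - the request/JS/CSS attributes and thresholds are all hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical results feed a search provider might return.
SAMPLE = """<results>
  <result url="https://example.com/a" requests="4" js_kb="12" css_kb="3"/>
  <result url="https://heavy.example/b" requests="190" js_kb="2400" css_kb="300"/>
</results>"""


def lightweight(xml_text, max_requests=20, max_js_kb=100):
    """Filter-extension sketch: keep only URLs of lightweight pages."""
    root = ET.fromstring(xml_text)
    return [r.get("url") for r in root.iter("result")
            if int(r.get("requests")) <= max_requests
            and int(r.get("js_kb")) <= max_js_kb]
```

Once results are plain data, an extension like this is a dozen lines; the "big news sites as a checkbox" idea is just another predicate over the same feed.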
I made a search bar for Opera one time (widget) that had a row of submit buttons (icons) with different targets. It sat on top so one could just click the next button. It is hard to explain but it can be quite surprising to re-send your query some place else.
MyIE used to be able to open X search results in background tabs. A mouse gesture would close one and move to the next.
As I've mentioned before, my favorite flavor of Google is Google Scholar. I'd also like to be able to slice/subset search results to get scientific, non-commercial, public interest, etc..
Mobile-everything craze. No, we don't need to turn everything into information-sparse, inflexible mobile replicas. Give us more density; there's still a zoom-in function.
kill the cloud. re-decentralize
Walled gardens - not being able to build things that integrate multiple services. Instagram live videos, combining amazon and walmart listings, google places "here now", or unified client for iOS/SMS/Discord/Slack/Teams are all off limits to ambitious developers.
Centralization - Seems like home email servers are increasingly being ban-hammered by the major providers like Gmail, Proofpoint, etc., so you have to have your mail sent by a hosted server. Cloudflare is also seemingly becoming a single point of denial/failure. Chromium can now effectively dictate W3C standards because whatever they implement instantly becomes the ground truth for web dev.
DNS/BGP - Sorely in need of a major update.
IoT - It's shockingly difficult to find IoT devices which can compute locally/privately/on-the-edge. Everything sends all the data back home and stores/processes it there.
Tracking - If websites insist on having an OAuth or email verified signup, I'd like my browser to quickly and quietly create a new profile for every domain, or even every visit to that domain. And otherwise keep all tracking/fingerprinting down to effectively zero. I'm really tired of being tracked everywhere all the time.
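A cheap approximation of those per-domain profiles can be sketched with a keyed hash: derive a stable but unlinkable identity per domain from one local secret. The plus-addressing scheme and example address are just for illustration:

```python
import hashlib
import hmac


def alias_for(domain, secret, base="me@example.com"):
    """Derive a per-domain signup identity from one local secret.

    The same (secret, domain) pair always yields the same alias, so you
    can log back in later, but two sites cannot correlate your addresses.
    """
    tag = hmac.new(secret, domain.encode(), hashlib.sha256).hexdigest()[:12]
    user, _, host = base.partition("@")
    return f"{user}+{tag}@{host}"
```

A browser doing this automatically (for email, and analogously for cookie jars and fingerprint surfaces) would give every domain its own disposable view of you.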
I've been trying to get a blackhat webspam site down (it has child porn and piracy on it, besides spam links). But no luck so far, they effectively fool Googlebot and ruin search rankings.
Not in the first years of its existence, but in the last ten years or so.
Yes, I'm aware that HN's like that, but if the price for fixing that whole mess is this small, then go ahead.
I’ve written about this here: https://medium.com/anti-content/the-shape-of-the-internet-34...
We have a patient bill of rights for example.
Email should become a pull system, where you get a notification that a message is waiting for you on server X. Server X is trusted? Then I'll take a look. Oh, it's fakegoogle@serverx; I'll block them specifically. I never see that content again, period. No one else can put a message into fakegoogle@serverx's outbox, so I can be confident that I'm not accidentally blocking valid content (no address spoofing). If I trust someone, like rawgabbit@rawgabbit.com, then I can automatically retrieve the content until they prove untrustworthy (either by their deliberate actions, or by having weak security that grants others access to their system).
This also handles not just spam, but promotional content. If someone wants to send out promotional content to 1 million customers, I can optionally retrieve it or just "unsubscribe" by never retrieving it. It's on them to continue hosting content that's never retrieved so at some point they'll detect that Jtsummers has stopped getting their content and just stop sending it to me.
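The trust rules described above are simple enough to sketch: a notification carries only the sender's address and home server, and the client alone decides whether to pull the body. All the names and the notification shape here are invented:

```python
# Toy model of pull-style mail: the message body stays on the sender's
# server until the recipient chooses to fetch it.

TRUSTED_SENDERS = {"rawgabbit@rawgabbit.com"}
BLOCKED_SENDERS = {"fakegoogle@serverx"}


def handle_notification(note, fetch):
    """note: {'server': ..., 'sender': ...}; fetch pulls the body on demand."""
    sender = note["sender"]
    if sender in BLOCKED_SENDERS:
        return None                           # never see that content again
    if sender in TRUSTED_SENDERS:
        return fetch(note["server"], sender)  # auto-retrieve trusted mail
    return None                               # unknown: leave it on their server
```

Note the economics this flips: unretrieved promotional mail costs the *sender* storage and bandwidth, not the recipient, which is exactly the incentive the comment is after.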
This is a social, not a technical, problem IMHO. It's broken beyond repair.