Are there any plans to move in this direction? It seems like if you can do it for the full app, you should hypothetically have the capability to make it library-specific. Or perhaps there are non-obvious blockers that make it too hard?
If there are plans to do this, isn't it better to do it sooner rather than later? Better to get library authors in the habit of specifying permissions/policies now while the ecosystem is still small. If you wait too long, it will be a ton of work to retrofit all the existing libs.
It attempts to do essentially what you're describing. It was built by the MetaMask team, for whom supply chain attacks are an obviously huge risk.
I've spent some time trying to get it working in an app, but haven't been able to get it all the way working. It's still pretty beta and not well documented.
Deno's approach seems most promising so far since it's really ideal to have this built in to the core runtime, but it's not really very useful yet as implemented and I don't know whether taking it further is a priority for them.
Even if you have to 'eject' and grant overly broad permissions to certain libraries, I'd imagine those would be quite a small percentage. You'd still get the huge win that the 90% (or whatever) of your dependencies that need no system or network access at all, and don't have the kind of issues you describe, are effectively removed as viable targets for attack.
For callbacks, Deno could provide a wrapper function that is only available to the top-level app (not dependencies) and causes the permissions in the callback to be evaluated at the app level, not the dependency level. There may be a better way, but that's one idea.
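To make the idea concrete, here's a toy simulation of that wrapper. No such Deno API exists today; `withAppPermissions`, the scope tracking, and the permission table are entirely hypothetical stand-ins for the mechanism being proposed:

```typescript
// Simulation only: Deno has no such API. The idea is that a callback
// the top-level app wraps keeps the app's permissions even when a
// dependency later invokes it.

type Scope = "app" | "dependency";

let currentScope: Scope = "dependency";

// Toy permission table: only the app may touch the network.
const allowed: Record<Scope, Set<string>> = {
  app: new Set(["net"]),
  dependency: new Set(),
};

function checkPermission(name: string): boolean {
  return allowed[currentScope].has(name);
}

// The hypothetical wrapper: runs a callback under the app's
// permission scope instead of the calling dependency's.
function withAppPermissions<T>(fn: () => T): T {
  const prev = currentScope;
  currentScope = "app";
  try {
    return fn();
  } finally {
    currentScope = prev;
  }
}

// A dependency checking permissions directly is denied...
console.log(checkPermission("net")); // false
// ...but a callback the app wrapped keeps app-level permissions.
console.log(withAppPermissions(() => checkPermission("net"))); // true
```

The `try`/`finally` restore matters: it keeps a dependency from escalating by throwing out of the wrapped callback.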
Static checking would be great too. I think a combination of static and runtime enforcement would be ideal.
To me, it seems like you'd need a new language.
Adding permissions will do nothing except add ridiculous overhead and complexity, such that to get anything done devs will just grant all permissions.
I don't see why permissions have to add "ridiculous overhead and complexity". Most dependencies need very limited (if any) system or network access. Locking those down would be a huge win, and it makes reviewing updates in large dependency trees realistic since you can zero in on permission changes.
The `redis` and `ioredis` `npm` packages don't work with Deno, even with the Node compat layer (I tried), so you have to use the Deno driver for Redis. But when you look that library up, it's experimental: https://deno.land/x/redis@v0.25.5
Same deal with Postgres. `pg` would not compile at all. Knex also didn't work (this was an older project). I'm assuming this is because these two packages use native Node.js plugins.
I like Deno a lot, and the out-of-the-box TypeScript support is a game changer, but I had a really tough time working with it and actually being productive.
You should take a look at Postgres.js [0] which supports Deno and TypeScript. Version 3.x was released recently and discussed in this HN thread [1].
The client is implemented in JavaScript. Queries can be written using JavaScript template literals.
[0] https://github.com/porsager/postgres [1] https://news.ycombinator.com/item?id=30794332
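To illustrate the tagged-template style: here's a simplified sketch of the pattern, not the actual Postgres.js implementation. The real library similarly turns interpolated values into bound parameters rather than splicing them into the SQL string:

```typescript
// Sketch of the tagged-template query pattern (not Postgres.js
// internals): each interpolated value becomes a numbered
// placeholder, and the raw values travel separately as parameters.
function sql(strings: TemplateStringsArray, ...values: unknown[]) {
  const text = strings.reduce(
    (acc, s, i) => acc + s + (i < values.length ? `$${i + 1}` : ""),
    "",
  );
  return { text, values };
}

const name = "alice'; drop table users; --";
const query = sql`select * from users where name = ${name}`;

console.log(query.text);   // select * from users where name = $1
console.log(query.values); // the raw value, safely parameterized
```

Because the value never gets concatenated into the query text, the injection attempt in `name` stays inert.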
I'd love to hear specifically what didn't work and how close the Deno team is to fixing those sorts of issues. Not that they owe that to us, open-source-wise; I'm just curious, because otherwise you're right: we're starting the ecosystem over from scratch.
ioredis commands hang randomly. I couldn't do GETs or SETs.
I'm hoping Deno is such a dramatic improvement on the foundation vs. Node that it's enough to convince library authors to jump over; the jury's still out on whether that flywheel will get started.
There's a lot of crap in the node.js ecosystem, no denying it.
But there's just a lot of packages in general. Many of which are really, really well designed and good.
Yes, there are a lot of packages available - but that gives you choice.
For me, a JavaScript developer who mainly uses Node.js for work, Deno is interesting and I want to use it, but the hosting story is what stops me. In Node it's easy to run production code with pm2: you can cluster it, and it's super easy to configure it to run one Node process per available core.
With Deno you can't do this, because there is no clustering available, so you kind of have to run it on single-core machines to get maximum performance out of your hardware: in other words, on cloud solutions like Deno Deploy, or a Kubernetes cluster configured to run it in single-CPU Docker containers.
I'm not interested in that, and as long as it stays that way, running Deno is unfortunately a waste of my hardware. Sure, there are web workers, and they're great for some things, but if my process dies for some reason I don't want that to halt the whole application.
pm2 start index.ts --interpreter="deno" --interpreter-args="run --allow-net --allow-write"
Of course that's not the most representative example, because a compiler can almost entirely dodge the ecosystem problem, but I thought I'd offer it anyway.
Yep, I'm feeling this right now. I was recently tasked with updating an internal Node app that hadn't been touched in about 4 years, and it's seriously one of the least fun things I've had to do in my nearly 10-year career. After hacking away at it for a couple of weeks, I told my boss that it needs a ground-up rewrite.
The bazaar approach of Node and NPM has created an absolute hellscape to develop in.
I’ve been using Node.js pretty much daily in production for almost a decade now, and it’s never been a total disaster, not even close.
Hahahah!!! So true!! ;D .. i think i've written a couple of those!
Deno has some JavaScript/TypeScript in it. On GitHub https://github.com/denoland/deno is 22.8% JavaScript and 13.2% TypeScript, and https://github.com/denoland/deno_std is 68.2% JavaScript and 31.6% TypeScript.
So to me the title is misleading about the name (Deno is certainly not named Deno.js), but not about what Deno is written in.
wat
(Deno seems worth checking out though!)
In the current market, unless you're a big company that gets flooded with applications on a daily basis, why would you ever reduce your hiring pool arbitrarily? If you're a 13-person startup with good funding, you want all the candidates you can possibly get. Excluding potentially great engineers because they've never worked with Deno doesn't make any sense.
I do like Deno personally, but this isn't a great reason to choose it.
Pygame, for example, was very popular with hobbyist game programmers in the early 2000s, as it provided a much easier way to get started with game programming than C, which was the most popular alternative back then (Unity3D was only released in 2005, and Unreal Engine only became free in 2015).
What I do remember from the early 2000s, though, was legions of aspiring game programmers struggling with C and C++.
Really, though, this seems almost like resume-keyword checking as a candidate quality check. Actually, it sounds exactly like that...
Yeah, no.
I wonder if more people just assume that to be true, heh. I kind of was expecting it, weirdly enough.
Hint: "node" is "edon" backwards. Not sure if that name is taken for something Javascripty ... * goes to check * yeah, I found [1] which seems to be 4 years old, tagline "Run browser JS in the terminal".
"node".split("").sort().join("")
Verlan is itself verlan of l’envers (backwards). It’s super common to make slang words this way.
Deno is a simple, modern and secure runtime for JavaScript, TypeScript, and WebAssembly that uses V8 and is built in Rust.
Only to have that immediately followed by the really poor practice of suggesting this as the installation method: curl -fsSL https://deno.land/install.sh | sh
This is not strictly related to Deno -- lots of software does this -- but if you're going to suggest your thing is more secure than the other guys' thing (which is implied by calling your thing secure), you shouldn't then immediately throw that credibility away.
Yes, the page offers a link to the "Releases" page at their GitHub repository. However, anyone familiar with UX will understand immediately that this effectively buries the link, and subtly makes the statement that you don't really want to bother with that other way of doing things. They also don't provide a gzipped/bzipped tarball for the Linux install, but a zip file instead, adding an additional barrier/dependency.
I understand this is an area where security is losing the tug of war to ease of distribution/access but it pains me to see it on any project, let alone the potentially good ones.
There's a reason why code signing exists as a security measure.
The most obvious case: someone compromises the installation script on the actual real deno server. Right now the webserver there is returning an HTTP/307 to an HTTP/302 to the "current" installation script file. Any compromise of the webserver makes this very dangerous.
Contrast that with proper signed packages, code signed sources, etc. There it requires compromise of the developer's systems and signing keys, which at least can be a far harder thing to attack if they're doing things securely.
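The verification step itself is cheap. Here's a sketch of the check-before-run idea, using an in-memory stand-in for the downloaded script; the file contents and the source of the published digest (a signed release page, say) are illustrative:

```typescript
// Sketch: refuse to execute a downloaded artifact unless its
// SHA-256 digest matches one published out-of-band.
import { createHash } from "node:crypto";

function sha256Hex(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Stand-in for install.sh; in practice you'd hash the file bytes.
const artifact = "#!/bin/sh\necho install";
// In practice this would come from a signed release page, not be
// computed locally from the same download.
const publishedDigest = sha256Hex(artifact);

function verified(data: string, expected: string): boolean {
  return sha256Hex(data) === expected;
}

console.log(verified(artifact, publishedDigest));               // true
console.log(verified(artifact + " tampered", publishedDigest)); // false
```

A checksum alone only moves the trust to wherever the digest is published; signing the digest is what makes compromising the webserver insufficient.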
2. copy/pasting includes invisible characters that aren't seen until executed
both of these things happen regularly
orthogonally, curl|sh (usually) circumvents the package manager and makes uninstallation difficult