Dave and toDesktop have built a product that serves many people really well, but I'd encourage everyone building desktop software (no matter how, with or without toDesktop!) to really understand everything involved in compiling, signing, and releasing your builds. In my projects, I often argue against too much abstraction and long dependency chains in those processes.
If you're an Electron developer (like the apps mentioned), I recommend:
* Build with Electron Forge, which is maintained by Electron and uses @electron/windows-sign and @electron/osx-sign directly. No magic. (A minimal config sketch follows this list.)
* For Windows signing, use Azure Trusted Signing, which signs just-in-time. That's relatively new and offers some additional recovery mechanisms in the worst case.
* You probably want to rotate your certificates if you ever gave anyone else access.
* Lastly, you should probably be the only one with the keys to your update server.
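To make the Forge suggestion concrete, here's a minimal sketch of what the signing config can look like; the identity string and environment variable names are placeholders, and the exact options are documented by @electron/osx-sign, @electron/notarize, and @electron/windows-sign:

```js
// forge.config.js: minimal signing sketch (placeholder identity and env vars)
module.exports = {
  packagerConfig: {
    // Options forwarded to @electron/osx-sign
    osxSign: {
      identity: 'Developer ID Application: Example Corp (TEAMID1234)',
    },
    // Notarization via @electron/notarize
    osxNotarize: {
      appleId: process.env.APPLE_ID,
      appleIdPassword: process.env.APPLE_APP_SPECIFIC_PASSWORD,
      teamId: process.env.APPLE_TEAM_ID,
    },
    // Options forwarded to @electron/windows-sign; point this at your
    // certificate or a just-in-time signing hook (e.g. Azure Trusted Signing)
    windowsSign: {},
  },
};
```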
It is your duty to make sure _all_ of your users are able to continue using the same software they installed, in exactly the same way, for the reasonable lifetime of their contract, the package, or the underlying system (and that lifetime is measured in years or decades, with the goal of forever where possible, not months).
You can, if you must, include an update notification, but this absolutely cannot disrupt the user's experience; no popups, do not require action, include an "ignore forever" button. If you have a good product with genuinely good feature improvements, users will voluntarily upgrade to a new package. If they don't, that is why you have a sales team.
Additionally, more broadly, it is not your app's job to handle updates. That is the job of your operating system and its package manager. But I understand that Windows is behind in this regard, so it is acceptable to compromise there.
We go a step further at my company. Any customer is able to request any previous version of their package at any time, and we provide them with a download page or overnight-ship them a CD (and now a USB drive too) free of charge.
That sounds like a good idea. Unless you’re the vendor, and instead of 1000 support requests for version N, you’re now facing 100 support requests for version N, 100 for N−1, 100 for N−2, …, and 100 for N−9.
For the vast realm of <$300/year products, the ones that actually use updaters, all your suggestions are completely unviable.
Sure. I’d rather have it be provided by the platform. It’s a lot of work to maintain for 5 OSs (3 desktop, 2 mobile).
> we should try our best to release complete software to users that will work as close to forever as possible
This isn't feasible. Last time I tried to support old systems in my app, the vendor (Apple) had stopped supporting them and didn't even provide free VMs. Windows 10 is scheduled to lose support this year (afaik). On Linux, glibc or GTK will mess with any GUI app after a few years. If Microsoft, Google, and Apple can't, why the hell should I as a solo app developer? Plus, I have 5 platforms to worry about; they only have their own.
> Touching files on a user's system should be treated as a rare special occurrence.
Huh? That’s why I built an app and not a website in the first place. My app is networked both p2p and to api and does file transfers. And I’m supposed to not touch files?
> If a server is involved with the app, build a stable interface and think long and hard about every change.
Believe me, I do. These changes are as scary as database migrations. But like those, you can't avoid them forever. And for those cases, you need at the very least to let the user know what’s happening. That’s half of the update infrastructure.
Big picture, I can agree with the sentiment that ship fast culture has gone too far with apps and also we rely on cloud way too much. That’s what the local first movement is about.
At the same time, I disagree with the generalization seemingly based on a narrow stereotype of an app. For most non-tech users, non-disruptive background updates are ideal. This is what iOS does overnight when charging and on WiFi.
I have nothing against disabling auto updates for those who like to update their own software, but as a default it would lead to massive amounts of stale non-working software.
Linux: I got burned again just yesterday. The Proxmox distribution doesn't have the package I need in its repository.
I try the Ubuntu package - it does not work.
I try the Debian one - the version is too old.
How do I solve this? By learning some details of how Linux distributions and repositories work, struggling some more, and finding a custom-built .deb. Okay, I can do it, kind of, but what about a non-IT person?
Software without dependencies is awesome. That's why Docker is something I respect a lot: it allows (kind of) the same model.
Auto-updaters are the most practical and efficient way of pushing updates in today's world. As pointed out by others, the alternative would be to go through the app store's update mechanism (if the app is distributed via an app store in the first place), and many people avoid the Microsoft Store/macOS App Store whenever possible. And no developer likes that process.
Apart from, maybe, Linux distros, neither Apple nor Microsoft provides anything to handle updates that isn't a proprietary store with shitty rules.
For sure the rules are broken on desktop OSs, but in the meantime you still have to distribute and update your software. Should the update be automatic? No. Should you provide an easy way to update? I'd say that in the end it depends on whether you think it's important to provide updates to your users. But should you expect your users or their OSs to somehow update your app by themselves? Nope.
Have a shoe-box key: a key which is copied 2*N (redundancy) times, with N copies stored in each of 2 shoe-boxes. It can be on tape, or optical, or silicon, or paper. This key always stays offline. This is your rootiest of root keys for your products, and almost nothing is signed by it. The next key down, which the shoe-box key signs (ideally, the only thing it ever signs), is for all intents and purposes your acting "root certificate authority" key, running hot in whatever highly secure signing enclave you'd design for any other ordinary root CA setup. Then continue from there.
Your hot and running root CA could get totally pwned, and as long as you had come to Jesus with your shoe-box key and religiously never ever interacted with it or put it online in any way, you can sign a new acting root CA key with it and sign a revocation for the old one. Then put the shoe-box away.
You can pick up a hardware security module for a few thousand bucks. No excuse not to.
I've noticed a lot of websites import scripts from other sites instead of hosting them locally.
<script src="scriptscdn.com/libv1.3">
I almost never see a hash in there. Is this as dangerous as it looks? Why don't people just use a hash?
2. Because that requires you to know how to find the hash and add it.
Truthfully the burden should be on the third party that's serving the script (where did you copy that HTML in the first place?), but they aren't incentivized to have other sites use a hash.
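For what it's worth, generating the value yourself is only a few lines of Node; the URL below is the hypothetical CDN from the earlier comment, and the printed string is what goes into the script tag's `integrity` attribute (alongside `crossorigin="anonymous"`):

```js
// sri-hash.js: compute a Subresource Integrity value for a remote script
// (hypothetical URL; any HTTPS-hosted file works the same way)
const { createHash } = require('node:crypto');

fetch('https://scriptscdn.example/libv1.3.js')
  .then((res) => res.arrayBuffer())
  .then((buf) => {
    const digest = createHash('sha384').update(Buffer.from(buf)).digest('base64');
    console.log(`integrity="sha384-${digest}"`);
  });
```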
I have always put Windows signing on hold due to the cost of a commercial certificate.
Is the Azure Trusted Signing significantly cheaper than obtaining a commercial certificate? Can I run it on my CI as part of my build pipeline?
I wrote @electron/windows-sign specifically to cover it: https://github.com/electron/windows-sign
Reference implementation: https://github.com/felixrieseberg/windows95/blob/master/forg...
There's plenty of magic. I think Electron Forge does too many things, like trying to be the bundler. Is it possible to set up a custom build system / bundling with it, or are you forced to use Vite? I guess that even if you can, you still pull in all those dependencies when you install it, and naturally you can't opt out of that. The dev dependencies involved in the build process are higher-impact than some production dependencies that run in a sandboxed tab process (because a tiny malicious dependency could insert any code into the app's fully privileged process). I have not shipped my app yet, but I am betting on esbuild (because it's just one Go binary) and electron-builder (electron.build).
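For anyone curious what that looks like in practice, a minimal sketch of the esbuild side (entry points and output paths are placeholders; electron-builder is configured separately in package.json or electron-builder.yml):

```js
// build.js: bundle the Electron main and preload scripts with esbuild;
// 'electron' stays external because Electron provides it at runtime
const esbuild = require('esbuild');

esbuild.buildSync({
  entryPoints: ['src/main.ts', 'src/preload.ts'],
  bundle: true,
  platform: 'node',
  external: ['electron'],
  outdir: 'dist',
  minify: true,
  sourcemap: false,
});
```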
I built one code signing system after being the “rubber duck” for a gentleman who built another, and both used HSM cards and not cheap ones. Not those shitty little USB ones. One protected cellphones, the other protected commercial aviation.
I recently checked it out as an alternative to renewing our signing cert, but it doesn't support issuing EV certs.
My understanding is that an EV code signing cert on Windows is required for drivers, but it somehow also gives you better SmartScreen reputation, making it useful even for user-space apps in enterprisey settings?
Not sure if this is FUD spread by the EV CAs or not, though.
GitHub should be ashamed that this possibility even exists, and doubly ashamed that their permission system and UX are so poorly conceived that they lead apps to ask for all the permissions.
IMO, GitHub should spend significant effort so that the default is to present the user with a list of repos they want a given GitHub integration to have permissions for, and then, for each repo, the specific permissions needed. It should be designed so that minimal permissions are encouraged.
As it is, the path of least resistance is for app devs to say "give me root" and for users to say "ok, sure".
By design, the gh cli wants write access to everything on github you can access.
I’m not sure how much of this is “standard” for an org though.
It won't have licenses or anything, so if somebody wants to distribute it outside my website they will be able to do so.
If I just want to point to an exe file link in S3 without auto updates, should just compiling and uploading be enough?
This vulnerability was genuinely embarrassing, and I'm sorry we let it happen. After thorough internal and third-party audits, we've fundamentally restructured our security practices to ensure this scenario can't recur. Full details are covered in the linked write-up. Special thanks to Eva for responsibly reporting this.
Hubris. Does not inspire confidence.
> We resolved the vulnerability within 26 hours of its initial report, and additional security audits were completed by February 2025.
After reading the vulnerability report, I am impressed at how quickly you guys jumped on the fix, so kudos. Did the security audit lead to any significant remediation work? If you weren't following PoLP, I wonder what else may have been overlooked?
Yes, we re-architected our build container as part of the remediation efforts; it was quite significant.
Life is complex and vulnerabilities happen. They quickly contacted the reporter (instead of sending email to spam) and deployed a fix.
> we've fundamentally restructured our security practices to ensure this scenario can't recur
People in this thread seem furious about this one and I don't really know why. Other than needing to unpack some "enterprise" language, I view this as "we fixed some shit and got tests to notify us if it happens again".
To everyone saying "how can you be sure that it will NEVER happen": maybe because they removed all full-privileged admin tokens and are only using scoped tokens? This is a small misdirection: they aren't saying "vulnerabilities won't happen", but that "exactly this one" won't.
So Dave, good job to your team for handling the issue decently. Quick patches and public disclosure are also more than welcome. One tip I'd take from this is to use less "enterprise" language in security topics (or people will eat you alive in the comments).
Point taken on enterprise language. I think we did a decent job of keeping it readable in our disclosure write-up but you’re 100% right, my comment above could have been written much more plainly.
Our disclosure write-up: https://www.todesktop.com/blog/posts/security-incident-at-to...
Were the logs independent of firebase? (Could someone exploiting this vulnerability have cleaned up after themselves in the logs?)
> No malicious usage was detected
Curious to hear about methods used if OK to share, something like STRIDE maybe?
> Completed a review of the logs. Confirming all identified activity was from the researcher (verified by IP Address and user agent).
These kinds of "never happen again" statements never age well, and make no sense to even put forward.
A more pragmatic response might look like: something similar can and probably will happen again, just like any other bug. Here are the engineering standards we use ..., here is how they compare to our peers of our size ..., here are our goals ..., here is how we know when to improve ...
Who knows what else was vulnerable in your infrastructure when you leaked .encrypted like that.
It should have been on your customers to decide if they still wanted to use your services.
They were compensated, but the write-up doesn't elaborate.
If they didn't pay you a cent, you have no liability here.
So this is to say: at what point should we start pointing the finger at Google for allowing developers to shoot themselves in the foot so easily? Granted, I don't have much experience with Firebase, but to me this just screams that something about the configuration process is being improperly communicated, or that it's just too convoluted as a whole.
Details like proper usage, security, etc. Those are often overlooked. Google isn't to blame if you ship a paid product without running a security audit.
I use firebase essentially for hobbyist projects for me and my friends.
If I had to guess, these issues come about because developers are rushing to market. Not Google's fault ... What works for a prototype isn't production-ready.
Arguably, if you provide a service that makes it trivial to create security issues (that is to say, you have to go out of your way to use it correctly) then it's your fault. If making it secure means making it somewhat less convenient, it's 100% your fault for not making it less convenient.
Kudos to cursor for compensating here. They aren't necessarily obliged to do so, but doing so demonstrates some level of commitment to security and community.
Just want to make sure I understand this. They made a hello-world app and submitted it to todesktop with a postinstall script that opened a reverse shell on the todesktop build machine? Maybe I missed it, but that shouldn't be possible. The build machine shouldn't have open outbound internet access, right?? Didn't see that explained clearly, but maybe I'm missing something or misunderstanding.
Like, effectively the "build machine" here is a locked down docker container that runs "git clone && npm build", right? How do you do either of those activities without outbound network access?
And outbound network access is enough on its own to create a reverse shell, even without any open inbound ports.
The miss here isn't that the build container had network access, it's that the build container both ran untrusted code, and had access to secrets.
Unfortunately, in some ecosystems, even downloading packages using the native package managers is unsafe because of postinstall scripts or equivalent.
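To make the mechanics concrete, a hypothetical sketch: any command placed in a lifecycle script runs automatically during `npm install`, with the installing user's (or build container's) environment and filesystem access, which is exactly why flags like `--ignore-scripts` exist. (The package name and command here are made up.)

```json
{
  "name": "hello-world-app",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node -e \"console.log(Object.keys(process.env))\""
  }
}
```

Swap that harmless one-liner for anything else and you have arbitrary code execution on whatever machine runs the install.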
If you're providing a build container service then you pretty much have to run untrusted code (the customer's) in the container, yes? So then the problem is really just the bad Firebase config... ?
- You can have a submission process that accepts a package or downloads dependencies, and then passes it to another machine on an isolated network for code execution / build, which then returns the built package and logs to the network-facing machine for consumption.
Now, sure, if your build machine still exposes everything on it to the user-supplied code (instead of sandboxing the actual npm build/make/etc. command), you could insert malicious code that zips up the whole filesystem, env vars, etc. and exfiltrates them through your built app, in this case snagging the secrets.
I don't disagree that the secrets on the build machine were the big miss, but I also think designing the build system differently could have helped.
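Even short of a fully isolated network, one piece of that idea can be illustrated in a few lines: run the untrusted build step with an explicitly allow-listed environment, so credentials held by the orchestrating process never reach it. The paths and the choice of variables here are hypothetical, and a real sandbox would also restrict filesystem and network access:

```js
// run-build.js: run the customer's build with an allow-listed environment,
// so secrets present in the parent process are never visible to it
const { spawnSync } = require('node:child_process');

const result = spawnSync('npm', ['run', 'build'], {
  cwd: '/work/untrusted-checkout',
  env: { PATH: process.env.PATH, HOME: '/tmp/build-home' }, // no signing/deploy secrets
  stdio: 'inherit',
});

process.exit(result.status ?? 1);
```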
I don't get it. Why would it be "todesktop's fault", when all the mentioned companies allowed it to push updates?
I had these kinds of discussions with naive developers giving _full access_ to GitHub orgs to various 3rd-party apps -- that's never right!
From the ToDesktop incident report:
> This leak occurred because the build container had broader permissions than necessary, allowing a postinstall script in an application's package.json to retrieve Firebase credentials. We have since changed our architecture so that this can not happen again, see the "Infrastructure and tooling" and "Access control and authentication" sections above for more information about our fixes.
I'm curious to know what the trial and error here was to get their machine to spit out the build, or if it was done in one shot.
This service is a kind of "app store" for JS applications installed on desktop machines. Their service hosts download assets, with a small installer/updater application running on the users' desktop that pulls from the download assets.
The vulnerability worked like this: the way application publishers interact with the service is to hand it a typical JS application source code repo, which the service builds, in a container in typical CI fashion. Therefore the app publisher has complete control over the build environment.
Meanwhile, the service performs security-critical operations inside that same container, using credentials from the container image. Furthermore, the key material used to perform these operations is valid for all applications, not just the one being built.
These two properties of the system: 1. build system trusts the application publisher (typical, not too surprising) and 2. build environment holds secrets that allow compromise of the entire system (not typical, very surprising), over all publishers not just the current one, allow a malicious app publisher to subvert other publishers' applications.
It also is; they are responsible for which tech pieces they pick when constructing their own puzzle.
From an Electron bundler service, to sourcemap extraction and now an exposed package.json with the container keys to deploy any app update to anyone's machine.
This isn't the only one, the other day Claude CLI got a full source code leak via the same method from its sourcemaps being exposed.
But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
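For anyone who hasn't looked inside one: a .map file is plain JSON, and its `sources`/`sourcesContent` fields usually carry the original files verbatim, so "extracting the source" is roughly this (the filename is a placeholder):

```js
// dump-sourcemap.js: write out the original sources embedded in a source map
const fs = require('node:fs');

const map = JSON.parse(fs.readFileSync('bundle.js.map', 'utf8'));
fs.mkdirSync('recovered', { recursive: true });

(map.sources || []).forEach((name, i) => {
  const content = map.sourcesContent && map.sourcesContent[i];
  if (content != null) {
    // Flatten the path so every recovered file lands in one directory
    fs.writeFileSync(`recovered/${i}-${name.replace(/[^\w.-]/g, '_')}`, content);
  }
});
```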
You've always been able to do the first thing though: the only thing you can do is obfuscate the source map, but it's not like that's a substantial slowdown when you're hunting for authentication points (identify API URLs, work backwards).
And things like credentials in package.json are just a sickness that is global to computing right now: we have so many ways to deploy credentials, basically zero common APIs that aren't globals (files or API keys), and even fewer security tools that acknowledge the real danger (protecting me from my computer's system files is far less valuable than protecting me from code pretending to be me as my own user, where all the really valuable data already is).
Basically, I'm not convinced our security model has ever truly evolved beyond the 1970s, when the danger was "you damage the expensive computer" rather than "the data on the computer is worth orders of magnitude more than the computer".
It truly is a community issue; it's not a matter of the language.
You will never live down fucking left-pad
Does this info about the fix seem alarming to anyone else? It's not a full description, so maybe some important details are left out? My understanding is that containers are generally not considered a secure enough boundary. Companies such as AWS use micro VMs (Firecracker) for secure multi tenant container workloads.
I know that sounds kind of sad that my brain can't focus that well (and it is), but I appreciated the cat.
I would've expected IDE developers to "roll their own"
- A paid operating system (RHEL) with a team of paid developers and maintainers verifying builds and dependencies.
- No dependencies. Only what the core language provides.
It's not that great of a sacrifice. Like $20/mo for the OS, and like 2 days of dev work, which pays for itself in the long run by avoiding a mass of code you don't understand.
Do they have to?
Isn't this notion making developers sloppy?
I made Signal fix this, but most apps consider it working as intended. We learned nothing from SolarWinds.
(eg: companies still hosting some kind of integrity checking service themselves and the download is verified against that… likely there’s smarter ideas)
The user experience of auto-update is great, but having a single fatal link in the chain seems worrying. Can we secure it better?
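One low-tech version of that idea, sketched here with placeholder URLs: the updater fetches the artifact from the CDN but checks its digest against a hash published on infrastructure the vendor controls directly, so compromising the distribution channel alone isn't enough. (There are smarter schemes; this is just the shape of it.)

```js
// verify-update.js: download an update and check it against an
// independently hosted SHA-256 digest before installing (placeholder URLs)
const { createHash } = require('node:crypto');

async function fetchVerifiedUpdate() {
  const [artifact, expected] = await Promise.all([
    fetch('https://cdn.example.com/myapp-1.2.3.zip').then((r) => r.arrayBuffer()),
    fetch('https://vendor.example.com/releases/myapp-1.2.3.sha256').then((r) => r.text()),
  ]);

  const actual = createHash('sha256').update(Buffer.from(artifact)).digest('hex');
  if (actual !== expected.trim()) {
    throw new Error('Update failed integrity check; refusing to install');
  }
  return Buffer.from(artifact);
}

fetchVerifiedUpdate().then(() => console.log('update verified'));
```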
We have reviewed logs and inspected app bundles. No malicious usage was detected. There were no malicious builds or releases of applications from the ToDesktop platform.
Is there an easy way to validate the version of Cursor one is running against the updated version by checking a hash or the like?
This is why we need to remove incompetent product managers that have no clue and somehow are in the position to control what developers can work on.
This was an excellent conclusion for the article.
Bit too hyperbolic or whatever... Otherwise thrilling read!
This is completely incompetent to the point of gross negligence. There is no excuse for this
With that culture, supply-chain attacks and this kind of vulnerability will keep happening a lot.
You want few dependencies, you want them to be widely used and you want them to be stable. Pulling in a tree of modules to check if something is odd or even isn't a good idea.
2. Release your product.
What?! It's not some kind of joke. This could _already_ literally kill people, steal money, and ruin lives.
Avoiding responsibility for the decisions that determine the security and safety of users isn't even an option for any app owner/author.
It's as simple as this: no safety record from the 3rd party - no trust, for sure. No security audit - no trust. No transparency in the audit - no trust.
Failing to make the right decision does not exempt you from liability, and it should not.
Is this kindergarten, with an "it's not me, it's them" game? It does not matter who failed; money could already have been stolen from random people (who just installed an app wrapped with this todesktop installer), and journalists could have been tracked and probably already killed in some dictatorship or conflict zone.
Bad decisions do not always make a bad owner.
But don't take it lightly, and don't advocate "oh, they are innocent" on behalf of those who just paid you some money, because they are not. Be a grown-up, please, and let's make this world better together.
They can even charge for it ;)
Solution: more LLMs
Snap out of it
https://docs.github.com/en/code-security/code-scanning/intro...
These people, the ones who install dependencies (that install dependencies)+, these people who write apps with AI, who in the previous season looped between executing their code and searching for the error on Stack Overflow.
Whether they work for a company or have their own startup, the moment that they start charging money, they need to be held liable when shit happens.
When they make it their business model or employability advantage to take free code from the internet, add pumpkin spice, and charge cash for it, they cross the line from pissing off passionate hackers by defiling our craft to dumping in the pool and ruining it for users and us.
It is not sufficient to write somewhere in a contract that something is "as is" and we hold harmless and this and that. Buddy, if you download an AI tool to write an AI tool to write an AI tool and you decide to slap a password in there, you are playing with big guns; if it gets leaked, you are putting other services at risk, but let's call that a misdemeanor. We need to reserve something stronger for when your program fails silently, and someone paid you for it, and they relied on your program and acted on it.
That's worse than a vulnerability; there is no shared responsibility. At least with a vuln you can argue that it wasn't all your fault, that someone else actively caused harm. Now, are we to believe the greater risk of installing 19k dependencies and programming AI with AI is vulns? No! We have a certainty, not a risk, that they will fuck it up.
Eventually we should license the field, but for now, we gotta hold devs liable.
Give those of us who do 10 times less, but do it right, some kind of marketing advantage; it shouldn't be legal that they are competing with us. A VS Code fork got how much in VC funding?
My brothers, let's take up arms and defend. And defend quality software, I say. Fear not writing code, fear not writing raw HTML; fear not, for they don't feel fear, so why should you?
Join me my brother or sister
"Quality over quantity" should be the way, but I think it has failed in every single sector. Food. Healthcare. Education. Manufacturing. Construction. ...
Quality is expensive.