I take responsibility for what happened here. My RubyGems.org account was using an insecure, reused password that had leaked to the internet in other breaches.
I made that account probably over 10 years ago, so it predated my use of password managers. I haven't used it much lately, so I didn't catch it in a 1Password audit or anything.
Sometimes we miss things despite our best efforts.
Rotate your passwords, kids.
1) Find high-value target libraries
2) Grab the usernames of accounts with push access
3) Check those against password dumps
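Step 3 above can be sketched offline. This is purely illustrative; all the data here is invented, and in the real attack the owner list would come from the public rubygems.org owners API:

```ruby
# Offline sketch of step 3 (all data invented): intersect maintainer
# accounts with an index built from leaked credential dumps.
DUMP = {
  "maintainer@example.com" => "hunter2", # email => leaked password
}.freeze

# Stand-in for the output of the rubygems.org owners API.
owners = ["maintainer@example.com", "careful@example.com"]

# Any hit is a candidate account to try against rubygems.org.
reused = owners.select { |email| DUMP.key?(email) }
```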
I feel really stupid about this, but like I said it was an oversight. I apologize and will try to do better.
that's.... rare. Well done.
Here is a summary of the exploit re-pasted from a great comment [1] written by @JanDintel on the GitHub thread:
- It sent the URL of the infected host to the attacker.
- It sent the environment variables of the infected host to the attacker. Depending on your set-up this can include credentials of services that you use e.g. database, payment service provider.
- It allowed the attacker to eval Ruby code on the infected host. The attacker needed to send a cookie, signed with their own key, containing the Ruby code to run.
- It overloaded the #authenticate method on the Identity class. Every time the method gets called, it sends the email/password to the attacker. However, I'm unsure which libraries use the Identity class; maybe someone else knows?
So... it potentially compromised your users' passwords AND (if you were on Heroku or similar, like many Rails apps are) system-level access to all your attached data stores. About as bad as it gets.
[1] https://github.com/rest-client/rest-client/issues/713#issuec...
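The #authenticate overload described above can be sketched with Ruby's Module#prepend. The Identity class shape here is assumed, and LEAKED stands in for the attacker's collection endpoint:

```ruby
LEAKED = [] # stand-in for the attacker's collection endpoint

class Identity # assumed shape; the real class lives in the victim app
  attr_reader :email

  def initialize(email)
    @email = email
  end

  def authenticate(password)
    password == "secret" # stand-in for the real credential check
  end
end

# The prepended module runs first, leaks the credentials, then defers to
# the original method via super so nothing visibly breaks for the user.
module CredentialLeak
  def authenticate(password)
    LEAKED << [email, password]
    super
  end
end
Identity.prepend(CredentialLeak)

Identity.new("user@example.com").authenticate("secret")
```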
The first hijacked version was released on August 13th.
So any gem with more than 50,000 downloads should force the gem maintainer to have MFA set up before they can publish a new version or do anything with that gem.
Having MFA is not about protecting gem maintainers; it's about protecting users. Gem maintainers should not be allowed to be careless with security by skipping MFA. It's not their choice to make.
Also worth noting that while MFA helps, it's not a panacea: MFA isn't generally compatible with automated CI/CD processes, so API keys will still be required, and those can be leaked, lost, or stolen.
One thing that would be inconvenient but would protect against that: have the API work as usual, but require an MFA-backed login to the website to approve a new release (with a page listing the IP and time of the upload). That would only make sense for heavily used gems like this one, but it seems like it would stop most issues.
2FA is generally trivial for maintainers to adopt. There is simply no excuse not to require it at this point for all new uploads. The status quo of hoping maintainers never reuse passwords, never use weak passwords, and never have their machines hacked is clearly not working, since security incidents like this keep happening every other week with RubyGems/npm/etc.
"safeguards against malicious changes to project ownership, deletion of old releases, and account takeovers. Package uploads will continue to work without users providing 2FA codes."
They are also working to enforce 2FA on uploads:
"But that's just for now. We are working on implementing per-user API keys as an alternative form of multifactor authentication in the setuptools/twine/PyPI auth flows. These will be application-specific tokens scoped to individual users/projects, so that users will be able to use token-based logins to better secure uploads. And we'll move on to working on an advanced audit trail of sensitive user actions, plus improvements to accessibility and localization for PyPI. More details are in our progress reports."
From: http://pyfound.blogspot.com/2019/06/pypi-now-supports-two-fa...
These guys with libraries that have a massive user base really have no excuse besides laziness for not having 2FA on their accounts.
We all just hope that nothing bad will happen or that it will be noticed fast enough. Accounts get compromised, maintainers quit and transfer their project, bad actors might even pay the dev of some lesser-known dependency…
I have no easy solution for this problem and of course I too use external dependencies in my projects – but it feels like it's only a matter of time till disaster will happen and most of us just ignore this problem till then.
One of the most ridiculous things about most package repositories (npm, rubygems) is how opaque they are. Linking to the GitHub repo from a package is a mere courtesy, and it's only a gentlemen's agreement that the repo actually represents the code that will get run with 'npm/gem install'. There are various ways to address this, like the package repo requiring linkage to an actual git repo that it builds from.
Commonly pitched solutions like 2FA are useful, but they don't do anything to stop a malicious actor who actually has publish rights, like the trivial attack where you simply offer to take a project off someone's hands. But diffing releases and reading source code should be absolutely trivial at the very least.
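A first step toward that diffing could be as simple as flagging files that ship in the .gem but don't exist in the git checkout. This is a minimal sketch (paths and usage assumed; a real check would also diff the contents of files present in both):

```ruby
require "rubygems/package"

# Returns files present in the packaged gem but absent from the repo
# checkout -- a cheap smell test for injected payload files.
def extra_files(gem_path, repo_dir)
  shipped = Gem::Package.new(gem_path).contents
  tracked = Dir.chdir(repo_dir) do
    Dir.glob("**/*").reject { |f| File.directory?(f) }
  end
  shipped - tracked
end
```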
I did a talk for AppsecEU back in 2015 on this topic and found good material talking about it as a risk going back years before that...
Is there any fix at all? Aside from something like multiple-account code signing/release verification I cannot think of something that couldn't be compromised in some way.
At the end of the day you have to trust someone and trust that they trust someone else. The problem is you have no way of vetting the entire dependency chain. You may have reviewed gem/package A but you aren't going to (realistically) review all of its dependencies and those dependencies' dependencies.
At this point it's all a "many eyes" approach. And it seems to be working relatively effectively.
https://news.ycombinator.com/item?id=20377136
This is a major new line of attack, and web app infrastructure is critically weak to it. We rejected distro-controlled package management in favor of pip and gem and npm years ago (for good reasons), but as this sort of attack becomes much more common (which it will), we might find ourselves missing the days of strong central control.
Rubygems should have acted on the Strong_password news, but missed the opportunity. I hope that they can get their act together now that they are lucky enough to have a second chance before this style of attack really explodes.
All the major language repos are free at the point of use, and I don't get the impression the maintainers are exactly rolling in money, so it doesn't seem likely that they can easily ramp up on that front.
While I'm generally a fan of distribution-provided packages, they would not have helped in this case. Distributions simply lack the manpower to audit all upstream releases for these kinds of issues.
Version 1.7.0 was released to rubygems on 8th July 2014, and 2.0.0 on 2nd July 2016, so anyone who has started using rest-client or run a `bundle update` recently is unlikely to be affected.
The impact could have been significantly greater had the hijacker pushed new versions of 1.8.x or 2.x as well, so it's very fortunate the breach was spotted now.
I only have a passing familiarity with ruby gems, so I may be completely wrong.
This situation definitely lends itself to a push for 2FA by default on all rubygems.
cd ~/code  # where all my projects live
grep -r --include='Gemfile.lock' -e 'rest-client \(1\.6\.1[0-3]\)' .
https://github.com/rest-client/rest-client/issues/713#issuec...
With the software supply chain being as complex as it is, and the large number of moving parts, we're only going to see more of these ...
- Would you pay money for access to a package repository that had good security practices? How much would you pay? Would you accept delays in library updates to allow for security checks? If so, how long a delay would be acceptable?
- Have you ever looked into the security practices of open source applications or libraries that you wanted to use, and did the information you found (or didn't find) affect your decision to use that software?
- How often do you use your inventory of all the libraries you depend on to periodically check on the provenance of those libraries and confirm they are maintaining good security practices?
Ultimately these problems (like most) are one of incentives. It's very easy to build software very quickly using the huge number of open source components that are freely available.
While speed of development and price are the primary considerations, it's not surprising that security sits on a lower rung of the ladder.
Additionally, GitHub provides a bundler-audit-like service for free. And identifying releases that aren't CVEs yet seems like something scriptable, but also obfuscatable (on the attacker's end).
I don't think any team I've been on (federal or private) would pay extra for this kind of service, given the frequency with which we do updates and the amount of careful developer effort those updates take. The most recent breaches were noted well in advance of any updates we would have done.
I'd be glad to be wrong about it because I think a few more tools in this space would only help the community.
It would be nice if a service could summarize the differences between actual gem releases, making things like the changelog and the diff easily digestible for all the available updates (versus a scan/lint). That would let developers identify these kinds of breaches more easily than cloning and diffing.
There should be a very limited number of known external URLs that your production system needs to hit. Whitelist them on a proxy. Block the rest. Dump the blocked requests into a log. Put alerts on the log. That will catch most data-exfiltration attempts and attacks such as this. Remember, the goal is not to have perfect security -- your goal is to have better security than someone else, so that someone else gets to be the chump and not you.
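As a hedged illustration only, an egress allowlist in a proxy like Squid might look like this (the domains are invented):

```
# Allow only known egress destinations; log-and-deny everything else.
acl egress_ok dstdomain .rubygems.org .my-payment-provider.example
http_access allow egress_ok
http_access deny all
# Denied requests show up in access.log as TCP_DENIED -- alert on those.
```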
It would help to have a "published with MFA" flag on every gem release, along with a Bundler setting to block installing gems that weren't published with MFA.
Of course, this would also help attackers find targets. But maybe it's worth the trade-off?
What about all the attacks where the malicious actor is someone with publish rights, like friendly package takeover? Your proposal makes that even more effective since now the attacker gets a nice "published with mfa" badge.
I'm wondering why RubyGems wouldn't implement some basic form of malware detection. This type of code shouldn't be too hard to classify.
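As a hedged sketch of how naive such a classifier could start out (the patterns here are assumed, not taken from any real scanner, and a static list like this is trivially obfuscated around):

```ruby
# Naive signature scan over gem source (patterns assumed, not exhaustive).
SUSPICIOUS = [
  /\beval\s*\(/,          # dynamic code execution
  /Base64\.decode64/,     # commonly used to hide payloads
  /Net::HTTP.*pastebin/i, # fetching code from a paste site
].freeze

def suspicious?(source)
  SUSPICIOUS.any? { |re| source.match?(re) }
end
```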
https://guides.rubygems.org/security/
> However, this method of securing gems is not widely used. It requires a number of manual steps on the part of the developer, and there is no well-established chain of trust for gem signing keys. Discussion of new signing models such as X509 and OpenPGP is going on in the rubygems-trust wiki, the RubyGems-Developers list and in IRC. The goal is to improve (or replace) the signing system so that it is easy for authors and transparent for users.
But to answer your question, yes there is a system for signing gems, though it's not widely used: https://guides.rubygems.org/security/
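The gemspec side of that signing support looks roughly like this (names and paths assumed; the key pair would come from `gem cert --build you@example.com`, and users can then enforce trust with `gem install mygem -P HighSecurity`):

```ruby
# Sketch of a signed gemspec: signing_key and cert_chain are the two
# fields that opt a gem into RubyGems' signature support.
spec = Gem::Specification.new do |s|
  s.name        = "mygem"
  s.version     = "1.0.0"
  s.summary     = "example"
  s.authors     = ["you"]
  s.signing_key = File.expand_path("~/.gem/gem-private_key.pem")
  s.cert_chain  = ["certs/you.pem"]
end
```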
When consuming APIs and not thinking about this, we are only building technical debt and security issues for the future.
Today's example is really bad since it targets a well-used meta API open-source library, but how many of those issues are already present on hundreds of other obscure open-source API clients?
Then you get the delight of likely jurisdictional issues, if it turns out the attacker is not a resident of the same country as the victim that reported it.
And the same address is linked to a Malta based company that appears on the "Panama Papers".
For a long time, environment variables have been evangelized as the secure place to store credentials and the like, but that just gives third-party scripts a known place to look.
You could argue that it might actually be more secure to store your secrets in a separate, custom config file that gets read into the Rails app via an initializer or something.
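A minimal sketch of that approach (the filename and layout are assumed; in a Rails app the load would live in an initializer and the file would be gitignored and provisioned at deploy time, not created in code as it is here for the sake of a runnable example):

```ruby
require "yaml"

# For the sketch we create the secrets file inline; in practice it is
# provisioned at deploy time and never committed.
File.write("secrets.local.yml", { "db_password" => "s3cr3t" }.to_yaml)

SECRETS = YAML.safe_load(File.read("secrets.local.yml")).freeze
DB_PASSWORD = SECRETS.fetch("db_password")
# Unlike ENV["DB_PASSWORD"], a gem that blindly dumps ENV never sees this;
# it would have to know this app's specific file path and constant name.
```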