Now that more background is coming out, I think he probably did the Rails community at large a huge favor here. Had this just been fixed quietly on GitHub, that would certainly have been better for GitHub's PR, but the wider community might never have realized the lurking horror that the Rails team appeared unwilling to do anything about, other than point people to the existing docs.
This situation shows that pointing people to those docs was clearly an inadequate solution. If GitHub (arguably the poster child for Rails apps outside of 37signals' own apps) could fuck this up, anyone using Rails could. All of this exposure to the problem is a net positive for everyone using Rails other than GitHub and the core Rails team, IMO.
And I say all this as someone who has never professionally developed for Rails. My experience with Rails consists of a couple of half-done toy projects. I find it pretty surprising that GitHub made this mistake. But I don't think they should be burned at the stake for it. The bigger problem was how they initially handled the issue, which they're trying to rectify now.
[1] However, Hartl recommends using attr_accessible at the model level, while DHH says this preventative measure should be implemented in the controller, i.e.:
class PostsController < ActionController::Base
  def create
    Post.create(post_params)
  end

  def update
    Post.find(params[:id]).update_attributes!(post_params)
  end

  private

  def post_params
    params[:post].slice(:title, :content)
  end
end

Please read this post written by meric, then give the counterarguments if you have them:
http://news.ycombinator.com/item?id=3665083
"I don't believe it is the stunt that costed github five to six figures. The loss of wealth was already there from day 1 when Github developers did not read Rails documentation and/or when Rails decided to make attributes publicly accessible by default. Today it is merely a "correction" where instead of Github's customer losing confidential company information without knowing it is now Github bearing the costs upfront, as it should be. In the "emperor has no clothes" story would you say it was the kid who pointed out the emperor had no clothes caused the emperor's embarrassment?"
You can also compare the whole context with the misfeature of PHP:
EDIT: not 'escaping', but using hashes or formatted strings, etc., you get the idea.
In general we tend to see security flaws as programming flaws. In other words, the programmer makes a mistake and therefore there is a security hole. The problem with this approach is that programmers make mistakes all the time.
Certainly it is impossible to take all the weight off the programmer's shoulders. Mistakes will always allow a program to be misused, misdirected, and so forth. However, most security issues are best solved as architecture problems, not as developer problems.
For example, input sanitization is generally a bad idea* beyond rejecting certain things we should never see in inputs. It's far better to find ways of making the input safe for the database and for the web interface that don't depend on it being sanitized on input.
* This is because you can only sanitize based on how you want things to go on output, whether you are outputting from your program to the db or to a user interface of some sort. If you sanitize for HTML, you can't use the same info reasonably well for LaTeX, etc.
So I am of the considered opinion that data should be checked for basic sanity on input (no termination sequences in the middle of input strings, etc), and escaped on use or output. If you have a framework to do this, then you centralize that logic so you don't have to think about it every time. This drastically cuts down on things like XSS, SQL Injection, and the like.
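A minimal plain-Ruby sketch of that escape-on-use idea, using only the standard library's CGI module (the render_comment helper name is hypothetical): store the raw string untouched, and escape it only at the output boundary.

```ruby
require 'cgi'

# Hypothetical output-boundary helper: the raw comment text is stored as-is,
# and escaping happens only when it is rendered into an HTML context.
def render_comment(raw)
  CGI.escapeHTML(raw)
end

raw = %(<script>alert("xss")</script>)

puts render_comment(raw)
# Because the stored string was never pre-mangled, the same raw data could be
# escaped differently for LaTeX, SQL bind parameters, shell arguments, etc.
```

Centralizing a helper like this in the framework is exactly the "don't have to think about it every time" point above.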
This sort of thing again strikes me as something the framework should prevent. That's not necessarily a flaw of Rails if you use Rails as a toolkit for your own application frameworks. However, it is an architectural flaw, not a programming mistake.
Rails, by default, does things like escaping input and output strings, CSRF protection, masking password fields in the logs, etc. So why doesn't it do the same with attribute assignment?
Given that (1) the nature of the suspension was not communicated to Egor at the onset of the situation, nor
(2) noted in the blog post [1] describing how Github "detected the attack",
I am inclined to believe that this is a response to the furious reaction to their suspension decision and was not, as this post implies, the game plan from the beginning.
It's healthy that they've reversed their suspension but the lack of transparency (not to mention potential dissembling) on the decision process regarding the revocation is still troubling.
[1] https://github.com/blog/1068-public-key-security-vulnerabili...
It takes time and energy to come up with responses such as this (not a lot, but every bit counts in an emergency), and those are resources you should be using to solve the problem. Not to mention that saying the wrong thing is worse than saying nothing at all.
It's basic emergency management: 1) stabilize the situation, 2) figure out what's going on, 3) fix it, and then 4) explain what happened to the stakeholders. I see nothing wrong with github's actions here.
1. Making sure he doesn't continue breaking into your service (by suspending his account)
2. Fixing the security flaw he used to break into your service
3. Apprising your users of the situation.
I feel for the kid--he's just 18, and if he gets some good judgment to go along with his technical skill he'll go far. But I don't understand the nerdrage at Github. People trust Github's service and their software to protect proprietary code; their response has been everything you could hope for in the interests of 100%-1 of Github stakeholders, at the expense of not communicating well to Egor why and how long they were suspending him for breaking into their service.
I think we can all agree that it certainly didn't help them fix the vulnerability or communicate with users, right? (Actually, it arguably did the reverse...) But it also did absolutely nothing to stop him from breaking into the service; the exploit works for any user, and Github allows anyone to create an account instantly and for free. Until the exploit was fixed, anyone including him could have created an account and exploited their service.
A better defence of GitHub would be that they couldn't have been expected to know that, and so they shouldn't be slammed for doing something unproductive and pointless that distracted them from the three core priorities you list above. And I agree! If something looks fishy, banning everyone involved and sorting it out later is actually a pretty decent idea... even if (as here) it proves to be a complete waste of time in retrospect.
Edit: spelling
That you have to use attr_accessible has been known throughout the Rails community since "ever". Only toy apps don't use it.
It's like saving passwords in plaintext, only arguably even worse.
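For readers who haven't used it: attr_accessible is a whitelist applied during mass assignment. A rough plain-Ruby simulation of the mechanism (hypothetical Post class, not the real Rails implementation, no Rails required) looks something like this:

```ruby
# Rough simulation of attr_accessible-style whitelisting: during mass
# assignment, keys outside the whitelist are silently dropped.
class Post
  ACCESSIBLE = [:title, :content].freeze # stands in for attr_accessible

  attr_accessor :title, :content, :published

  def assign_attributes(attrs)
    attrs.each do |key, value|
      next unless ACCESSIBLE.include?(key.to_sym)
      public_send("#{key}=", value)
    end
  end
end

post = Post.new
post.assign_attributes(title: "Hi", content: "Body", published: true)
post.title      # => "Hi"
post.published  # => nil: the attacker-supplied attribute never landed
```

Without the whitelist check, a request parameter like published=true (or, in the GitHub case, another user's id on a public key) sails straight into the model.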
@user.public_keys.build(...)
.. where @user is retrieved in a role-based manner (that is, you only get the right @user if you are authorized to get it.)

Ultimately, this is less an issue of mass assignment specifically and more an overarching one of allowing a user to perform an action in the guise of another. But, of course, these mistakes are commonly made by developers of all skill levels! :-) (me included)
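The role-based retrieval idea can be sketched in plain Ruby (the User class and user_scoped_to helper are hypothetical names; a real app would scope the lookup through the authenticated session or an authorization library):

```ruby
# Hypothetical sketch: the only User object you can obtain is one the
# authenticated session is allowed to act as, so a request can never
# attach public keys to someone else's account.
class User
  attr_reader :id, :public_keys

  def initialize(id)
    @id = id
    @public_keys = []
  end
end

# Returns the requested user only if the session user *is* that user.
def user_scoped_to(session_user, requested_id)
  session_user.id == requested_id ? session_user : nil
end

alice   = User.new(1)
mallory = User.new(2)

user_scoped_to(alice, 1)   # alice acting as herself: allowed
user_scoped_to(mallory, 1) # => nil: mallory cannot act as alice
```

The point is that authorization lives in how the object is retrieved, not in what the attacker puts in the request body.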
Are suspensions not by definition temporary?
As a general rule, it's extremely hard for organizations to admit their own mistakes.
One thing we do in LSMB is to "declassify" security issues when we do a full disclosure (usually two weeks after the patch is released).
The idea is that we fix, patch, announce the patch, wait, then issue full disclosure.
GitHub has no business demanding his, or your, agreement to a legal contract that prohibits you from exercising your best judgment in such a case.
Furthermore, "responsible disclosure" is a propaganda euphemism for "allowing irresponsible vendors to cover their asses, possibly at the expense of their users". Terms like "responsible disclosure" have no place in a serious discussion. Please see the blog post by the Google security team at http://googleonlinesecurity.blogspot.com/2010/07/rebooting-r... for further details.
If this is how you react to someone who WANTS to tell you about a serious problem, what percentage of the people who don't love you enough to put a tattoo on themselves are likely to report an issue rather than sell it to one of the many buyers of 'sploits who exist out there?
The reality is that these folks generally don't want to hurt you; they just want you to understand the thing you won't admit. When it happens and you've got egg on your face, grow a pair and own up to the fact that you/the system failed, and GIVE PROPS. Fix the issue, move on, and award the guy who did you a solid by finding the issue his 15 minutes of fame.
He doesn't deserve props from GitHub; he just exploited their app to make a point (to Rails core). He never disclosed anything to GitHub, from what I can tell.
http://help.github.com/responsible-disclosure/
"white hat researchers are always appreciated"
They need to stop trying to cover their ass and just apologize for suspending the guy.
Did this take place in private? I can't see any evidence of this from his issue on the rails repo.
For those of you who didn't see, they were quick to update people and be on the ball: https://github.com/blog/1068-public-key-security-vulnerabili...
The security community has long had an accepted standard of responsible disclosure, which involves informing the vulnerable party beforehand and allowing them time to fix the problem before publicly disclosing it.
Publishing a vulnerability before giving those vulnerable a chance to fix it is irresponsible; using it to compromise a system is criminal. He was getting off light with a mere account suspension; GitHub could push for a criminal prosecution resulting in deportation and serious jail time for his actions.
It doesn't matter what he did after the compromise (whether it was benevolent or not), the compromise of an account not held by him puts him clearly into the "black-hat" category.
That's not what "black-hat" means.
Gaining unauthorized access to an account and using it falls under pretty much any standard definition of "black-hat", and in practical terms breaks computer security laws in pretty much all legal jurisdictions which have them.
There was a great post a couple of days back that in effect said: It's not a matter of _if_ your security will be compromised but _when_. By being open to your users disclosing this information you're helping to keep your product secure. IMHO 37signals does a good job of this by linking and giving credit to those that have discovered security flaws in their apps (http://37signals.com/security-response).
Github is actually pretty similar given that the commercial side of Github is fuelled by the free, open-source side. There is a feeling of community ownership. Going straight for the ban without some thought and soul-searching is confusing fixing an issue with making a more detailed judgment about whether the user ought to be on the site at all.
IMHO, Github played it about right.