Including ones like:
1 AND ('1' = SUBSTRING((SELECT social_security_number FROM employees WHERE employee_name = 'Angela Smith'), 1, 1))
You can use variations on this to...
a) Ask our librarian for a series of about 50 books and hear whether or not she has them in stock.
b) Read Angela Smith's Social Security number right out of the database.
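The trick in (a) and (b) is blind SQL injection: the page only tells you whether a query matched, but that one bit, asked repeatedly, spells out the secret. A minimal sketch using an in-memory SQLite database (the schema, titles, and SSN value are all made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (employee_name TEXT, social_security_number TEXT)")
conn.execute("INSERT INTO employees VALUES ('Angela Smith', '078051120')")
conn.execute("CREATE TABLE books (id INTEGER, title TEXT)")
conn.execute("INSERT INTO books VALUES (1, 'Moby Dick')")

def lookup_book_vulnerable(book_id):
    # Vulnerable: user input is pasted straight into the SQL text, so a
    # crafted book_id changes the meaning of the whole query.
    query = "SELECT title FROM books WHERE id = " + book_id
    return conn.execute(query).fetchall()

# The "50 books" payload: the listing page only says found/not-found,
# but that answer leaks whether the SSN's first digit is '0'.
payload = ("1 AND ('0' = SUBSTR((SELECT social_security_number FROM employees "
           "WHERE employee_name = 'Angela Smith'), 1, 1))")

def lookup_book_safe(book_id):
    # Parameterized: the driver treats book_id strictly as a value,
    # never as SQL, so the same payload matches nothing.
    return conn.execute("SELECT title FROM books WHERE id = ?", (book_id,)).fetchall()
```

With the vulnerable lookup, `payload` returns the book row exactly when the guessed digit is right; with the parameterized one it returns nothing, which is why placeholders are the standard fix.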
There apparently exist a lot of people on HN who would prefer to think that, despite my near-magical ability to correctly divine the SSN of any employee (or any other piece of data in the DB) with a SQL injection attack, the fact that I'm just looking at a book listing page in a totally authorized fashion means I must not be doing anything wrong.
You: Can you provide me with Angela Smith's email address?
Librarian: Sure, here you go.
Later, Librarian's manager: You weren't supposed to give out that information!
Librarian: Oops. I had the wrong access rules.
Librarian's manager: Let's call the cops on that guy.
It's his fault that you gave him the information he wasn't supposed to have.

There is a difference between:
You: Can you give me the email address of user 50?
Librarian: Sure, here you go
Librarian: Oh balls, I wasn't supposed to hand that over, that could have been anyone!
And You: Can you give me the email address of user 50?
Librarian: Sure, here you go
You: Hmm
You-irc: Hey guise! The librarian is giving out everyones email addresses, this is totally breaking privacy laws right?
You-irc: lols, I'm going to get all of them! This could be used for a massive phishing operation
You-irc: or even make their stock price drop, we could short it
You: Hey librarian, can you give me the email address of user 51?
Librarian: Sure, here you go
You: Hey librarian, can you give me the email address of user 52?
Librarian: Sure, here you go
You: Hey librarian, can you give me the email address of user 53?
Librarian: Sure, here you go
...
You: Hey librarian, can you give me the email address of user 1023821?
Librarian: Sure, here you go
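The exchange above is an insecure-direct-object-reference scrape: sequential IDs, no check on who is asking. As a toy sketch (names and addresses are hypothetical, not AT&T's actual interface):

```python
# A toy model of the endpoint: an email looked up by a sequential numeric ID.
EMAILS = {50: "user50@example.com", 51: "user51@example.com", 52: "user52@example.com"}

def get_email_no_auth(record_id):
    # The librarian above: any caller gets any record.
    return EMAILS.get(record_id)

def get_email_with_auth(record_id, requester_id):
    # Minimal fix: only hand back the record tied to the authenticated caller.
    if record_id != requester_id:
        return None
    return EMAILS.get(record_id)

# The whole dialogue collapses to one enumeration loop:
harvest = {i: get_email_no_auth(i) for i in range(50, 53)}
```

The one-line ownership check is all that separates "directory service" from "this could have been anyone".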
He didn't grab one or two, then send the information to AT&T to get them to fix it. He deliberately collected a significant amount of data he knew was personal information and gave it to someone else. That alone would be enough. If he just wanted to verify that the attack worked, get the code of someone else who gives you permission, show that the codes can be easily generated, and you're done. You don't need more than a few to prove the point.

The service was clearly not intended to be a directory of email addresses for people to use. It was clearly there to return the email address to the user of the iPad with that ICC-ID code (which, unlike the IDs in my example, aren't obviously guessable).
I'm not going to say anything about the sentence, but I do think he was guilty.
He knew what he was doing was illegal and didn't care; he got caught and tried to justify his actions by blaming AT&T for having a badly configured server.
Not good enough for me and the jury agreed.
Me: Can you provide me with all of guelo's money?
Bank: Sure, here you go.
Also, when I approach your house:

Me: I have these lock picks. Will you let me in?
Lock: Sure thing, boss!

For me this is rather a good argument against using home banking (which I indeed don't use, for security reasons - and as a computer scientist I'm surely not technologically backward).
UPDATE: if money is lying around on the street you are not allowed to keep it (just as I should not be allowed to keep the "money lying around on the internet"), but you can claim the legitimate finder's reward.
My lock isn't authorised to give permission to anyone. Lock picks are forcing it, not requesting it.
You're drawing a false dichotomy based on premises that nobody in this thread has actually raised. It's entirely reasonable to a) disagree with the law, b) believe that SQL injections can be illegal based on some other rationale, and c) disagree with others on the appropriate penalty for SQL injections.
Set aside the SQL injection. Suppose there's a bug in Apache's path parsing such that using "\" instead of "/" causes it to interpret it as an escaped string, which somehow (bear with me) causes it to run exec("/bin/rm -r /"). Now some n00b comes along and uses "\" in the path, because he's used to paths on MSDOS; crashing the server. Whose problem is it? The client's, for sending a malformed request? How do you expect the client to know that the "\" will trigger a catastrophe? Or what if the client made a mistake, and while he thought it was "some query string" in his cut buffer, it turned out to be "; drop table *" (or something like that). Now whose problem is it?
If the server willy-nilly takes any input and doesn't check it, it is the server's fault.
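The server-side obligation amounts to a guard clause: accept only what the interface actually promises and reject everything else before it reaches SQL or a shell. An illustrative sketch (the function name and ID format are assumptions for the example):

```python
import re

def parse_record_id(raw):
    # Accept only what the interface promises -- a short numeric ID --
    # so a stray "\" or a pasted "; drop table *" never gets interpreted.
    if not re.fullmatch(r"\d{1,9}", raw):
        raise ValueError("malformed record id: %r" % (raw,))
    return int(raw)
```

With this in place, the n00b's malformed path or the accidental cut-buffer paste gets a clean error back instead of triggering a catastrophe, regardless of whose "fault" the request was.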
Weev intentionally exploited an information disclosure flaw. Should he have gone to jail for that? No, I don't think so at all. But the scenario you're presenting has no relation to what happened here.
And if you're going to bring up the UserAgent spoofing, let me remind you that most browsers have done something like that for > 15 years.
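For context, changing the User-Agent is a routine one-line client-side choice; every major browser still identifies itself as "Mozilla/5.0" for compatibility, which is "spoofing" by the same logic. A sketch with Python's standard library (the URL and UA string are illustrative; no request is actually sent):

```python
from urllib.request import Request

# Build a request that claims to be an iPad's browser. Constructing a
# Request object only sets headers; it does not touch the network.
req = Request(
    "http://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (iPad; CPU OS 3_2 like Mac OS X)"},
)
```

Note that `urllib` stores header names capitalized, so the header is retrievable as `req.get_header("User-agent")`.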
The closest analog I can think of would be giving the librarian drugs to modify his behavior before asking him to perform some act or provide some information he normally would not. Giving the librarian a brownie before requesting access to the staff lounge would probably not alter his behavior nor be treated as a crime. Giving him a brownie laced with scopolamine before requesting access to the staff lounge would be, even if scopolamine had no dangerous side-effects.
[0] One might reasonably expect an SQL injection string to return a legitimate resource on a documentation site or general-purpose search engine, for example.
Therefore, I see SQL injections as sloppy programming, but physical break-ins as sloppy ethics. IMHO YMMV IANAL KTHXBYE.
What if your site uses Wordpress or some CMS, and it has a SQL injection zero day that is then exploited to gain access? Even if you did due diligence, kept your kernel and all your software up to date, and generally secured the server and the application as best you could, you could still be entirely unaware of flaws lurking within.
It'd be more comparable to the lock on your front door being vulnerable to easy lockpicking with a paperclip and 4 seconds. You're still not "allowing them to break in" by being sloppy (it's not like you left the door unlocked), but the manufacturer of the lock was sloppy and as a result, someone is able to break in without any "brute force".
When using a proprietary, paid for web service or app you can blame the service provider.
When hosting OSS code on your own server, exactly this is what the NO WARRANTY section in the license is about, thus making it fully your responsibility to go over the code or to accept that bugs and security vulnerabilities happen.
Edit:
To all those talking about the skill level of the individual - if you are using a proprietary service, you can easily point the finger at the service provider. In the case of OSS code, the license is there to remind you that you are taking responsibility for being competent enough to use the code yourself.
If your house was broken into because the lock was shoddily installed by a locksmith, you might have some legal recourse (though, IIRC, you may be required to validate & disclaim the install) but if you were to install the lock yourself, you have nobody to blame.
OK, but did they do anything wrong with SQL-injecting/breaking-a-window? I mean, if someone smashes your window, can you call the cops and/or sue them for damages? In order to call the cops on the window smasher, you have to acknowledge they did something wrong.
Not if you make a distinction between using a service and breaking a service. Analogize with entering vs. breaking and entering. In many cases it is valid to punish someone for bypassing security. But if a system has no security by design, then there is nothing to bypass. SQL injection, on the other hand, is always bypassing the design of the code, and loses any presumption of authorization.
P.S. Yes, there will be edge cases. There are always edge cases. But this is not an edge case. The lack of security was definitely a design decision, not a software bug.
> Analogize with entering vs. breaking and entering.
From Free Dictionary [1]:
breaking and entering v., n. entering a residence or other enclosed property through the slightest amount of force (**even pushing open a door**), without authorization.
Emphasis mine. If pushing open a door counts, so does changing the user agent header or auto-incrementing IDs. The key here is "without authorization". As that analogizes with the Weev situation and our librarian, much of the debate I see here on HN seems to hinge on whether that means authorization in some technical sense or in the sense meant by that breaking and entering definition. I submit that it means the latter, in part because that reflects how we see authorization in other contexts (like doors we're not supposed to enter) and in part because I don't think the first view holds up to scrutiny on its own terms.

Let me explain what I mean by that. Consider two situations:
1. A server responds to requests for email addresses without checking whether the user is authorized to receive that information.
2. A server responds to all requests without checking whether they contain injected SQL.
In pure technology terms, these aren't really any different. In either case, the server is simply failing to check that the request has certain properties (came from the right user/does not contain context escapes). But of course case 2 is particularly nefarious, because passing SQL directly to the database is clearly not the intent of that interface, and that it's not the intent is obvious to the SQL injector. So the intent of the software is what matters. But then it's what matters in case 1 too. The lack of technical enforcement of this intent is not the issue.
Which leads me to some clarity about this:
> Not if you make a distinction between using a service and breaking a service.
I believe this use/break distinction exists, but the distinction isn't something that's determined by the code or the vaguer "design of the code"; it's determined by the purpose of the service. The service as it would be described functionally, not technically. The service was not meant to be used by Weev and his scraper any more than our librarian's database was meant to be used by SQL injection, or any more than your unencrypted traffic was meant to be intercepted by my packet sniffer (you did nothing to not authorize me to see it!). To drive that home, the library's hapless database admin who foolishly decides to update the list of books using her own SQL injection bug is not hacking, because she is authorized to fiddle with the database, even though, in your terms, it's bypassing the design of the code.
In other words, authorization is not the same as the technical artifacts involved in authorization. More generally, I don't think being bad at making software justifies people accessing it when they know it's not meant for them.
As the law is actually applied here, I am somewhat sympathetic to Weev, on the grounds that I don't think it was a serious crime (imagine if it were bank account information!), but that's a quantitative issue, not a qualitative one. I can also see that there are many cases where it isn't obvious whether something is meant for you to access or not (like a door that seems to lead into a public place but which turns out to be a private space), and I can imagine there being issues there. But this isn't one of them.
[1] http://legal-dictionary.thefreedictionary.com/breaking+and+e...
I'm not so sure about that. If it's not pushing to request record 334, why is it pushing to request record 335?
But I digress. Normally making standard web requests is analogized to looking, without touching. You have explicit authorization to go through the front door, and anything 'bad' you did inside was restricted to what you looked at.
>I believe this use/break distinction exists, but the distinction isn't something that's determined by the code or the vaguer "design of the code"; it's determined by the purpose of the service.
But then you get into the realm of having TOS be legally binding, no matter how inane they are. This seems a far worse alternative.
>To drive that home, the library's hapless database admin who foolishly decides to update the list of books using her own SQL injection bug is not hacking, because she is authorized to fiddle with the database, even though, in your terms, it's bypassing the design of the code.
That's why I only said they lose the presumption of authorization. If all you know is someone SQL injected, you have to resort to other means to figure out if it was authorized. For example, if they already have equivalent access through non-code-bug means, and they simply prefer SQL injection, then there is no problem. But if they were doing it to avoid audit logs, there might be a huge problem.
>In other words, authorization is not the same as the technical artifacts involved in authorization. More generally, I don't think being bad at making software justifies people accessing it when they know it's not meant for them.
When it comes purely to accessing it, when it's non-HIPAA/etc. data, I don't think there needs to be very much justification.
And I don't see 'has no password' as a technical artifact. Details of web servers don't need to be involved here. The design is wrong on a fundamental, user-understandable level.