I used to work at a large public university. One day, a grad student brought me his laptop and asked if I would take a look at it because "the Internet [was] really slow." It turned out that his computer was part of a botnet controlled via IRC, and it was being used to attack hosts on the Intertubes.
After sniffing the IP address + port of the IRC server and the channel name and password the botnet was using, I joined the channel with a regular IRC client. "/who #channel" listed thousands of compromised clients, including hundreds with .edu hostnames. (One university had a dozen hosts from .hr.[university].edu in the channel. Sleep tight knowing your direct deposit information is in good hands.)
There was no way I could notify everyone, so I concentrated on e-mailing abuse@ the .edu domains. In my e-mails, I explained who I was and where I worked, that one of our computers had been compromised by hackers (yeah yeah terminology), and that in the course of investigating, I found that computers at their university had also been compromised by the same hackers. I also included a list of the compromised hostnames at their university and the IRC server's information so their networking people could look for other compromised hosts connected to the IRC server if they wanted to. Relatively basic IT stuff.
I didn't get replies from the majority of the universities I sent messages to, including the .hr.[university].edu one. I got a few thank yous, but I got just as many replies from IT Security Officers and CIOs (including at big name universities) accusing me of hacking their computers and demanding that I stop immediately or face legal action.
Those people just didn't understand, and they were in charge of (or ultimately responsible for) their universities' IT security efforts... It was completely mind-boggling to me at the time.
You like the magic and you need a few practitioners but when things start getting weird, it's pitchfork o'clock.
I have a lurking feeling that in spite of all of the technologist/futurist optimism in our community, we are likely underestimating the pushback from the world at large when enough people at the same time are finally put out of work due to the same technological innovation we strive so furiously for in our own lives.
Had you known about it, you could've got in touch with the "watch desk" and passed this information along. The watch desk has contacts for security folks at the majority of .edu's (in the US, anyway). I'd guess that about half of these "zombies" would have been offline in less than 24 hours.
I know this doesn't do you any good now, but in the event that someone else reading this discovers a security issue at a .edu in the future, I'd recommend contacting the watch desk before anyone else (either via phone or PGP-encrypted e-mail). They will, depending on severity, for example, call the .edu's security people's cell phones at 3 a.m. and wake them up, if it is warranted.
I was a member of REN-ISAC when I worked at a .edu. It is a vetted and very trusted community. Breaches of trust are dealt with quickly and severely. Any information you pass off to REN-ISAC will remain in good hands.
I tried to explain that she was wrong, but you can guess how well that went.
Engineers aren't in charge, anywhere, other than tech companies.
Next semester, though, I refused to sign the new AUP (which included a clause allowing the computer center staff to seize any computer I was using, even at my off-campus home), and they kicked me out of school. (Actually what happened was they locked my course registration account, and wouldn't reinstate it until I signed the policy in their presence. I refused.)
(Sadly, I can't find the full-disclosure thread for this bug. I guess I posted it to my blog, which I deleted after being threatened by school administrators. Oh well. That was 9 years ago!)
It all depends on what rules there are, and how they are enforced/interpreted.
For me, it was a way to steal the AFS space of the previous user (basically, they didn't expire the token... oops). I actually found the initial vulnerability by accident (something crashed due to network problems, reconnected and went, "WTF, those aren't my files!"), but I did find a good way to reproduce it on demand (yank Ethernet cord at proper time). Thankfully, I had read enough stories like this way back then and submitted the bug anonymously. This was ~2000 or around then, mind you.
I also tried to get university management to switch people over to using SSH way back in 1998, but it was something like 4-5 years before they eventually did so. I'm guessing they had no idea what I was talking about or why it even mattered back then, even though anyone could see everyone's passwords going over the wire with all the people who had to telnet for various reasons. Maybe they assumed that log file they were writing our activity to would catch anybody doing anything weird? It was cleverly named "resugol"--read that backwards if you're confused.
FWIW, the exams are quite thought-provoking nearly 10 years later; here's a link to them: http://cr.yp.to/2004-494.html
That scared the crap out of me though and I realized this was a VERY bad idea. Something as harmless as trying to help someone make their website more secure can get you more jail time than robbing a bank.
I also, completely accidentally, logged into another student's account at my university (a big university too). The school gives you an ID number. Your initial password is the same as this ID, and you're supposed to change it later. I didn't remember my ID correctly, swapped two numbers in it, and ended up in someone else's account. Home address, phone number -- all sorts of information staring me in the face. Will I report this issue? Heck no!
It's weird how many of these I discover by accident. My school also had a hackathon hosted by eBay and PayPal. In fact, one of the programmers from PayPal was there. During the hackathon, I stumbled upon a way to get account information without authentication (security tokens were being seriously misused). The PayPal guy was shocked and asked me to send him all the information on what I had found. Never did get any sort of reward out of that... (and I lost the hackathon too).
This meme of "more jail time than robbing a bank" needs to end.
The federal penalty for possessing a firearm while robbing a bank is a mandatory minimum of 5 years and a maximum of life in prison. The mandatory minimum means that a judge could not sentence an armed bank robber to less than 5 years for each bank robbed while holding a gun (you don't even need to show it; just having it is enough). To make it worse, each 5-year gun sentence must run _consecutively_ with each other sentence (i.e., be added on after you serve the other sentences) [1]. If you brandish the gun, it becomes a mandatory minimum of 7 years, and if you fire it, you get a mandatory minimum of 10 years [1].
Contrast that to all of the hacking charges we've discussed recently where the mandatory minimum is zero (a judge could sentence a convicted defendant to no penalty, or to probation).
To go further, the US Sentencing Guidelines [2], which are all but mandatory for federal judges (there's a constitutional out, but in effect most defendants are sentenced according to the Guidelines), give "wire fraud" a base offense level of 7 (of 42+), which yields a sentencing range of either 0-6 months or 4-10 months, depending on how much economic harm is caused. Compare that to robbing a bank, which has a base offense level of 22; brandishing a firearm adds +5 for an offense level of 27, and if you actually make off with any cash, add another +2 for an offense level of 29 (of 42+). The sentencing guidelines call for a sentence of 87-108 months (7-9 years) for a first-time bank robber, per bank, assuming that nobody gets hurt, plus the mandatory additional 5+ years for having a gun.
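The offense-level arithmetic above can be sketched in a few lines. The levels and adders are the ones cited in this comment (not legal advice, and the Guidelines themselves contain many more adjustments):

```python
# Offense-level arithmetic as described in the comment above.
# These constants restate the comment's figures, not the full Guidelines.

WIRE_FRAUD_BASE = 7       # base offense level for wire fraud
BANK_ROBBERY_BASE = 22    # base offense level for bank robbery
BRANDISH_FIREARM = 5      # adder for brandishing a firearm
TOOK_CASH = 2             # adder for actually making off with cash

armed_robbery_level = BANK_ROBBERY_BASE + BRANDISH_FIREARM + TOOK_CASH
print(armed_robbery_level)  # 29, as stated above, vs. 7 for wire fraud
```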
Realistically, bank robbers face a lot more time than even malicious computer criminals.
[1] See section (c) of 18 USC 924 http://www.law.cornell.edu/uscode/text/18/924
What's more, you don't even have to have a gun for it to be classed as "armed robbery". In the UK, just the threat of having a firearm is enough (you could be brandishing a water pistol or even just making a gun gesture behind your unzipped coat).
http://www.spiegel.de/international/zeitgeist/berlin-bank-ro...
Not that robbing a bank is all that profitable versus the risks and penalties.
I wondered about the security of that solution, so I checked some random ID numbers and shockingly found out that about 80% of people hadn't changed their passwords! (I don't remember if you were actually prompted to change it upon first login, or if you just had to do it yourself.) I could log in multiple times from the same IP to different accounts.
I hesitated between notifying someone about it and checking out a library copy of "Mathematical analysis 1" or something like that on behalf of some 100 people in the middle of the holidays, within half an hour. That would have been hilarious, but they would inevitably have thrown me out of the university if they found out, so I didn't risk it, nor did I notify anyone, given the horror stories here and there.
Long story short, my manager disabled the firewall and we were hacked that night. I was let go the following day unceremoniously. I discovered soon after that the company blamed me for the attack, saying I turned the firewall off and hacked the servers myself.
The school immediately started expulsion proceedings without even contacting me. Fortunately, my advisor personally addressed the issue and had everything dropped. The drama only lasted a few days, but the school's brain-dead response to the issue gave me zero confidence in their ability to review anything objectively. I was so disgusted I refused to walk in the graduation ceremony, much to my parents' disappointment.
The actions of Mr. Al-Khabaz were unlawful and unethical. If he had only accidentally found the flaw and reported it to the responsible person, things would have been fine. But security testing without the permission of the system owner is the same as an unauthorized access attempt!
I have worked as a security professional for 7 years, and I recently gave a guest lecture at a college discussing an example like this. Most students were not aware of where the problem lies. Maybe it would help to imagine how a story like this would look in the physical world: let's suppose you come back home and find someone picking your door lock with a lockpicking tool. You ask him, "What are you doing?" and he says, "I'm just checking whether your lock is safe. I do it for your security." Would you believe him? Or would you call the police immediately, without asking him anything? Let's add to this that security testing tools can sometimes degrade the tested system's performance or even crash it. In that case, it's not just an unauthorized access attempt, but a successful denial-of-service attack!
Never, ever, do security testing of a system without the written permission of the system owner. If you get the permission, you will probably be asked to sign an NDA in return. You will also need to provide some information, like the source IP address you're using and emergency contacts that can be used to stop the testing in case of problems (like crashes, etc.). This is the only lawful and ethical way to perform this kind of procedure on someone else's system.
I'm not discussing whether the penalty is OK in this case. That hardly matters if most people here cannot even tell what he did wrong in the first place.
Not that I disagree with you: always ask for permission in writing from an authorized person before performing any kind of scan or security testing.
When someone is scanning your system and you haven't authorized it, you will definitely treat it as malicious. In that moment, you don't care about the attacker's motives, because your system is under attack and you'd better act accordingly.
I know a story about a guy who lost his job because of unauthorized Nessus scanning at his company. Every story with a convicted hacker involves some kind of scanning tool (at least nmap) used in the scanning phase, you can bet on it. Every scanning tool is an attack tool. In fact, scanners are the most useful tools for any kind of attack, because they minimize the amount of manual effort needed.
I don't know much about Canadian law, but most current laws forbid unauthorized access and _attempts_ at it.
Orthogonal to this fact is the question of what happens when an authority is brought in to resolve the conflict. And something young hackers need to learn as early as possible is that you are not entitled to due process in every possible context. It would be unlawful if you were not given the chance of a fair trial in a criminal or civil lawsuit, but this does not translate well to private institutions.
In the particular case of a student's unauthorized access within a university, this problem is compounded by the fact that the university and its representatives play the roles of prosecutor, judge, jury, and (sometimes) defense. You also have to consider that the people doing this are not legal professionals but are pulled away from their real jobs to sort out some random mess, so the only constraint is their common sense. I've even heard a first-hand report of a case at my university where the faculty member supposedly playing "defense" was the most gung-ho about giving the boot to the guy in question (who ended up getting a one-term suspension but got to keep his scholarship, so it could have gone much worse).
This is probably not "fair", but it is the way it is, and nobody seems interested enough to change it. Education has a number of stakeholders with sometimes conflicting preferences and goals, so this is not a trivial problem.
But the point is that once your actions put you in harm's way, the abstract concepts of "fairness" and "proportionality of punishment" are academic at best. My opinion is that legality is the bare minimum standard society imposes to keep barbarism at bay, and it is pretty rough itself. So it is in your best interest to conduct yourself in such a way that appeals to "the rules" happen as little as possible.
P.S. Since the author of the article is known for siding with students against organizations, the whole story could be one-sided, and it would be good to judge only after hearing the other side. E.g., it might not have been the first incident, or there may be traces of something more than just a security inspection.
The main problem with unauthorized testing (putting aside the technical problems) is that the person who performs it is in a _very_ difficult position when explaining her intentions. She has already done what is considered the _second_ stage of a hacker attack. Until she can prove her good intentions, this is rightfully treated as a malicious attack.
This is what my equation means. I think everybody on this forum should be aware of this. Don't get yourself in trouble for not knowing this.
what can happen when production Web applications are tested, including:
Email floods
Junk data inserted into databases
News feeds filling with random input
Log files filling up
Accounts getting locked out
Internet bandwidth consumption
Scans that take longer to complete
High server and database utilization
Incident response teams and managed security providers having to deal with alerts
Final cleanup needed after the fact
In a general sense, it's not difficult to find instances of behaviour that, while lawful, are far from ethical, so those two things don't necessarily travel together. Some examples: http://en.wikipedia.org/wiki/Sexual_Sterilization_Act_of_Alb... http://en.wikipedia.org/wiki/Canadian_Indian_residential_sch... Obviously this could be a long list...
In this specific instance it seems that his information was exposed by this flaw along with everyone else's. Wanting to verify the safety of your own information feels like a pretty reasonable and ethical thing.
I think I would rephrase your example a little: "Let's suppose you let someone store their stuff at your house, and you come back home and find them picking your door lock with a lockpicking tool. You ask them, "What are you doing?" and they say, "I'm just checking whether your lock is safe. I do it for your security." Would you believe them?"
If you are in the business of finding vulnerabilities in IT systems, you should be aware of this. If for nothing else, to save yourself from situations like this one.
This guy is not a security professional (yet), but running vulnerability scanners on other people's systems definitely puts him in that context.
More accurate would be catching your tenant picking every single apartment's lock to prove that their personal lock is vulnerable.
Warning the system owner doesn't give you the ability to run pen tests if they do not wish you to do so.
I'm not sure he should be expelled, but definitely reprimanded.
The industry and the legal system don't have a pigeonhole for that. You'll be labeled a "hacker" (and not in the positive sense). Either disclose the vulnerability immediately to get recognition, hoping it is public enough that they'll be ashamed to go after you, or sell it and profit from it. You are already treated as a criminal by these large institutions, so if you go in that direction you might as well make some money.
After testing this on my own account, I reported it right away to the university. They thanked me and fixed the problem within days.
But after reading these horror stories, I feel extremely lucky that they didn't do something much stupider. My entire academic career could have been destroyed, as well as my professional one if they'd decided to press frivolous charges.
People who go after security bug reporters tend to never fix the bugs in question. They're, like, too righteous for it.
Two days later, Mr. Al-Khabaz decided to run a software program called Acunetix, designed to test for vulnerabilities in websites, to ensure that the issues he and Mija had identified had been corrected.
If you find a security flaw in a system and report it, receiving positive feedback doesn't automatically imply that you have permission to conduct further tests. A web application vulnerability scanner can cause damage to production systems.
Almost anyone can just download a scanner and run a wild test using default settings. But it's illegal to do it without prior authorization.
While his intentions were good, I think it was a bit naive of him to take it upon himself to make sure the flaws were fixed and conduct a test. Even when you have permission to conduct a test, you stick to the scope and limits of the agreement. You can't just keep leapfrogging networks as you find holes.
Manually finding holes/bugs accidentally and reporting them is different from running a vulnerability scanner.
I don't think he should have been expelled without being given a chance to explain his story, and the way they did it was not ethical. The management overreacted, especially considering there were no damages mentioned in this case.
http://testlab.sit.fraunhofer.de/downloads/Publications/tuer...
> While his intentions were good, I think it was a bit
> naive of him to take upon himself the responsibility to
> make sure the flaws were fixed and conduct a test.
Given that his own personal information could have been exposed by this exploit, it's just as likely that he was acting out of self-preservation rather than merely due to feelings of personal responsibility. The only naive bit here is that he obliterated his plausible deniability via 1) not allowing more time between submitting the report and attempting the scan, and 2) not masking his IP behind seven proxies. While I doubt his intentions were malicious, it certainly seems like he got curious / excited from his first find and went looking for more.
With that being said, I definitely feel for the guy. I can certainly understand the intrigue and curiosity that would lead him to continue his exploration. It sucks that they decided to bring the hammer down so hard.
In the second scenario, you probably are hurting innocent people.
So if you have a moral compass, maybe you should stick to being an anonymous white hat.
That said, please don't think this is going to end your career. There are a lot of companies and startups that would love to have you for your kind of initiative. Not having a degree that you don't seem to need anyway will not be a sticking point with them. And the option of starting your own consultancy is a possibility - you already have some publicity that can help with initial gigs.
If you'd like to try your hand at a job, do check out ThoughtWorks (www.thoughtworks.com). We don't usually stand on ceremony or make a fuss about qualifications.
We're a little far away (Australia), but otherwise you'd get in the door for an interview at the very least.
Furthermore, any student faced with potential expulsion would have been entitled to a series of quasi-judicial hearings and assistance in preparing their defence. To expel someone for non-academic reasons from a publicly-funded institution (which Dawson is) should not be taken lightly and surely never in a fashion where the accused is not permitted to present their case.
This happened to me twice in college, minus the expulsion part. In the less interesting case the University sent around a form to be used in nominating student speakers for commencement. It included a drop down that was keyed off of student id. Student ids were regarded as private.
The school required everyone to either buy health insurance from them, or provide proof of insurance. They had a webapp where you could report this data. The login required your student id, name, and birth date (thanks Facebook). If you visited the app after using it, the form auto-populated with your health insurance information. I brought it to the attention of the University and they took down their nomination app in a matter of minutes.
In the more exciting incident, someone at Sungard called my university and asked them to have the campus police arrest me. (Edit: Quite boring, really http://seclists.org/bugtraq/2008/Jan/409)
Now they are.
What message does this send to other students at Dawson? Don't be curious; don't go out of your way to do a favour for the safety of your peers; keep your mouth shut and we'll hand you your degree.
Someone give him a scholarship to a legit university!
Unfortunately, if they were at all competent they wouldn't be teaching at a place like that. CS programs at minor universities are notoriously poor and staffed by whoever they could get, and it's not going to be anyone that can make decent pay working on current technology.
While I'm sure they wouldn't get the cream of the crop, there's reportedly an excess of under-employed & under-paid PhD's and post-docs in a number of STEM fields (again, specifically in academia).
I wasn't really trying to hide anything, so one of the IT guys must have seen me with shell access and reported me. My punishment was having my ethernet turned off in my dorm room (even though the incident occurred in a computer lab, while the dorm's ethernet wasn't even ready for use yet). I appealed the decision and met with the Dean, and she said I was considered a threat to the school, so I should be happy that my punishment wasn't worse.
Anyway, the rest of the year in the dorm was spent playing a cat-and-mouse game. I used my computer on my roommate's LAN port, so they ended up shutting off his ethernet as well. I felt bad about that, especially since they refused to give him internet access for the rest of the year. So I made a 50-foot ethernet cable and ran it through the bathroom into another person's room (two 2-person dorm rooms were connected by a common bathroom). That got shut off, so I bought a new LAN card (to get a new MAC address) and connected to another ethernet drop. I was able to get online for the rest of the year, but it sure left a sour taste in my mouth regarding my school.
Edit: I remember one close call... over a break (I was one of the few people in the dorm), water came out of the shower drain and flooded our rooms. I came back from spending the day out to see the Dean going into our room to inspect the damage, and I quickly had to hide my 50 foot cable that went through the bathroom.
Point being, if you can't hold up to a white hat scan, you're likely already hacked. Security is how you enforce your policy. But it's only white hat until data is compromised, and that's where the prosecution comes in.
In the meantime, until we can make this understood, we need to make the workaround understood: if you find a security flaw in a system you don't own, and you haven't been formally hired for the specific purpose of finding that flaw, ignore it and get on with your life; it's not your problem. Going out of your way to help people in normal circumstances is noble. Going out of your way to help people who will reward you with a knife in the back is a mistake. Don't make that mistake.
"Ethan Cox is a 28-year-old political organizer and writer from Montreal. He cut his political teeth accrediting the Dawson Student Union against ferocious opposition from the college administration and has worked as a union organizer for the Public Service Alliance of Canada."
ie: http://security.stackexchange.com/questions/14978/is-scannin...
Yes, even Google and Microsoft have bugs in their software. This isn't an excuse to bully people who tell you about the bugs in yours. The difference between you and Google is that Google pays people who find bugs in their software, especially serious security flaws, even if they aren't employed by Google, rather than threatening them with legal action.
I can understand Ahmed's youthful curiosity about whether the vulnerabilities that he identified had been fixed...But he had handed off the info to the Dawson College IT team and the ball was no longer in his court.
Running Acunetix against the college's/SkyTech's server(s) was a pretty dumb move. But hell, when you are in your early 20s, that's when you are supposed to make dumb mistakes.
I'm all for teaching moments, but this "One Strike And You Are Expelled" issue irks me.
Ultimately, this is about Edward Taza of Skytech Communications being sleazy and manipulative by threatening a scared, inexperienced 20 y/o college student with expensive legal action and implying the possibility of jail time unless he signed a non-disclosure agreement.
The EFF should probably take a look at this.
I would now never report a security flaw without an ironclad set of laws in place to protect the rights of white hats, whether we are licensed and approved security researchers or not.
If you are going to be a lying asshole and deny something, do yourself a favor and deny it outright. Don't try to imply that you were just having a friendly conversation about "legal consequences" right before you solicit someone to sign a non-disclosure agreement. No one in the world will believe you weren't trying to intimidate this poor kid into compliance.
No, it didn't, because he was blackmailed into the NDA. It's completely unenforceable. It was signed under duress and only benefited one party.
It's not like it magically binds your tongue. It just makes it easier to sue you if you violate it. The fact that the student could win in a suit is irrelevant. He couldn't afford the time and money to fight.
Before he signed the NDA, they would have had a harder time suing him. Perhaps he could have spent merely $10k and gotten it quickly dismissed. After, the company could make it arbitrarily expensive for him to fight it. If he could have eventually proved coercion (which I'm honestly skeptical of) then he would have been off the hook -- after years of stress and massive lawyer bills.
Sensationalist journalism is what it is. After a little bit of research, I discovered it's written by someone who used to be in Dawson's Student Union, so I guess he has a grudge against the administration.
"Ethan Cox is a 28-year-old political organizer and writer from Montreal. He cut his political teeth accrediting the Dawson Student Union against ferocious opposition from the college administration and has worked as a union organizer for the Public Service Alliance of Canada."
After all, there is private data insufficiently safeguarded. Some poor girl could end up getting stalked if the right kind of sleaze came across this.
The threats by the Skytech CEO Edouard Taza; the college not allowing the professors to hear the student before voting; his transcripts vandalized with zeroes so he cannot continue his studies elsewhere... What exactly is the relationship between Skytech and this college?
I've signed the petition to reinstate Hamed:
http://www.hamedhelped.com/petition/
Hamed, stick to your guns. You did the right thing.
Now why is this story different this time? I'm not too sure since I've left a couple years ago, but my guess would be that the college administrators have taken this decision. Knowing Edouard Taza, I doubt he would have pushed for this student to be expelled, since he clearly has a great future in software and could be one day employed at Skytech to fix even more security holes.
Edit: I hadn't finished reading the article; it seems the professors decided to kick the student out: "Following this meeting, the fifteen professors in the computer science department were asked to vote on whether to expel Mr. Al-Khabaz, and fourteen voted in favour." To me what this says is that their computer science department is full of idiots. Any good CS professor would have understood that Hamed didn't have any malicious intent.
He got kicked out of CEGEP. He'll survive unharmed. Sad that he thinks getting publicity is worth it though.
A friend of mine had just had a summer internship at a security firm and learned a trick or two. Looking at the HTML/JavaScript code of a page, he found an obvious entry point that gave access to anyone else's account provided you had their student number (i.e., it skipped the password step).

So my friend showed it to me and I suggested he tell the IT department. Obviously, the next thing we knew, he was accused of "hacking" and threatened by the IT department.

A couple of days later, we checked the website again and realized that a trivial "encryption" had been added, i.e., you have to reverse the student number or something like that. And, obviously, it was just on the client side.

A little bit pissed at being threatened for just being nice, we decided to take our revenge. We created a web page explaining the story (that we had found an entry point, that we had told IT, etc.) and then saying "Try it!" [<enter student number>], which logged you directly into that student's account.

We e-mailed that page to the school's main directors, suggesting a quick fix, and made sure to CC the IT department.

The day after, it was fixed, and we received a real "thanks" from the administration. I guess the trick is to contact a higher authority rather than going directly to the IT department.
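The "trivial encryption" in this story shows why client-side obfuscation adds no security. A minimal sketch, assuming (hypothetically, as described above) that the page's JavaScript just reverses the student number before submitting it:

```python
# Hypothetical sketch: if the client-side "encryption" is just
# reversing the student number, anyone who reads the page source
# can replicate the transform and still skip the password step.

def obfuscate(student_number: str) -> str:
    """Replicate the assumed client-side transform: reverse the digits."""
    return student_number[::-1]

print(obfuscate("1234567"))  # 7654321 -- the value the server expects
```

Since the transform runs in the attacker's own browser, it is fully visible and reversible; only a server-side check (i.e., actually requiring the password) closes the hole.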
http://www.dawsoncollege.qc.ca/home
Basically, they say Ahmed did more than just what is reported in the article, and they can't publicly say what he did - because that's private info about Ahmed that they're legally obliged to protect.
Now I'm not taking a position in favor of the college or in favor of Ahmed. I'm just saying it's not all black (or white). The National Post article is biased and we're missing some info. We should remember that before going crazy on the witch hunt.
Based on other stories of bureaucratic ignorance it's easy to jump on the administrative / cover-up blame train, but something about this doesn't quite mesh, and the fact that the story's only alibis are 1) Ahmed and 2) a generic students' rights organization makes it difficult to digest.
The faculty told me that there are other things that caused this and they are unable to discuss them with me.
I wish it were possible to get that information but I know them and I trust them.
We just don't know.
How long did it take Sony to fix their issues? Oh, right, it took someone to expose it publicly. Heh. It's unfortunate how broken some IT organizations are, and that they would rather kill the messenger than fix things.
You get the picture. In big companies it might take some time.
Essentially it is a very broken system that destroys itself.
It is like you need a manager to watch over a manager that watches over a manager.
It is funny to work at such companies; I got fired from one when I said everything I thought about them.
That's not to say that the expulsion still doesn't reek of BS, but Ahmed's hands are not completely clean here.
Again, the school is on record as giving him kudos for reporting the error - it's perfectly reasonable to assume that someone will not launch offensive penetration testing tools at your site, without notice or permission, just because they have reported the bug in the past.
Besides, he could have tested the bug without the pentest software. Just because someone points out a crack in your window doesn't give them carte blanche to try breaking it after you said you fixed it.
Let's assume it was not SQLi but an application authorization logic bug, i.e., changing a parameter passed by the browser allowed access to the whole record set. He did the right thing and told the vendor -- but after the fact he ran a tool that probably simulated SQLi on every damn parameter! Like smashing a car window after telling the owner he has left it unlocked.
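A minimal sketch of the kind of authorization logic bug described above (all names and data are hypothetical): the server trusts a record ID supplied by the browser without checking that it belongs to the logged-in user.

```python
# Hypothetical sketch of the parameter-tampering bug described above:
# the endpoint trusts the student_id sent by the browser.

RECORDS = {
    "1111": {"name": "Alice", "ssn": "xxx-xx-1111"},
    "2222": {"name": "Bob",   "ssn": "xxx-xx-2222"},
}

def get_record_vulnerable(logged_in_id: str, requested_id: str) -> dict:
    # BUG: no check that requested_id belongs to the logged-in user,
    # so iterating over IDs exposes the whole record set.
    return RECORDS[requested_id]

def get_record_fixed(logged_in_id: str, requested_id: str) -> dict:
    # FIX: enforce ownership server-side before returning anything.
    if requested_id != logged_in_id:
        raise PermissionError("not your record")
    return RECORDS[requested_id]

# Alice (1111) tampers with the parameter to fetch Bob's record:
print(get_record_vulnerable("1111", "2222")["name"])  # -> Bob
```

Note that no "injection" is needed at all: the request is perfectly well-formed, it just asks for data the user shouldn't see.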
Even a brain-dead sysadmin would notice it in the logs, and likely whatever SIEM they run would fire a high-priority alert.
He did this without authorization, and the company did the right thing here. In this post-aaronsw world we can't just assume that every n00b clown whitehat hacker is totally innocent of all crimes even if done with the best intentions. People need to take responsibility for their actions. An ignorant click can be just as criminally negligent as stabbing a dude in the face.
Stop it.
Stop. It.
Based on the article, your life probably doesn't feel so good right now. Sorry to see a bright person in such a situation.
Give me a ring if you are looking for an internship, job or start-up experience in Montreal. We are in town (walking distance from Dawson actually). By the nature of our business, we also have good connections with academia if that can help (www.tandemlaunch.com).
My login is my name so you can reach me at [firstname].[lastname]@tandemlaunch.com
Just go to the school paper or town paper and let them report it.
He did great up to the point where he tried to pen-test after reporting it. I understand the intellectual curiosity to see if people are doing their jobs, and it's too easy to armchair-quarterback, but if you bring attention to yourself by reporting a problem you can be sure they will watch you, and not necessarily the problem.
The best action to take when you find a security flaw is to do nothing. Let someone evil abuse the flaw and make the guys miserable enough to realize the importance of responsible disclosure.
Without that, the guy's ego is going to take it as "How dare he point out a problem in my/our work" and not "Thanks for saving me before somebody could screw me over."
Remind me to never, ever use Omnivox, or any Skytech software, ever.
What a clusterfuck. Since when do CEGEPs expel students for running security checks?
I've seen things like this happen before. You find a bug, you report it, they tell you "oh we're getting on it immediately". Some time goes by and you think, hey, did they fix it? You look, discover "nope", think "man I bet those guys would fix it if I lit a fire under their ass" and try and use the bug to deface the site, or something.
This is logic that makes sense to a 20-year-old (speaking as a former 20-year-old...). I've seen that happen before. The article doesn't say this, but perhaps, reading between the lines, the second attempt did not have a pure motivation behind it...
We brought it to the attention of the head of the IT Department by email. Later that week, the head visited our morning class to discuss this with us.
He discussed the issue with the class and actually expressed his appreciation for students like us who react promptly and responsibly to such issues.
I'd give it a shot if they fired their president, but that's an unrealistic expectation.
On a serious note, can't he appeal to some education ministry outside the college?
To me this stinks of the "closed mind" problem.
"included the clause that urges you to get advice from a lawyer"
I can't recall being offered a NDA with this language.
It's really dumb that we're this far into the internet age and companies and organizations can still play it so fast and loose with security and personal information. It's irresponsible and negligent.
The one in my country accepts anonymous exploit reports for state-run software. Not sure what they do with them though :)
He is a student; how can he have a "professional conduct issue"?
It's all a flow chart if you make a mistake in school, no matter if it's tech stuff like this or anything really. We live in a world of corporations, lawsuits and lawyers, insurance and liability; there's no room for grey area anywhere in there. Where's the incentive for the school to care? They already got your money.
The worst part for the students is that they can have all sorts of good feelings built up towards their professors and classes. Then the administration comes in and manages to sour all those feelings. Those same professors, who may think the world of you, can't do a thing, because at the end of the day it's C.Y.A., and you're all alone.
College kids need to get educated about how college justice works if you screw up; it's always too late when they do learn... Let's spend money on athletic complexes instead, right?
Dawson statement on the article: The reasons cited in the National Post article for which the student was expelled are inaccurate. The process which leads to expulsion includes a step in which a student is issued an advisory to cease and desist the activities for which he or she is being sanctioned, particularly in the area of professional code of conduct.
And saying crowbar is intentionally misleading people into thinking that damage is somehow being done during the check.
The longer people are punished for helping, the worse our "cyber security" will get.
1. Exploit discovery.
2. Automated service attack.
From the information given, it seems Al-Khabaz did exactly what was expected of him for the first, or better.
But why, if he was simply checking for the existing vulnerability after reporting the first, did he launch an automated attack?
I suspect Dawson College has sound reasons to treat him the way they did for both instances.
The Streisand effect has struck again :)
http://www.cbc.ca/news/canada/montreal/story/2013/01/21/mont...
I would have given him a second warning.
What did this kid expect?
Hackers are the new Sicilians and blacks.
Don't snitch.
Snitches get punished both by:
The person being snitched upon
The person who is being snitched to.
This article and many of the other comments here support this view.
For it to be a SQL injection, he'd have to have been looking for vulnerabilities.
API not validating correctly. If this is true, it will take a long time to fix.
[he was] working on a mobile app to allow students easier access to their college account [.] -> Did he have authorisation? -> From whom did he have authorisation? -> Omnivox does not seem to have a public API.
“I saw a flaw which left the personal information of thousands of students, including myself, vulnerable,” "I felt I had a moral duty to bring it to the attention of the college and help to fix it, which I did. I could have easily hidden my identity behind a proxy. I chose not to because I didn’t think I was doing anything wrong.” -> Did he try to fix it, or only bring it to the attention of the college? -> Did he inform the college he tried/would try to fix the flaw? -> Did he try to fix the flaw after or before meeting with the college?
"Mr. Paradis congratulated Mr. Al-Khabaz and colleague Ovidiu Mija for their work and promised that he and Skytech, the makers of Omnivox, would fix the problem immediately" -> Mr. Paradis is Dawson's Director of Information Services and Technology -> I mention this only because it is not clear from the article whether he works at the college or at Skytech
"Mr. Al-Khabaz decided to run a software program called Acunetix" "to ensure that the issues he and Mija had identified had been corrected" -> Did they use Acunetix the first time? -> If yes, did the college know? Did Skytech notice? -> If not, why now? They found the flaw without Acunetix
"Taza explained that he was quite pleased with the work the two students did identifying problems, but the testing software Mr. Al-Khabaz ran to verify the system was fixed crossed a line."
The administration of Dawson College clearly saw things differently, proceeding to expel Mr. Al-Khabaz for a “serious professional conduct issue.
Following this meeting, the fifteen professors in the computer science department were asked to vote on whether to expel Mr. Al-Khabaz, and fourteen voted in favour. Mr. Al-Khabaz argues that the process was flawed because he was never given a chance to explain his side of the story to the faculty -> Were there other incidents that could have influenced the judgment? -> Colleges rarely want to expel students who ace all their courses. Especially in CS, with its high rate of failure.
-> According to the college : The process which leads to expulsion includes a step in which a student is issued an advisory to cease and desist the activities for which he or she is being sanctioned
-> This, along with the "He said that this was the second time they had seen me in their logs", tends to indicate he probably ran the test multiple times. Or, the first time he found the flaw, Skytech took him for an attacker and the college warned him to stop development on his application. That would indicate he had no authorisation to do so.
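Since the thread keeps debating whether this was SQL injection or just an authorization bug, here is a minimal sketch of what actual SQL injection looks like, using Python's built-in sqlite3 module (the table and data are hypothetical):

```python
import sqlite3

# Hypothetical schema to illustrate why string-built queries are injectable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (number TEXT, name TEXT)")
conn.execute("INSERT INTO students VALUES ('1111', 'Alice'), ('2222', 'Bob')")

user_input = "1111' OR '1'='1"  # attacker-controlled value

# Vulnerable: input concatenated directly into the SQL string.
rows = conn.execute(
    "SELECT name FROM students WHERE number = '" + user_input + "'"
).fetchall()
print(len(rows))  # -> 2 (the injected OR clause matched every row)

# Safe: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT name FROM students WHERE number = ?", (user_input,)
).fetchall()
print(len(rows))  # -> 0 (no student has that literal number)
```

The distinction matters for the story: an authorization bug is triggered by a well-formed request, while SQLi requires deliberately crafted input, which is presumably what an automated scanner would be firing at every parameter.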
This is illegal! Most people seem to be missing this.
If you're going to break the law at your own University at least cover your tracks.
Don't annoy the crap out of them(Rightly or wrongly) then go on to black hat them.
Throw the person who found the software bug into the lake, if they float, then they were a witch, and deserve to die.
And people wonder why security is so poor and Chinese hackers find it so easy to hack into all our stuff: because America punishes people who focus on bulletproof, secure code.
I guess we'll need to hire some special interests to pay-off the news networks cnn/fox/msnbc/etc to add the "Hackers are not witches" to their narratives. We would probably need bribes on the order of billions.