My partner and I plan to launch a healthcare-related web app in the coming months. We'll be hosting on AWS, with the database on an encrypted EBS volume, all connections over HTTPS, and two-factor authentication by SMS. We're mostly using the MEAN stack.
I'm not technical, so I'd appreciate some guidance on best security practices that are relevant and feasible for a startup. I doubt we'll have anything financially useful to steal, but my main concern is avoiding leaks of private patient data, of which we might store a limited amount.
1. Is there a checklist/best practices guide somewhere? I'd like to avoid making obvious mistakes that would be embarrassing in retrospect, though I know it's hard to defend against someone skilled and determined.
2. Any experience with hiring a firm (like Matasano) for penetration testing? Rough estimate of cost? When is the right time to consider this?
3. How and when to start a bug bounty program? Is there a standard way to determine severity and payouts?
Thank you!
Lock out the root AWS keys as much as you can (ours require an MFA token that's stored in a safe) and only use IAM users with restricted permissions for day-to-day operations.
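To make the "restricted permissions" point concrete, here's a minimal sketch of an IAM policy that scopes a day-to-day user down to one task. The bucket name is hypothetical; your actual policy should be tailored per role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-app-uploads/*"
    }
  ]
}
```

The point is that a user with this policy can touch only that bucket; if their keys leak, the blast radius is one bucket, not your whole account.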
Everything should have an audit trail, preferably with all the logs shipped off the servers to a centralized store (that way, if a server is compromised, the attacker can't also edit or delete the logs).
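One lightweight way to get logs off the box is rsyslog forwarding. A hedged sketch (the collector hostname is made up, and in production you'd want TLS on the transport):

```
# /etc/rsyslog.d/50-forward.conf
# Forward everything to a central log collector over TCP ("@@" = TCP, "@" = UDP)
*.* @@logs.example.internal:514
```

Hosted alternatives (CloudWatch Logs, Papertrail, etc.) accomplish the same thing with less setup.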
Script all your boxes through config management so that you can handle updates/security patches in a uniform manner and quickly.
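As one example of what "script all your boxes" can look like, here's a minimal Ansible playbook (one of several config management options) that applies pending updates uniformly across a fleet of Debian/Ubuntu hosts:

```yaml
# patch.yml -- run with: ansible-playbook -i inventory patch.yml
- hosts: all
  become: true
  tasks:
    - name: Refresh package index and apply all pending updates
      apt:
        update_cache: true
        upgrade: dist
```

The value is uniformity: when a critical CVE drops, one command patches every box the same way instead of SSHing around by hand.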
Restrict who has access to root/DB in production. When you grant access keep an audit trail of why they have access and revoke it if it's no longer necessary. Have a good development environment setup so people don't develop the habit of developing against production.
Pentest + bug bounties are good. Once you get to a certain point you'll probably also need to have a general security/HIPAA audit as well.
I'd recommend encrypting from the boot volume up and not just your EBS volumes. Otherwise you have to worry about things like PHI in logs, core dumps, etc. being put onto unencrypted storage.
In terms of hardware/OS: turn off everything incoming except for HTTPS, SSH, and ping (optional). Make sure everyone uses SSH keys (no passwords).
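The "keys only, no passwords" part comes down to a few sshd directives. A sketch of the relevant lines (restart sshd after editing, and confirm your key works from a second session before logging out):

```
# /etc/ssh/sshd_config -- key-based authentication only
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
```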
In terms of programming, getting security roles right is tricky at first, so be careful in specifying how user roles and permissions work on your site.
Create a staging server with test data that mimics your production site nearly exactly. Any penetration-testing company will ask you to sign a waiver ("this won't hurt anything") before smashing up your server.
Another place to focus is how backups are copied, who can access the data, etc.
This is a really big topic. When you apply for insurance, your insurer will have an excellent checklist.
OP mentioned using AWS, in which case Amazon's built-in "Security Groups" feature can be used for restricting access to the instance by port or possibly by protocol. Naturally, however, one would not want any dangerous outbound traffic, such as unencrypted/unauthenticated automatic updates, so there also is merit in controlling which services and programs are running.
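To illustrate the Security Groups point, here's a hedged CloudFormation sketch of an inbound policy matching the "HTTPS, SSH, ping only" advice above (the resource name and the SSH CIDR range are placeholders; ideally SSH is restricted to a known office or VPN range rather than 0.0.0.0/0):

```yaml
WebSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow only HTTPS, SSH, and ICMP echo inbound
    SecurityGroupIngress:
      - { IpProtocol: tcp,  FromPort: 443, ToPort: 443, CidrIp: 0.0.0.0/0 }
      - { IpProtocol: tcp,  FromPort: 22,  ToPort: 22,  CidrIp: 203.0.113.0/24 }  # example office range
      - { IpProtocol: icmp, FromPort: 8,   ToPort: -1,  CidrIp: 0.0.0.0/0 }       # ICMP type 8 = echo request
```

Security groups are default-deny for inbound traffic, so anything not listed is already blocked.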
Here are a couple of resources that I tend to hand out to startups that we do work for at Matasano. No charge :-)
Not trying to be a salesperson, but I feel like most startups get more value out of sitting down with a security consultant for a couple of days and talking about architecture and dev processes than they do out of a full penetration test. Like the presentations say, the big risk in the early days is lack of interest, not security. I feel like a startup's big security concern is doing something that's going to force them to rewrite everything later on.
http://chris.improbable.org/2009/9/24/indie-software-securit... (old presentation from tqbf. We might one day put it back on our blog. Don't hold your breath. Anyway, the slides and presentation aren't great IMO, but the blog post is!)
http://firstround.com/review/Evernotes-CTO-on-Your-Biggest-S...
Most HIPAA recommendations seem to be a good idea to do anyway.
[1] http://www.hhs.gov/ocr/privacy/hipaa/faq/securityrule/2001.h...
I sent you an email, and I'd be happy to answer any of your questions or help you out (no charge). Down the line, if you decide you'd like to explore a security audit, we can help with that too.
For now, I'll answer your questions:
1. I wrote a basic checklist for startups looking to improve their security. You can find it here: http://breakingbits.net/2015/02/28/security-for-startups/. It's not comprehensive, but I tried to cover the most common issues I saw with startups. Ryan McGeehan also wrote a wonderful checklist for incident response after something does happen. They're two sides of the same coin - preparation and damage control. Check that out here: https://medium.com/@magoo/security-breach-101-b0f7897c027c. Your specific company will have more to do for each of these based on the context of your team, product and size.
2. It's difficult to give a good estimate of cost, and I'm not trying to be a salesman here. It depends on length and scope of work. Are we doing code review or just blackbox testing? Five days or ten? The entire application attack surface or just a few critical pieces of functionality? Budget about $10,000 for a white label pen test lasting a week, but it could be more or less. I'm summoning 'tptacek here in the hopes he might have more nuance to contribute to this answer.
3. How and when to begin a bug bounty program is similarly variable. I have a lot of experience working with Series A companies, and bug bounties are a personal specialty of mine (I have research directly on the subject). In my opinion, you should not have bug bounties until you have at least one full-time security engineer. You don't want to pay out a bounty for someone reporting that your cookies lack an HttpOnly flag. On the other hand, server-side request forgery usually warrants a payment. If you have developers who could tell you the difference between those two, in terms of both definition and severity, without looking it up, you're okay to have a bug bounty. If not, don't pay people for their reports just yet. (This is personal philosophy from my experience - many people will have different opinions.)
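Since the OP is on the MEAN stack, the HttpOnly point is cheap to fix preemptively. A minimal sketch (the function name is mine, not from any framework) of building a session Set-Cookie header with the flags a bounty reporter would otherwise flag as missing:

```javascript
// Build a Set-Cookie header value for a session cookie with hardening flags.
function sessionCookie(name, value) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    'HttpOnly', // not readable from client-side JS, so XSS can't steal it
    'Secure',   // only sent over HTTPS
    'Path=/'
  ].join('; ');
}

// In a Node handler you'd use it roughly like:
//   res.setHeader('Set-Cookie', sessionCookie('sid', token));
console.log(sessionCookie('sid', 'abc123'));
```

If you use Express's `res.cookie`, the equivalent is passing `{ httpOnly: true, secure: true }` in the options object.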
On the other hand, always have a responsible disclosure program. It is perfectly okay to not reward people for reporting security vulnerabilities. I'll repeat this: it is perfectly reasonable to not give out payments for reports. Don't let financial rewards enter the program until you have reached a certain level of internal security maturity.
And no, there is no standard way to determine severity and payments. Google and Facebook pay a lot for bugs that some companies wouldn't even accept. It totally depends. That said, as a general guideline, if a reported bug 1. is valid, 2. compromises users, it's worth something.
Like I said, I sent you an email. Let me know if you have any questions, I'd be happy to help you with anything you need to know.