I'm still sad that the business didn't work out :( I thought there was going to be a follow-up blog post on where they ended up getting acqui-hired?
What did happen is that we're all getting new full-time jobs elsewhere, since we aren't making enough money to pay ourselves. Most of us actually haven't started our new jobs yet (we're taking a little break), but we'll have a blog post when it happens in a couple of weeks.
I'm pretty ambivalent about these "we got a security review, they said we're good" updates, even when they include the actual contents of the report (the final contents of the reports you actually see are almost always negotiated between the client and the testers).
It is a real problem for the industry that there's no clarity to be had about what it means to have had an assessment, what the different assessors' capabilities are, how engagements are scoped, &c. I tend to mistrust organizations that use audit results to claim a clean bill of health, or anything like that, but more and more projects do that now, so I don't know how valuable that rule of thumb will remain.
I'm not sure this blanket statement -- probably derived from the world of SaaS -- is necessarily helpful in the context of Sandstorm. Keep in mind that Sandstorm is meant to host internal-facing services. One doesn't normally expect that an external attacker will have authority to create a full user account and install their own apps, which is necessary to exploit this particular vulnerability. (It's actually the app, not Sandstorm itself, making the requests; Sandstorm failed to prevent apps from making requests to the private network.)
On Sandstorm Oasis, the service we run which does allow arbitrary visitors to create full user accounts (possibly the only Sandstorm server worldwide that does this), the SSRF did not provide access to anything sensitive.
I'm of course not saying it wasn't a problem -- I described the severity as "high" in the post.
> I'm pretty ambivalent about these "we got a security review, they said we're good" updates
To be clear, I never made any such claim. The post reports facts, which is that a security review occurred, and some pretty tricky-to-find bugs were found and fixed. I'm sure there are other bugs to be found.
I'd very much like to receive further reviews from other parties.
If the goal is not to run internet-facing services, why is the project so focused on security? In the enterprise there are already F5 appliances, NIDS, etc., so nobody can get in. Is Sandstorm trying to prevent employees from hacking the company or something?
This is not really obvious from any of your marketing copy or documentation, nor would it be a realistic expectation if it were. I think you need to secure the product as if your users don't know or understand your intentions.
It's usually not. What can you do via HTTP inside the network? Check out the SSRF bible.
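To make the concern concrete: the classic first move in an SSRF is coaxing the server into fetching internal-only addresses (loopback admin panels, 10.0.0.0/8 hosts, cloud metadata at 169.254.169.254). A minimal sketch of the kind of guard a server-side fetcher might apply -- this is a hypothetical illustration, not Sandstorm's actual mitigation, and the function name is my own invention:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def resolves_to_private_network(url: str) -> bool:
    """Return True if the URL's host resolves to a private, loopback,
    or link-local address (e.g. 10.0.0.5, 127.0.0.1, 169.254.169.254)."""
    host = urlparse(url).hostname
    if host is None:
        return True  # refuse anything we can't parse
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return True
    return False

print(resolves_to_private_network("http://127.0.0.1/admin"))      # True
print(resolves_to_private_network("http://169.254.169.254/"))     # True (link-local metadata range)
```

Note that a resolve-then-check guard like this is still vulnerable to DNS rebinding (the name can resolve differently when the request is actually made), which is one reason enforcing this at the network layer, as Sandstorm does by mediating apps' network access, is stronger than an in-process check.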
Fundamentally, I believe the security consulting industry is due for a radical shift, probably instigated and led by Hackerone and Bugcrowd. Unfortunately there is a lot of inefficiency in the industry that allows consulting firms to exist as they do now.
For the most part, my clients come to me for an assessment because they have a measurable business need: lucrative customer A is demanding an external third-party assessment. This is the primary use case with which I feel comfortable. My time at Accuvant (now Optiv) left me deeply uncomfortable with the rote way that security assessments could be nosebleed-expensive for frankly questionable work (e.g. $10k/week/assessment for reviewing brochure websites for large companies; for the most part the employees knew what they were doing, it was just overpriced and unnecessary).
In a lot of ways security assessments are inflated in price because they're somewhat like insurance. Truly exceptional vulnerability researchers could and probably should be earning half a million to a million a year. Watching them work is a beautiful blend of art and science. They are underpaid. On the other hand, merely competent or outright mediocre "penetration testers" are overpaid by way of de facto rent collecting.
If I were to run a productized software firm now and no particular customer demanded a third-party assessment, I'd honestly never commission one. Instead, I'd open a bug bounty program and dial the rewards up, then invite specific people to come find vulnerabilities (people like Frans Rosen of Detectify, Jack Witton of Facebook, Egor Homakov, a competitor of mine and in this very thread ;), or Bitquark at Tesla -- not sure of his real name off the top of my head).
I have utter confidence that for essentially everything but cryptanalysis, a generously priced bug bounty is plainly superior to any given firm's commissioned assessment in raw results. It's not quite as turnkey or comforting, but it's effective. Hackerone and Bugcrowd even field reports for managed programs these days. I believe this wholeheartedly enough that I would (and have in the past!) advise potential new clients in this direction, against the interests of my firm, if they didn't need the assessment for external third-party or regulatory compliance.
Once they really perfect the researcher signal/noise rating system, Hackerone and Bugcrowd are going to take the top 100-1000 researchers on either platform and wrap their current activities into a neat layer of turnkey abstraction, call it a formal assessment and legitimately disrupt the pricing of the security consulting industry.
With that said, I don't think it's really true that a capability-based programming language would have avoided these problems.
1. For the Nodemailer problem, no ambient authority was used to split the email into two addresses. A capability-based implementation could have done the same thing. This is more of a langsec issue in that the API was a bit foot-shooty.
2. If the zip implementation were completely rewritten in a capability language, then sure, this vulnerability could have been avoided. It also could have been avoided if zip accepted NUL-delimited filenames rather than newline-delimited. It's not really practical to rewrite the world in another language, unfortunately.
3. SSRF can be avoided using capabilities (forcing the attacker to present a capability, not just an address, to any third-party server they wish to access). Ironically, though, this is a networking issue, not an in-process issue, so what we really need is stricter application of capabilities at the network layer, rather than a capability programming language. Sandstorm is actually willing to push capabilities at the network layer. The trouble is, the network is often used to talk to the rest of the world, which isn't usually capability-based. Hence, we have to make compromises.
4. The Linux kernel bug would maybe have been avoided if the kernel were written in a capability language, but that's a pretty enormous undertaking. Alternatively, it could have been avoided if we forced all our apps to be written in capability languages, but that would mean that no existing codebase could be ported to Sandstorm, which is far too large a cost. That said, I would like to have some special support for apps written in capability languages someday, e.g. to let the user know that this app is extra-safe.
Put simply, going all-capabilities is just not practical today, and we have to make compromises in order to make meaningful progress.
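Point 2 above is worth illustrating, since it applies to any tool that frames lists of filenames. POSIX filenames may contain any byte except NUL and '/', so a newline-delimited list is ambiguous: one crafted name parses as two entries, while NUL framing cannot be injected. A small sketch of the general hazard (the helper names are mine, not from the zip implementation in question):

```python
# Newline-delimited framing: ambiguous, because '\n' is a legal filename byte.
def parse_newline_list(data: bytes) -> list[bytes]:
    return data.split(b"\n")

# NUL-delimited framing: unambiguous, because '\0' can never appear in a filename.
def parse_nul_list(data: bytes) -> list[bytes]:
    return data.split(b"\0")

evil_name = b"harmless.txt\n/etc/passwd"  # a single, perfectly legal filename

# With newline framing, the attacker's one file smuggles in a second path:
print(parse_newline_list(evil_name))  # [b'harmless.txt', b'/etc/passwd']

# With NUL framing, the same name stays one (suspicious-looking) entry:
print(parse_nul_list(evil_name))      # [b'harmless.txt\n/etc/passwd']
```

This is the same reason tools like `find -print0` / `xargs -0` exist: choosing a delimiter the data can never contain removes the injection channel entirely, no rewrite in a safer language required.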
I note that Cap'n Proto did receive some scrutiny from security guru (and personal friend) Ben Laurie in the past:
https://capnproto.org/news/2015-03-02-security-advisory-and-...
My next job is at a company that is a heavy user of Cap'n Proto, so expect more progress on this front going forward.