It's usually designers, people who can raise money, and generalists who can stitch together APIs. It's not generally platform, DB, or security-minded people. The proliferation of things like Vercel and Supabase has exacerbated this.
So you get people deploying API keys client side and databases without RLS. Or deploying service keys client side when they should be using anon keys. I mean really basic stuff.
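The anon/service distinction above is checkable mechanically: Supabase-style API keys are JWTs whose payload carries a `role` claim, so a build step can refuse to bundle anything but an anon key into client code. A minimal sketch (the helper names are mine, and the check deliberately skips signature verification since it only inspects the claim):

```python
import base64
import json


def jwt_role(key: str) -> str:
    """Return the 'role' claim from a JWT-shaped key (no signature check)."""
    payload_b64 = key.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get("role", "")


def safe_for_client(key: str) -> bool:
    # Only the anon key may ever ship in a client-side bundle;
    # a service_role key bypasses RLS entirely.
    return jwt_role(key) == "anon"
```

Wiring something like this into CI as a pre-deploy assertion is cheap insurance against exactly the mistake described here.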
Claude Code will do this, and will actively encourage bypassing any verification before pushing to prod. I saw that firsthand with its attempted handling of a major CIAM provider, and then Vercel using whatever OAuth provider in the ol' transitive breach.
That is common knowledge now, right? Or am I just smoking yellow tops
In my personal assessment, some individuals within leadership at this startup were highly risk-tolerant. I suspect that had those same individuals led companies not subject to HIPAA, their security practices would have been as lax and irresponsible as what's being described as the norm in this thread.
However, because of HIPAA, security practices at this company were fair-to-middling. There were certainly weak areas and mindless box-checking a la SOC 2, but it wasn't a complete shitshow. Those of us in the engineering department who cared were able to raise concerns without having them dismissed, and were generally allowed to do things the right way.
My takeaway: when there are actual severe penalties for privacy breaches, startups may not be so cavalier with your data.
I don't have any concrete recommendations other than that one really good senior+ engineer is more important than a legion of juniors early on. Basic security doesn't require an extra hire; it requires somebody experienced enough to build your product right.
I'll bet at some point someone contacted this company and said "hey, I'm being shown the wrong course" or "I can't access the material I just uploaded."
I've never seen anyone who got the basics right compromised by some esoteric security issue. I'm sure it happens, and it will probably happen more now that attacks can be automated, but it's usually a case of a system being left wide open.
Let me guess, though: they're SOC 2 and ISO compliant, right?
Tried a bunch of open source pentesters, including Strix (though we never managed to get Strix to actually complete a run). This project called Shannon was the only one we managed to get working reliably, and it definitely smoked the output of one of the $10K pentests we did. (We had only discovered Shannon after we got the pentest firm's report, so it gave us a good baseline comparison.) Caveat: this was white box and our pentest firm did grey box, but nevertheless I was still very unimpressed by what we got from the firm. $50 vs. $10K is not even a comparison, lol, with far, far better results, and it sent our CTO into near-heart-attack mode.
I think the days of pentesting firms are over, especially with mythos/5.5-cyber-like capabilities coming into play. Very exciting times ahead!
The vulnerability itself appears to be something anyone with mitmproxy would have spotted within minutes of looking at the platform; apparently, rotating object IDs worked everywhere in the app, and there was no meaningful authz.
It's interesting if AI systems can "spot" these, in the sense of autonomously exercising the application and "understanding" obvious failed authz check patterns. But it's a "hm, ok, sure" kind of interesting.
https://www.wiz.io/blog/azure-active-directory-bing-misconfi...
1. I didn't see mention of a bug bounty program giving limited authorization. How do independent researchers do this with legal safety? Especially when DoD is involved?
2. If a researcher discovered a vulnerability at a DoD contractor, and the contractor didn't seem to be resolving the problem, is there a DoD contact point that would be effective and safe for the researcher to report it?
Well that’s pretty damning.
I've tried engaging and replying to them, and it inevitably turns into: "Yeah, we don't actually have the vulnerability, but you are totally vulnerable, just let us do a security audit for you."
I have a pre-written reply for these kinds of messages now.
I get tons of these messages too, and the ones that do include details are the kind of junk you get from free "website vulnerability scanners": a bunch of garbage that means nothing. "Missing headers" for things I didn't set on purpose, "information disclosure vulnerabilities" for things that are intentionally there, etc. You can put google.com into these things and get dozens of results.
How do people find these vulnerabilities within the immense scope of the whole internet? Are they going around with some kind of generic API scanner that discovers APIs?
The CEO seems more interested in insulting people than securing his company’s product.
Yes. I know Andreessen Horowitz, and I don't know a16z. Reading the title, I thought it would be about the cryptographic serialization specification; turns out I was mixing it up with ASN.1.
> Their website is literally a16z.com
Now I know. Before this, if pressed, I would have guessed that they probably had a website. If you had twisted my arm, my guess would have been andersenhorovitz.com (yup, with the typos; I learned the correct spelling today from your comment).
> exceedingly relevant for the HN audience
We contain multitudes.