If most engineers just took a second to read the ones that were directly pertinent to their projects and tried to be cognisant of some mitigations, I'd find substantially fewer low-hanging-fruit vulnerabilities in the first review pass. Doing so actually makes my job significantly more difficult, and forces me to dig deeper - which is a good thing. Instead of writing up some input validation spiel for the 100th time, I can spend time searching for more complex bugs, writing protocol fuzzers, and doing real analysis in the time I have for the review.
Those on the security side often only think "it's really not that difficult to make it secure, just follow these guidelines and you'll be fine", but they don't realise how many other issues the developers are juggling.
EDIT: Security needs to be encouraged from the top down. If management is on board with following secure practices, then they also need to understand that things might take a little longer to complete.
I think it's unfair to assume that I'm scolding the teams I work with by trying to educate them on security issues - the fact that that is your takeaway from my last post says a lot about another problem with security organisations: an adversarial relationship often exists between software engineering and product security. I moved into security from product engineering. I try my best to be the ally of the engineers, and educate them on bug classes so they can learn to deal with issues throughout the software development lifecycle, from early threat modelling and design issues to creating implementations that minimize weaknesses.
A helpful solution, as the child poster says, is to have frameworks that stop you from doing the dumb stuff, but smaller organisations sometimes don't have that luxury, and even then you can shoot yourself in the foot with your frameworks. (Using front-end frameworks as an example, I saw way too many extraneous uses of dangerouslySetInnerHTML and DomSanitizer bypasses when I was a consultant.)
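To illustrate why those escape hatches are dangerous: a framework's default text binding escapes markup for you, while APIs like dangerouslySetInnerHTML skip that step. Here's a minimal sketch - the `escapeHtml` helper is an illustrative stand-in for what the framework normally does, not any library's actual implementation:

```typescript
// Hypothetical helper standing in for a framework's default text escaping.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A classic XSS payload a user might submit as their "name".
const userInput = '<img src=x onerror="alert(1)">';

// Unsafe: this is what raw HTML injection amounts to. The payload
// survives intact and would execute if inserted into the DOM.
const unsafeHtml = `<div>${userInput}</div>`;

// Safe: the payload is rendered as inert text, not markup.
const safeHtml = `<div>${escapeHtml(userInput)}</div>`;
```

Every use of a raw-HTML API is a spot where that escaping has been deliberately turned off, which is why reviewers flag them even when the current input happens to be safe.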
> scanners that are easy to use, helpful, and don't have lots of false positives.
I've written a fair amount of automation tooling to glue together COTS/OSS SAST/DAST applications. Most of even the better commercial tools still yield insane numbers of false positives and require human interaction to confirm that a bug is actually exploitable. Common web security tools such as Burp Suite Pro's scanner are effectively useless for most modern web apps. Some languages and architectures are better than others, some companies' internal rulesets are better than others, and it's a struggle to get something that works for even the majority case. Some of the most mature technology companies I've worked at are still trying to build these tools across their infrastructure; they have better success in some languages/platforms than others, and they don't have the resources to keep up with the new languages, platforms, and frameworks engineers want to use for this project or that. It's an uphill battle, and education is still just as important as tooling.
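For a flavour of what that glue code looks like, here's a hypothetical sketch: normalise findings from multiple scanners into one shape and collapse duplicates, so a human only triages each candidate once. The `Finding` shape and the tool names are invented for illustration, not any specific scanner's schema:

```typescript
// Invented, simplified finding shape for illustration.
interface Finding {
  tool: string;
  ruleId: string;
  file: string;
  line: number;
  severity: "low" | "medium" | "high";
}

// Two tools flagging the same rule at the same location is one issue,
// so dedupe on (rule, file, line) before handing anything to a human.
function dedupeFindings(findings: Finding[]): Finding[] {
  const seen = new Set<string>();
  const result: Finding[] = [];
  for (const f of findings) {
    const key = `${f.ruleId}:${f.file}:${f.line}`;
    if (!seen.has(key)) {
      seen.add(key);
      result.push(f);
    }
  }
  return result;
}

const raw: Finding[] = [
  { tool: "scannerA", ruleId: "sql-injection", file: "api.ts", line: 42, severity: "high" },
  { tool: "scannerB", ruleId: "sql-injection", file: "api.ts", line: 42, severity: "high" },
  { tool: "scannerA", ruleId: "xss", file: "view.ts", line: 7, severity: "medium" },
];

const triageQueue = dedupeFindings(raw);
```

Even with dedup and severity filtering, every item left in the queue still needs a human to decide whether it's exploitable - the tooling only shrinks the pile.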
Besides myself, I've never come across another developer in an InfoSec team. Their backgrounds range from networking, desktop jockeying, manual sysadmin, audit, script-running pentesters and managerial, but never really developers, nor anyone who has been setting up automation for an operations function.
I think this is partly because I contract, and the sort of orgs that bring me on are already struggling, but I think it's also just a common theme that InfoSec teams don't build, and so people that do don't want to be there.
This is what leads to a lot of the things we don't like: demands to follow processes that don't really help, buying random products and demanding you integrate them, etc. Their lack of knowledge in product development simply leads to a lot of bad habits.
Much like you suggest, my job is too easy really. The builders also flee these orgs, because dealing with bullshit bureaucracy isn't fun, so with what's left all I can really do is suggest: use a framework that deals with security considerations, and don't deviate; follow guidance such as the CIS Benchmarks; use scanning tools and look into their findings; basically basic stuff, then come back to me when I have something to look at.
As an example: https://cheatsheetseries.owasp.org/cheatsheets/AJAX_Security...
I'm fascinated to know how this could actually be exploited. But there's no hint or reference to that. It's just "don't do this".
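For what it's worth, one rule along those lines that can be demonstrated concretely is "don't use eval() to parse server responses" - if an attacker can influence the response body, eval runs their code, while JSON.parse rejects it. This is a sketch of that one rule, not a claim about which item the cheat sheet leaves unexplained; `sink` is a stand-in for any reachable application state:

```typescript
// Stand-in for application state an attacker would like to tamper with.
const sink: { pwned?: boolean } = {};
let parseRejected = false;

// Attacker-influenced "JSON" coming back from an endpoint. It's really
// a JavaScript expression whose property value is an assignment.
const attackerResponse = '({"user": sink.pwned = true, "name": "bob"})';

// Unsafe: direct eval executes the embedded assignment as code.
const data = eval(attackerResponse); // sink.pwned is now true

// Safe: JSON.parse treats the same bytes as data and throws on
// anything that isn't strictly JSON, so the payload never runs.
try {
  JSON.parse(attackerResponse);
} catch {
  parseRejected = true;
}
```

That's the general shape of most of those "don't do this" rules: the dangerous API interprets attacker-reachable bytes as code, and the safe one interprets them as data. Agreed that the cheat sheets would be far more convincing if they showed this themselves.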