As stated in the announcement and Tom's email to -hackers, the reasons for advanced notifications are as follows:
* People watching for vulnerabilities and contributors are going to notice that we've stopped automatic updates -- it's better for our project to just tell them all why
* Upgrading relational databases is often not trivial -- we want to give our users time to schedule an upgrade rather than just dropping an important update suddenly
The way they are doing it now entices hackers who don't know the exploit but happen to have a recent clone of the repo to look for the big hole in hopes of finding it ahead of the fix. Granted, hackers are probably already doing that sort of thing on high-profile projects like PostgreSQL to begin with, but in my experience it is easier to find something exploitable when you already know something exploitable exists than it is when you're just randomly poking around. At the very least it makes it easier to stay motivated and focused.
Warning ahead of time is thus often very useful - it allows the infrastructure to prepare to make the changes quickly. This is the same reason that folks like Microsoft consolidate most patches into standardized cycles.
I disagree with that. That information is highly valuable. Auditing is a risky time investment; you may not find anything useful. Audit time is a finite resource and you want to allocate it where there are vulnerabilities that are useful. There is no way to know that ahead of time.
> Security holes are numerous and the ones that have escaped detection generally continue to do so - the rate of co-discovery is very low in the field.
The rate of co-discovery is fairly high once a second party has been tipped off to the general location and nature of a bug. Most competent auditors will spot the same bugs, especially if the second one already got confirmation that it does in fact exist.
If I had to guess where it is, though, I'd bet it was in a PL module. I'm sure there is quite a bit of activity around finding NativeHelper-like situations.
Given the precautions that have been implemented, my bets are on authentication. This would mostly affect TCP/IP enabled hosts, which is fortunately not a default configuration (tested on Ubuntu).
You say that now, then one day, you wake up and all the blue-eyed islanders are gone!
Is Postgres working with downstream teams to have everything in place for a coordinated security release? For instance, are they working with the likes of Debian's security team to not only make the source directly pullable, but also have releases available to as many users as possible in each platform's preferred formats?
If they are, how do they keep this under wraps? It seems like the kind of thing that would require a fairly wide "pre-disclosure", and managing trust in a large network gets hard.
Diving further, though: at what level do you say you trust the system? Do you trust your compilers to not inject malicious code? (see http://c2.com/cgi/wiki?TheKenThompsonHack) Do you trust peripheral devices? It's very easy to install a physical keylogger into a system. Do you trust your chipsets? Compromised chipsets exist and can be used against you. (http://blogs.scientificamerican.com/observations/2011/07/11/...)
It's a tough situation to deal with. This is part of the reason layered security solutions are typically employed. Even if one system has a zero-day, ideally multiple layers should increase the overall complexity of triggering it. One of those layers is security teams and blackout periods during which information is not released to the general public, even if they aren't always effective.
> two exploits discovered, one sent to half the team, the other sent to the other half
That would only work with brazen leaking. If a security team member were selling 0-days to organizations that intended to make extremely limited and careful use of them, it might never become public that exploits were being leaked.

> Do you trust your chipsets?
Certainly not. I do believe that the recently discovered tiny byte sequences that can lock Intel ethernet cards when sent in any TCP (UDP?) packet are actually a backdoor allowing the state to perform DoS at will. I also believe Huawei and ZTE are state-sponsored espionage companies (I've certainly seen weird things, like a keylogger inside a 3G Huawei USB device I bought in Europe).
But I do believe that even if I'm, say, a Debian or OpenBSD dev working on OpenSSL, it's amazingly complicated for a compromised chipset to modify source code and make it into the DVCS unnoticed. I also think that as long as the source code isn't corrupted, there are ways to create non-backdoored builds.
It's the same issue with program provers that can verify that certain pieces of code are guaranteed to be free of buffer overruns/overflows: what proves that the compiler itself hasn't been tampered with? But still... with DVCSes and many eyeballs, I'm not that concerned about the compilers typically used nowadays being tampered with.
They can tell Debian what is basically in this mail, and Debian can be ready to accept a new package.
As long as the Debian infrastructure is fully automated (it is), the actual time delay from pgsql's announcement to it hitting Debian servers would be only an hour or so.
Seriously - the entire premise of IT security (no matter the color of your hat) is the assumption that there is no such thing as a secure computer.
EDIT: Here is the answer: http://www.postgresql.org/support/versioning/
8.0 was EOL'd in 2010, but 8.4 will go through July 2014.
Be ready to patch as soon as it's out; this could be a big deal.
There is no way your DB server should be reachable from untrusted networks unless you're already heavily compromised, but even if it is properly restricted, you should still patch quickly.
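As a sketch of what "properly restricted" can look like (file names are the standard PostgreSQL config files; exact paths vary by distro), the server can be limited to loopback-only TCP/IP plus local sockets:

```ini
# postgresql.conf -- accept TCP/IP connections on the loopback
# interface only; remote hosts cannot reach the postmaster at all
listen_addresses = 'localhost'

# pg_hba.conf -- permit only local Unix-socket and loopback logins,
# with password authentication
local   all   all                 md5
host    all   all   127.0.0.1/32  md5
host    all   all   ::1/128       md5
```

This is roughly the Ubuntu default mentioned upthread, and it's why a hypothetical pre-auth network bug would mostly hit installations that deliberately opened `listen_addresses` up.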
A bug in query parameter parsing that would allow SQL injection attacks?
Worse would be a vulnerability that you could trigger just by manipulating query parameters. Then almost every postgres-backed website would be vulnerable.
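To make the distinction concrete, here is a minimal Python sketch (hypothetical table and input; the `%s` placeholder style is how psycopg2-like drivers bind parameters) of why classic injection requires user input to reach the SQL text itself. A server-side bug triggerable by the parameter *values* would be far worse, because it would bypass exactly this defense:

```python
# Hypothetical malicious input a website might receive in a form field.
user_input = "x'; DROP TABLE users; --"

# Unsafe: the input is spliced into the SQL text, so the server
# parses the attacker's DROP TABLE as part of the statement.
unsafe = "SELECT * FROM users WHERE name = '%s'" % user_input

# Safe pattern: the SQL text is fixed; the value travels separately
# as a bound parameter and is never parsed as SQL.
safe_sql = "SELECT * FROM users WHERE name = %s"
params = (user_input,)
# cursor.execute(safe_sql, params)  # driver call, not run here

print(unsafe)    # the injected DROP TABLE is inside the statement
print(safe_sql)  # the statement text never contains the input
```

If the hypothetical bug lived in the server's handling of bound parameter values, every site doing the "safe" thing above would still be exposed, which is what would make it such a big deal.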