1. Named protocol-level prepared statements in transaction mode have a PR that's pretty close to being merged: https://github.com/pgbouncer/pgbouncer/pull/845
2. SET/RESET tracking in transaction mode. For this we need some changes in the postgres protocol so that postgres can tell pgbouncer about setting updates. This is being worked on here: https://www.postgresql.org/message-id/flat/CAGECzQQOOhH1Rztu...
3. The single threading issue can be worked around by using multiple processes that listen on the same port using so_reuseport=1 https://www.pgbouncer.org/config.html#so_reuseport
4. The pg_dump issue can actually be solved already by installing the Citus extension and using track_extra_parameters (otherwise you have to wait for the same postgres protocol addition that's needed for the other SET/RESET commands) https://www.pgbouncer.org/config.html#track_extra_parameters
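On point 3, the workaround is just to start several pgbouncer processes with the same config. A sketch of the relevant pgbouncer.ini lines (the only pgbouncer-specific part is `so_reuseport`; the kernel then distributes incoming clients across the processes):

```ini
; pgbouncer.ini - run N copies of pgbouncer with this same config
; and they all bind the same port, one process per CPU core
listen_addr = 127.0.0.1
listen_port = 6432
so_reuseport = 1
```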
Your large-scale app probably won't need that by default. You pool connections in pgBouncer (or your application) because they're slow and expensive to set up. If you run a standard N-tier architecture and you have fifty 'backend apps' (say web apps) that talk to a communal PG server, you want pgBouncer to avoid connection churn. If your scale's a lot smaller than that then you don't need pgBouncer and you can get by with your app pooling a handful of conns --- subject to the usual vagaries that come with hand-wavey explanations. Measure, monitor, etc.
It's an interesting and illuminating article, but the take-away is don't do weird statement-based transaction multiplexing unless you know exactly why you want that!
If you're running into "connection limits" on Heroku or whatever... maybe now is the time to stop letting other people manage your DB and just run it yourself? You're clearly big enough to run up against that.
Or, maybe, release your session back to the pool (app/bouncer/whatever) instead of sitting on it. For OLTP stuff like 99% of all web apps, that's key. With 100 connections and a p99 request-response cycle with DB hits being <50ms, you can handle a lot of traffic.
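Back-of-envelope math for that claim, assuming each request holds its connection for the full 50 ms:

```python
connections = 100
db_time_per_request = 0.050  # seconds a request occupies a connection (p99)

# At saturation, each connection serves 1/0.050 = 20 requests/sec,
# so the fleet of 100 connections serves:
throughput = connections / db_time_per_request
print(throughput)  # 2000.0 requests/second
```

And that's the pessimistic p99 figure; the median request releases its connection far sooner, so real capacity is higher still.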
We use transaction level sharing. Practically, this means we occasionally see problems when per-connection state "leaks" from one client to another: someone issues a SQL statement that affects global connection state, and it affects the query of a subsequent client inheriting that state. It's annoying to track down, but once the behavior is understood, developers generally know how to scope their queries. Some queries aren't appropriate for going through pgbouncer, like cursor-based queries, so we just connect directly to the DB for the rare cases where this is needed.
Why so many connections? Say you make a Go based service, which launches one goroutine per request, and your API handlers talk to the DB - the way sql.DB connection pooling works in Go is that it'll grow its own pool to be large enough to satisfy the working parallelism, and it doesn't yield connections for a while. Similar things happen in Java, Scala, etc, and with dozens of services replicated across multiple failure domains, you get a lot of connections.
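Those per-language pools all behave roughly the same way. A toy sketch (names invented, not any real driver's API) of why a pool grows to match peak parallelism and then sits on the connections:

```python
import queue
import threading

class ToyPool:
    """Grows up to max_open connections on demand and keeps idle ones
    around - roughly how Go's sql.DB behaves with default settings."""
    def __init__(self, connect, max_open=100):
        self._connect = connect
        self._idle = queue.Queue()
        self._sem = threading.Semaphore(max_open)  # caps total connections
        self.opened = 0                            # how many we ever dialed

    def acquire(self):
        self._sem.acquire()
        try:
            return self._idle.get_nowait()   # reuse an idle connection
        except queue.Empty:
            self.opened += 1                 # none idle: dial a new one
            return self._connect()

    def release(self, conn):
        self._idle.put(conn)                 # idle conns linger, still
        self._sem.release()                  # holding server slots

pool = ToyPool(connect=lambda: object(), max_open=4)
conns = [pool.acquire() for _ in range(4)]   # a burst of parallel requests
for c in conns:
    pool.release(c)
pool.acquire()                               # later traffic reuses idle conns
print(pool.opened)  # 4 - the pool grew to the burst size and stays there
```

Multiply that high-water mark by every service replica and you see where the thousands of backend connections come from.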
It's a great tool. It allows you to provision smaller databases and save cost, at the cost of some complexity.
microservices talking to the same db... that's not microservices, that's a disaster. you basically combine the negatives of the microservice world with the negatives of the monolith: tight coupling.
> With 100 connections and a p99 request-response cycle with DB hits being <50ms, you can handle a lot of traffic.
You would be shocked and appalled at how little many devs know about DBs, or how long something _should_ take. I've had to explain that triple-digit msec latency on a simple SELECT is in fact not normal, and they should fix it. And that's just using it, not running it. There are a massive number of orgs that don't want to have to ever think about things like backup strategies or patching, and so they happily fork money over to a cloud provider to do it for them.
This is top advice actually. I would usually suggest this! I've found a lot of other services too that can run better on a local/self-hosted deployment.
HN discussion from a few months ago, with lots of commentary relevant to any pgBouncer scenarios: https://news.ycombinator.com/item?id=36393030
It has savepoints and those nest fine.
(perl's DBIx::Class can be configured to automatically convert transactions into savepoints if it's already inside another transaction; presumably any other ORM-like thing could also do that for you in theory but whether the one you're currently using -does- is left as an exercise to the reader)
And that article also lists a bunch of ORMs that handle subtransactions out of the box, for anybody curious.
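The savepoint pattern itself is tiny. A sketch using sqlite3 purely as a convenient stand-in (the SQL has the same shape in Postgres):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None  # manage transactions by hand
con.execute("CREATE TABLE t (v INTEGER)")

con.execute("BEGIN")                       # outer "real" transaction
con.execute("INSERT INTO t VALUES (1)")
con.execute("SAVEPOINT inner_txn")         # nested pseudo-transaction
con.execute("INSERT INTO t VALUES (2)")
con.execute("ROLLBACK TO inner_txn")       # undoes only the inner work
con.execute("COMMIT")                      # outer work survives

print(con.execute("SELECT v FROM t").fetchall())  # [(1,)]
```

This is exactly the rewrite those ORMs do for you: an inner `BEGIN` becomes `SAVEPOINT`, an inner commit becomes `RELEASE`, an inner rollback becomes `ROLLBACK TO`.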
Even if you are using a language with a built-in connection pooler like Go or Java, being able to manage your connections external to your application is enough of a benefit to keep the pooler separate.
Please sell me why you'd use one vendor to host your DB and another one to host your app. If they are in different racks (forget about different data centers), that's going to kill you on latency. The only way around that we've seen is to vectorize all your queries.
This blog post addresses this misleading documentation at https://jpcamara.com/2023/04/12/pgbouncer-is-useful.html#pre... .
> PgBouncer says it doesn’t support prepared statements in either PREPARE or protocol-level format. What it actually doesn’t support are named prepared statements in any form.
IMO that's still not accurate. You can use a named prepared statement just fine in transaction mode: start a transaction (so you aren't in autocommit), use a named statement, and it works fine. You just can't use it again in another transaction, because it will be "gone" (more accurately, "unmoored" - it might be in your session, might be in someone else's session). Making things worse, when the prepared statement is "unmoored", its name can then conflict with another client attempting to use the same name.
So to use named prepared statements, you can (less ideally) name them with random strings to avoid conflicts, or you can DEALLOCATE the prepared statement(s) you used at the end of your transaction. For our users that use asyncpg, we have them use a uuid for prepared statement names to avoid these conflicts (asyncpg added this feature for us here: https://github.com/MagicStack/asyncpg/issues/837). Alternatively, they can use DEALLOCATE ALL: set it as the `server_reset_query`, and also set `server_reset_query_always` so the reset runs at the end of every transaction even in transaction mode.

PgBouncer, IMO entirely misleadingly, documents this as "This setting is for working around broken setups that run applications that use session features over a transaction-pooled PgBouncer" - which is why nobody uses it: pgbouncer claims this is "broken". It's not any more broken than it is to switch out the PostgreSQL session underneath a connection that runs multiple transactions. PgBouncer can do better here and make this clearer and more accommodating of real-world database drivers.
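A sketch of that config (assuming `DEALLOCATE ALL` is an acceptable reset query for your workload - it drops only prepared statements, unlike the default `DISCARD ALL`):

```ini
; pgbouncer.ini - clear prepared statements whenever a server
; connection is handed back, even in transaction pooling
server_reset_query = DEALLOCATE ALL
server_reset_query_always = 1
```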
From fly.io's docs [1]:
> This Is Not Managed Postgres
[1] https://fly.io/docs/postgres/getting-started/what-you-should...
About 5 years ago we migrated to postgres and installed pgBouncer because of course you do. Sometime last year I started seeing some weird issues like the article points out, and the more I looked into the gory details, the more shocked I became. (The fact that statement level isolation even exists is… wow)
So I did an experiment and deleted it. Things kept working just fine. We’re not living in a microservices world so application-level pooling is enough — I don’t think I’ve ever seen us use half of available connections.
additionally with microservices, managing connection pooling can be difficult across legacy software, service versions, teams, etc.
PGBouncer lets me have a front-end and manage that at an infra level.
For Postgres a good way to scale is to just use a bigger server. You can get a lot of very fast storage and lots of CPU cores inside a single server today. And if that isn't enough you're far, far into territory where you can't give generic answers and it depends a lot on your specific use case and what you do with that database.
1. "Server level connection pooling" in the article (all clients share one pgbouncer instance) and
2. "Statement mode" if you can live without transactions (connection is re-pooled after every statement), otherwise "Transaction mode" (connection is re-pooled upon transaction commit or rollback).
This is real world experience from a moderately complex application.
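In pgbouncer.ini terms, option 2 above is a one-line choice (a sketch; per-database overrides are also possible):

```ini
; pgbouncer.ini
pool_mode = transaction   ; server released at COMMIT/ROLLBACK
; pool_mode = statement   ; server released after every statement;
;                         ; multi-statement transactions are disallowed
```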