The dream is a single data mesh presenting an SQL userland where I can write and join data from across the business with high throughput and low latency. With that, I can kill off basically every microservice that exists, and work on stuff that matters at pace, instead of half of all projects being infrastructure churn. We are close but we are not there yet and I will be furious if people stop trying to reach this endgame.
That exists, and has for years: an extremely large DB loaded to the gills with RAM and local NVMe drives. Add some read replicas if you need them, similarly configured. Dedicate one for OLAP.
Ignoring the storage size limits, the real issue as you scale up is that the I/O schedulers, caching, and low-level storage engine mechanics in a large SQL database are not designed to operate efficiently on storage volumes this large. They will work technically, but scale quite a bit more poorly than people expect. The internals of SQL databases are (sensibly) optimized for working sets no larger than 10x RAM size, regardless of the storage size. This turns out to be the more practical limit for analytics in a scale-up system even if you have a JBOD of fast NVMe at your disposal.
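If you want a crude gauge of whether a Postgres box is still inside that envelope, the buffer cache hit ratio from pg_stat_database is a reasonable first look. A minimal sketch (note the caveat in the comments — this is an approximation, not a definitive measurement):

```sql
-- Crude working-set gauge: what fraction of block requests hit shared_buffers?
-- (Caveat: blks_read may still be served by the OS page cache, so this
-- overstates real disk I/O; treat it as a first approximation, not gospel.)
SELECT datname,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE blks_hit + blks_read > 0;
```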
We built an HTAP platform as a layer over Cassandra for precisely that reason, right around the time Gartner coined the term.
In finance and fintech, there are ample use cases where the need for transactional consistency and the need for horizontal scalability to process and report on large data volumes come together, and where the banks really struggle to meet requirements.
I dug out an old description of our platform, updated it a bit, and put it on Medium, in case anyone is interested: https://medium.com/@paul_42036/a-technical-description-of-th...
1) Cloud data warehouses like Redshift, Snowflake, and BigQuery proved to be quite good at handling very large datasets (petabytes) with very fast querying.
2) Customers of these proprietary solutions didn't want to be locked in. So many are drifting toward Iceberg tables on top of Parquet (columnar) data files.
Another "hidden" motive here is that Cloud object stores give you regional (multi-zonal) redundancy without paying extra inter-zonal fees. An OLTP database would likely have to pay this cost, as it likely won't be built purely on object stores; it'll need a fast durable medium (disk), at least for the WAL or the hot pages. So the topology of Cloud object stores is another force behind the split between OLTP and OLAP.
But what does this new world of open OLTP/OLAP technologies look like? Pretty complicated.
1) You'd probably run Postgres as your OLTP DB, as it's the default these days and scales quite well.
2) You'd set up an Iceberg/Parquet system for OLAP, probably on Cloud object stores.
3) Now you need to stream the changes from Postgres to Iceberg/Parquet. The canonical OSS way to do this is to set up a Kafka cluster with Kafka Connect. You use the Debezium CDC connector for Postgres to pull deltas, then write to Iceberg/Parquet using the Iceberg sink connector (sketch of the Postgres side below). This incurs extra compute, memory, network, and disk.
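The Postgres side of that setup is small but fiddly; here's a minimal sketch of what a CDC tool like Debezium needs from the database (publication/slot/table names are hypothetical):

```sql
-- Logical decoding must be on (requires a Postgres restart):
ALTER SYSTEM SET wal_level = 'logical';

-- Publish the tables you want mirrored into Iceberg:
CREATE PUBLICATION cdc_pub FOR TABLE orders, customers;

-- Debezium normally manages its own replication slot,
-- but this is what one looks like:
SELECT * FROM pg_create_logical_replication_slot('cdc_slot', 'pgoutput');
```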
There are so many moving parts here. The ideal is likely a direct Postgres->Iceberg write flow built into Postgres. The pg_mooncake extension this company is offering also adds DuckDB-based querying, but that's likely not necessary if you plan to use Iceberg-compatible query engines anyway.
Ideally, you'd have one plugin that purely streams Postgres writes to Iceberg with some defined lag. That would cut out the third step above.
Yep. At the scope of a single table, append-only history is nice but you're often after a clone of your source table within Iceberg, materialized from insert/update/delete events with bounded latency.
There are also nuances like Postgres REPLICA IDENTITY and TOAST columns. Enabling REPLICA IDENTITY FULL amplifies your source DB's WAL volume, but without it your CDC updates will clobber your unchanged TOAST values.
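Concretely, the knob and its cost look like this (table name hypothetical):

```sql
-- Default replica identity logs only the primary key plus new values on
-- UPDATE, so a CDC consumer sees unchanged TOASTed columns as placeholder
-- values. FULL logs the entire old row: TOAST values survive the trip,
-- but WAL volume grows accordingly.
ALTER TABLE documents REPLICA IDENTITY FULL;
```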
If you're moving multiple tables, ideally your multi-table source transactions map into corresponding Iceberg transactions.
Zooming out, there's the orchestration concern of propagating changes to table schema over time, or handling tables that come and go at the source DB, or adding new data sources, or handling sources without trivially mapped schema (legacy lakes / NoSQL / SaaS).
As an on-topic plug, my company tackles this problem. Postgres => Iceberg is a common use case.
[0] https://docs.estuary.dev/reference/Connectors/materializatio...
CDC from OLTP to Iceberg is extremely non-trivial.
The current Iceberg architecture forces each table read to do many small reads of the files in the metadata tree: catalog pointer, then table metadata file, then manifest list, then manifests, before you ever touch data.
The brand new DuckLake post makes all this clear.
https://duckdb.org/2025/05/27/ducklake.html
Still, Iceberg will probably do just fine, because every data warehousing vendor is adding support for it. Worse is better.
Fully using Postgres without awareness of Iceberg would require full decoupling and a translation layer in between (Debezium, etc.). That comes with its own problems.
So perhaps some intimacy between the Postgres and Iceberg schemas is a good thing, especially to support transparent schema evolution.
DuckLake and Crunchy Bridge both support SQL queries on the backing Iceberg tables. That's a good option. But a big part of the value of Iceberg comes from being able to read the same tables with Spark, Flink, etc.
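For instance, once the table lives in an Iceberg catalog, Spark SQL can read it directly. A sketch, with hypothetical catalog/table names and a made-up snapshot id:

```sql
-- Reading the same Iceberg table from Spark SQL (names hypothetical):
SELECT event_date, count(*) AS events
FROM my_catalog.analytics.events
GROUP BY event_date;

-- Iceberg time travel, surfaced through Spark SQL's time-travel syntax:
SELECT * FROM my_catalog.analytics.events VERSION AS OF 12345;
```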
> pg_mooncake is a PostgreSQL extension adding columnstore tables with DuckDB execution for 1000x faster analytics. Columnstore tables are stored as Iceberg or Delta Lake tables in your Object Store. Maintained by Mooncake Labs, it is available on Neon Postgres.
Seems to summarise the reason this article exists.
Not that I really disagree with the premise or conclusion of the article itself.
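To be fair, the model it's pitching is appealing. From my reading of the announcement, usage looks roughly like this — syntax recalled from their docs, so treat it as a hedged sketch rather than gospel:

```sql
-- Hypothetical sketch of the pg_mooncake model: a columnstore table that
-- lands in object storage as Iceberg/Delta, queryable from plain Postgres.
CREATE TABLE events_analytics (ts timestamptz, user_id bigint, payload jsonb)
    USING columnstore;

INSERT INTO events_analytics SELECT ts, user_id, payload FROM events;
SELECT user_id, count(*) FROM events_analytics GROUP BY user_id;
```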
My work involves a "disaggregated data stack" and a ton of work goes into orchestrating all the streaming, handling drift, etc. between the transactional stores (HBase) and the various indexes like ES. For low-latency OLAP queries, the data lakes can't always meet the need either. I haven't gotten the chance to see an HTAP database in action at scale, but it sounds very promising.
The timeline is a bit off - Oracle V2 was released in the second half of 1979, so although it technically came out at the very end of the 1970s, it isn't really representative of 1970s databases. Oracle V1 was never released commercially; it was used as an internal name while under development starting circa 1977, inside SDL (which renamed itself RSI in 1979, and then Oracle in 1983). Plus, Larry Ellison wanted the first release to be version 2 because some people are hesitant to buy version 1 software. Oracle was named after a database project Ellison worked on for the CIA while employed at Ampex, although I'm not sure anyone can really know exactly how much the abandoned CIA database system had in common with Oracle V1/V2; it definitely took some ideas from the CIA project, but I'm not sure whether it took any of the actual code.
The original DB2 for MVS (later OS/390 and now z/OS) was released in 1983. The first IBM RDBMS to ship as a generally available commercial product was SQL/DS in 1981 (for VM/CMS), which this century was renamed DB2 for VM/VSE. I believe DB2/400 (now renamed DB2 for IBM i) came out with the AS/400 and OS/400 in 1988, although possibly there was already some SQL support in S/38 in the preceding years. The DB2 most people would encounter nowadays, the Linux/AIX/Windows edition (DB2 LUW), is a descendant of OS/2 EE Database Manager, which I think came out in 1987. Anyway, my point - the various editions of DB2 all saw their initial releases in the 1980s, not the 1970s.
While relational technology was invented as a research concept in the 1970s (including the SQL query language, and several now largely forgotten competitors), in that decade its use was largely limited to research, along with a handful of commercial pilots. General commercial adoption of RDBMS technology didn’t happen until the 1980s.
The most common database technologies in the 1970s were flat file databases (such as ISAM and VSAM databases on IBM mainframes), hierarchical databases (such as IBM IMS), the CODASYL network model (e.g. IDS, IDMS), MUMPS (a key-value store with hierarchical keys), early versions of PICK, and inverted list databases (ADABAS, Model 204, Datacom). I think many (or even all) of these were more popular in the 1970s than any RDBMS. The first release of dBase came out in 1978 (albeit then called Vulcan; it wasn't named dBase until 1980), but like Oracle it falls into the category of "technically released in the late 1970s but didn't become popular until the 1980s".
I thought this was such an important point. Sooooo many dev hours were spent figuring out how to do distributed writes, and for a lot of companies that work was never needed.
For example, look at how Google Cloud SQL's aptly named "High Availability" configuration supports high availability: 1 primary and 1 standby. The standby is synced to the primary, and the roles are switched if a failover occurs.
- Availability: spin up more read replicas.
- Durability: spin up more read replicas and also write to S3 asynchronously.
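Mapping that primary-plus-synced-standby pattern onto stock Postgres is mostly configuration. A minimal sketch, assuming streaming replication is already set up and using a hypothetical standby name:

```sql
-- Make commits wait for the synced standby, Cloud SQL HA-style:
ALTER TABLE is not needed here; this is a config change, reloadable live:
ALTER SYSTEM SET synchronous_standby_names = 'standby1';  -- name hypothetical
SELECT pg_reload_conf();

-- Watch standby health and lag from the primary:
SELECT application_name, sync_state, replay_lag
FROM pg_stat_replication;
```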
With Postgres on Neon, you can have both of these very easily. Same with Aurora.
(Disclaimer: I work at Neon)
Certainly neither product has much obvious need for OLTP workloads. Hell, neither has any need for transactions at all. You're just paying them for raw CPU.
One of the biggest problems with having more data is it's just hard to manage. That's why cloud data warehouses are here to stay. They enable the "utility computing" of cloud compute providers, but for data. I don't think architecture is a serious consideration for most people using it, other than the idea that "we can just throw everything at it".
NewSQL didn't thrive because it isn't sexy enough. A thing doesn't succeed because it's a "superior technology", it survives if it's overwhelmingly more appealing than existing solutions. None of the NewSQL solutions are sufficiently sexier than old boring stable databases. This is the problem with every new database. I mean, sure, they're fun for a romp in the sheets; but are they gonna support your kids? Interest drops off once everyone realizes it's not overwhelmingly better than the old stuff. Humans are trend-seekers, but they also seek familiarity and safety.
It seems that people are now converging on this pseudo-math database solution, namely PostgreSQL with its battle-hardened object-relational technology, which is IMHO a local minimum [1].
The world needs a proper math-based universal solution for database technology, similar to relational. But this time around we need many more features; we want it all, including analytical, transactional, spreadsheet, graph, vector, signal, etc. On top of that, we want a reliable distributed architecture. We simply cannot keep bolting things onto PostgreSQL indefinitely, because the complexity will become humongous and the solutions sub-optimal [2].
We need a strong database foundation with a solid mathematical basis, not unlike the original relational database technology.
The best candidate available now is D4M from the fine folks at MIT, which has been implemented in Matlab, Python, and Julia [3]. Perhaps someone needs to write a C++, Dlang, or Rust version of it for it to become widely acceptable.
It's funny that the article starts by citing the popular "Big Data Is Dead" article as its inspiration, and in doing so prematurely dismisses the problem. The book on D4M, however, embraces the big data problem head-on by putting the exact terminology in its title [4].
[1] What’s the Difference Between MySQL and PostgreSQL?
https://aws.amazon.com/compare/the-difference-between-mysql-...
[2] Just Use Postgres!
https://www.manning.com/books/just-use-postgres
[3] D4M: Dynamic Distributed Dimensional Data Model
[4] Mathematics of Big Data: Spreadsheets, Databases, Matrices, and Graphs (MIT Lincoln Laboratory Series)
https://mitpress.mit.edu/9780262038393/mathematics-of-big-da...
Clickhouse performance for Postgres workloads?
Prof. Viktor Leis suggested [0] that SQL itself - being so complex to implement and so ineffectively standardized - may be the biggest inhibitor to faster experimentation in the field of database startups. It's a shame there's no clear path to solving that problem directly.
Why wouldn't it? The resources needed to run the backend of Cursor come from the compute for the AI models. Updating someone's quota in a database every few minutes is not going to be causing issues.
Today big boxes are big. Really big. Stuff like 128 cores, 1TB RAM, and dozens of terabytes of incredibly fast RAIDed flash storage is available out there.
They're also more reliable than they used to be. Hardware still fails of course, but it doesn't fail as often as OG spinning disk did.
dynamo and mongo are huge, redis and kafka (and their clones) are ubiquitous, etc etc
Rich Hickey argued [0] that place-orientation is bad and that a database should actually just be an immutable value which can be passed around freely. That's fairly in line with the conclusions of the post, although I think much more simplification of the disaggregated stack is possible.
[0] https://www.infoq.com/presentations/Deconstructing-Database/
Well no, not according to your own source:
> This setup consists of one primary database and dozens of replicas.
Are they just fine? There have been several instances in the past where issues related to PostgreSQL have led to outages of ChatGPT.
OK but let's pretend it's acceptable to have outages. It's fine apart from that?
> However, "write requests" have become a major bottleneck. OpenAI has implemented numerous optimizations in this area, such as offloading write loads wherever possible and avoiding the addition of new services to the primary database.
I feel that! I've been part of projects where we've finished building a feature, but didn't let customers have it because it affected the write path and broke other features. It's been less than a week since someone in the company posted in Slack "we tried scaling up the db (Azure mssql) but it didn't fix the performance issues."
Network round trip? Scaling the instance aint gonna help. Row by agonizing row? Maybe some linear speedups as you get more IO, but cloud storage is pretty fucking slow. Terrible plan/table/indexing/statistics? Still gonna be bad with more grunt. Blocking and locking and deadlocking the problem? Speeding up might make it worse :)
If people have exponential problems they don't think "let's just get more machines", they think "let's measure and fix the damn thing", but for some reason that doesn't apply to most people's databases.
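And the measuring is not exotic. A sketch of the first thing to look at before buying a bigger instance (the query itself is hypothetical):

```sql
-- The actual plan tells you whether more hardware can even help:
-- misestimated row counts, sequential scans, spilled sorts, and buffer
-- counts all show up here, and none of them are fixed by more cores.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.customer_id, sum(o.total) AS day_total
FROM orders o
WHERE o.created_at > now() - interval '1 day'
GROUP BY o.customer_id;
```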
It’s because RDBMS effectively hasn’t changed in decades, and so requires fundamental knowledge of how computers work, and the ability to read dense technical docs. If those two clauses don’t seem related, go read the docs for HAProxy, or Linux man pages, or anything else ancient in the tech world. It used to be assumed that if you were operating complex software, you necessarily understood the concepts it was built on, and also that you could read dozens of pages of plaintext without flashy images and effects.
That’s not to say that all modern software assumes the user is an idiot, or has terrible docs – Django does neither, for example.
> Network round trip? Scaling the instance aint gonna help. Row by agonizing row? Maybe some linear speedups as you get more IO, but cloud storage is pretty fucking slow.
See previous statement re: fundamentals. “I need more IOPS!” You have a 1 msec read latency; it doesn’t matter how quickly it comes off the disk (never mind the fact that the query is probably in a single thread), you have the same bottleneck.
HTAP in SQL Server, for reference.
We are building this platform as well. There are two aspects to it: the "enterprise way" and the "greenfield way". The greenfield way will win out in 10-15 years, but unless you have the capital to last that long, you have to go the enterprise way first; as a startup, that's our path until we are big enough to go the unified HTAP-style way. The Lakehouse, open columnar data, is here to stay. It needs a better connection to OLTP than Kafka, but getting from A to B will take time.
This is an architectural decision of the cloud providers to some extent. Linux can drive well over 1 Tbps of direct-attached storage bandwidth on a modern server but that bandwidth is largely beyond the limits of cheap off-the-shelf networking that disaggregated storage is often running over.
https://learn.microsoft.com/en-us/azure/azure-sql/database/h...
Like realtime dashboards/reports as the transactions are coming in.
Think of a SaaS with high usage.
The analytics you're referring to use the slower-moving "ETL all the source data together" approach, and then analyze it.
Different use cases.
Clearly, the objectives and limitations of OLAP and OLTP differ so much that merging the two domains is a fantasy.
It's like asking two people to view through the same lens.
In fact, relational databases did nothing in the 1970s. They didn't even exist yet in commercial form.
My first prediction as an analyst from 1982 onwards was that "index-based" DBMS would take over from linked-list DBMS and flat files. (That was meant to cover both inverted-list and relational systems; I expected inverted-list DBMS to outperform relational ones for longer than they did.)