The O(log N) search performance of a B-tree is not just extremely hard to improve on, it's impossible to improve on: the lower bound for searching in the DAM (disk-access machine) model is O(log(N)/log(B)) I/Os, and B-trees meet it.
But B-trees also cost O(log(N)/log(B)) I/Os for insertions, which is, it turns out, pretty damn slow. There's an optimal trade-off curve between insertion and query cost (the fastest structure for insertions is an append-only log, but that forces every query to read all the data), and B-trees sit on that curve, but there are many more interesting points on it.
There is a family of data structures that matches B-trees' performance on queries while blowing it completely out of the water for insertions. The COLA (cache-oblivious lookahead array) and the cache-oblivious streaming B-tree are where you'll find them in academia, and the implementation I work on has a very marketing-flavored name: the Fractal Tree. All that means is that we took the theory, started implementing it, and came up with enough innovations along the way that it sort of needed a new name, but it's spiritually the same concept.
I've written a fair amount about this that I'll link to at the end, but here's a brief description so you don't think I'm making this all up.
Basically what we do is take a B-tree and stick large buffers on all the internal nodes to accumulate messages. Want to insert something? Just put it in the root's buffer; you don't need to do any I/O to find the leaf node it ultimately belongs in (yet). If the buffer's full, flush it: take all its messages, partition them among the root's children, and put them in the children's buffers. Flushes can cascade down the tree as you'd imagine, and splits and merges work about the same as in a B-tree.
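The buffering scheme can be sketched in a few lines. This is a toy in-memory model (the node classes, the two-leaf tree, and the 4-message buffer size are my own illustrative choices for the demo, not the real Fractal Tree implementation):

```python
BUFFER_SIZE = 4  # messages an internal node holds before flushing (tiny, for demo)

class Leaf:
    def __init__(self, keys=None):
        self.keys = sorted(keys or [])

    def apply(self, messages):
        # In a real engine this is one large leaf I/O; here we merge in memory.
        self.keys = sorted(self.keys + messages)

class Internal:
    def __init__(self, pivots, children):
        self.pivots = pivots      # k pivots separating k+1 children
        self.children = children
        self.buffer = []          # pending insert messages

    def insert(self, key):
        # No I/O toward the leaves yet: the message just lands in the buffer.
        self.buffer.append(key)
        if len(self.buffer) >= BUFFER_SIZE:
            self.flush()

    def child_for(self, key):
        for i, pivot in enumerate(self.pivots):
            if key < pivot:
                return self.children[i]
        return self.children[-1]

    def flush(self):
        # Partition pending messages among the children; cascade if a child fills.
        pending, self.buffer = self.buffer, []
        for child in self.children:
            msgs = [k for k in pending if self.child_for(k) is child]
            if not msgs:
                continue
            if isinstance(child, Leaf):
                child.apply(msgs)
            else:
                child.buffer.extend(msgs)
                if len(child.buffer) >= BUFFER_SIZE:
                    child.flush()

root = Internal(pivots=[50], children=[Leaf(), Leaf()])
for k in [7, 80, 3, 61]:  # the fourth insert fills the root buffer and flushes
    root.insert(k)
print(root.children[0].keys, root.children[1].keys)  # [3, 7] [61, 80]
```

Note that the first three inserts do no work below the root at all; the flush then moves all four messages in one batch, which is the whole trick.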
So now what's the analysis?
Well, the tree has the same shape as a B-tree, so a search has to look at the same log(N) nodes. (In practice that's almost always just one I/O, for the leaf, because the internal nodes stay cached; that's true for both B-trees and Fractal Trees, and the asymptotic analysis likewise says they have the same query cost.)
For insertions though, let's walk through the process. The tree has height log(N), which means that, for an insertion to get "fully inserted", meaning it reaches the leaf node and won't be flushed any more, it has to get flushed down log(N) times. And what's the cost to flush a buffer full of messages? That's just O(1) I/Os, for the parent and children (and in practice it's actually only 2 I/Os because you can just flush to a single child, not all of them). But a buffer flush does work for O(B) messages at once, so the amortized cost to flush a single message down one level is just O(1/B). Do that log(N) times, and the insertion cost is O(log(N)/B), which is in practice around 100x less expensive than the B-tree's O(log(N)/log(B)) insertion cost.
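To see where the "around 100x" comes from, plug in some illustrative numbers (N and B here are assumptions for the sake of arithmetic, not measurements):

```python
import math

N = 2 ** 30  # about a billion keys (assumed)
B = 1024     # items per block (assumed)

btree_io = math.log2(N) / math.log2(B)  # O(log(N)/log(B)): 3 I/Os per insert
fractal_io = math.log2(N) / B           # O(log(N)/B): ~0.03 amortized I/Os

print(btree_io / fractal_io)  # -> 102.4, i.e. roughly 100x cheaper
```

The ratio works out to B/log(B), so the advantage actually grows as blocks get bigger.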
On top of this, while for B-trees you want small leaf nodes because you're going to be reading and writing them all the time, for Fractal Trees, since the goal is to get a lot done with each I/O, you actually want large leaves, on the order of a few megabytes each. This has two nice effects:
1) While range queries on a B-tree can slow down as the tree ages and the leaves get scattered randomly across the disk, range queries on a Fractal Tree stay fast, because each disk seek reads and returns a few megabytes' worth of data. This basically solves the "B-tree fragmentation" problem that makes database users run OPTIMIZE TABLE, VACUUM, reIndex(), or compact() like madmen.
2) Compression algorithms (like zlib, our default) compress large blocks of data much more effectively than small ones. So InnoDB, which uses small blocks like most B-trees, will burn CPU if you turn on compression as it tries, fails, and retries to squeeze your data into its block size, and even then it tops out around 4x compression. In contrast, TokuDB (our MySQL storage engine built on Fractal Trees [4]) routinely gets 10-20x compression without breaking a sweat.
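You can watch the block-size effect directly with Python's zlib. The data below is deliberately contrived (a random 16 KiB block repeated, so all the redundancy lives across block boundaries) and the exact numbers will vary run to run, but it shows why small independently compressed blocks lose:

```python
import os
import zlib

block = os.urandom(16 * 1024)  # incompressible on its own
data = block * 256             # 4 MiB with lots of long-range repeats

# One large block, as a multi-megabyte leaf allows: the repeats fall inside
# zlib's 32 KiB window, so they compress away.
whole = len(zlib.compress(data, 6))

# 256 independent 16 KiB blocks, as a small-page B-tree forces: each block
# is pure random data to the compressor, so almost nothing compresses.
chunked = sum(
    len(zlib.compress(data[i:i + 16 * 1024], 6))
    for i in range(0, len(data), 16 * 1024)
)

print(f"one 4 MiB block: {whole} bytes; 256 x 16 KiB blocks: {chunked} bytes")
```

On this input the single large block comes out well over 10x smaller than the sum of the small ones; real table data is less extreme, but the direction of the effect is the same.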
I have some blog posts about this at [1] and [2], and our benchmarks page is [3]. We also have a version of MongoDB in which we've replaced all the storage code with Fractal Trees; we call it TokuMX [5]. Don't mind the marketing haze, it's all serious tech under the hood.
[1]: http://www.tokutek.com/2011/09/write-optimization-myths-comp...
[2]: http://www.tokutek.com/2011/10/write-optimization-myths-comp...
[3]: http://www.tokutek.com/resources/benchmarks/