I would say they are reasonably efficient.
Of course it is many orders of magnitude slower than keeping it all in in-memory maps and doing the traversal there, but fast enough not to be a limiting factor.
Traversing a medium depth DAG with a million nodes to find orphaned nodes takes less than a second on average hardware.
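As a rough sketch of what that traversal looks like in plain SQL, here is a recursive CTE that finds nodes unreachable from the root. The schema (nodes/edges tables, root id 1) is made up for illustration, not from any particular project:

```python
import sqlite3

# Hypothetical schema: nodes(id), edges(parent, child); node 1 is the root.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE nodes(id INTEGER PRIMARY KEY);
    CREATE TABLE edges(parent INTEGER, child INTEGER);
    CREATE INDEX idx_edges_parent ON edges(parent);
    INSERT INTO nodes VALUES (1),(2),(3),(4);
    INSERT INTO edges VALUES (1,2),(2,3);  -- node 4 has no path from the root
""")

# Walk the DAG from the root, then report everything that was never reached.
orphans = con.execute("""
    WITH RECURSIVE reachable(id) AS (
        SELECT 1
        UNION
        SELECT e.child FROM edges e JOIN reachable r ON e.parent = r.id
    )
    SELECT id FROM nodes WHERE id NOT IN (SELECT id FROM reachable)
""").fetchall()
print(orphans)  # [(4,)]
```

The UNION (rather than UNION ALL) deduplicates visited nodes, so the query terminates even when many paths lead to the same node.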
One thing to be aware of is that SQLite has lots of tuning options, and they are all set to very conservative values by default.
E.g. the default synchronous setting is FULL, which means that it will flush all the way to disk on every commit. The default page cache is tiny (on the order of 2 MB).
With a bit of tuning you can get quite decent performance out of SQLite while still having full ACID guarantees, or very good performance for cases where you can compromise on the ACID stuff.
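A minimal example of that kind of tuning, using Python's built-in sqlite3 module (the file path is just a placeholder): WAL mode plus synchronous=NORMAL keeps the database consistent across crashes but may lose the last few commits on power failure, which is the usual "compromise on the ACID stuff" trade:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(db_path)

# WAL avoids rewriting a rollback journal on every transaction and
# allows concurrent readers while a writer is active.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# With WAL, synchronous=NORMAL only fsyncs at checkpoints, not per commit;
# the database stays uncorrupted, but recent commits can be lost on power loss.
con.execute("PRAGMA synchronous=NORMAL")

# Negative cache_size is in KiB; this asks for a ~64 MiB page cache
# instead of the ~2 MiB default.
con.execute("PRAGMA cache_size=-64000")

print(mode)  # wal
```

For full durability, keep synchronous=FULL and still enjoy the WAL concurrency benefits; the pragmas are per-connection (except journal_mode, which persists in the file), so set them right after connecting.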
I have not yet found a situation where NoSQL key-value stores like LevelDB offer an order-of-magnitude advantage over SQLite, and SQLite is so much more powerful and robust...
https://danluu.com/file-consistency/