I'm the lead author of JVector, which scales linearly to at least 32 cores and may be the only graph-based vector index designed around nonblocking data structures (as opposed to using locks for thread safety): https://github.com/jbellis/jvector/
JVector looks to be about 2x as fast at indexing as Lantern, ingesting the SIFT1M dataset in under 25s on a 32-core AWS box (m6i.16xl), compared to 50s for Lantern in the article.
(JVector is based on DiskANN, not HNSW, but the configuration parameters are similar -- both are configured with graph degree and search width.)
I agree that USearch is fast, but it feels pretty dishonest to take credit for someone else's work. Maybe at least honestly profile what's going on with USearch vs pgvector (..and which settings for pgvector??), and write something interesting about it?
The last time I tried Lantern, it would segfault when I tried to do anything non-trivial with it, and it was incredibly unsafe with how it handled memory. Hopefully that's at least fixed.. but Lantern has so many red flags.
Not sure if it's fair to compare USearch and pgvector. One is an efficient indexing structure, the other is more like a pure database plugin. Not that they can't be used in a similar fashion.
If you are looking for pure indexing benchmarks, you might be interested in USearch vs the FAISS HNSW implementation [1]. We ran them ourselves (as have a couple of other tech companies), so take them with a grain of salt -- they might be biased.
As for Lantern vs pgvector, impressed by the result! A lot of people would benefit from having fast vector-search compatible with Postgres! The way to go!
It wasn't a trivial integration by any means, and the Lantern team was very active, suggesting patches to the upstream version to ease integration with other databases. Some of those are tricky and have yet to be merged [2]. So stay tuned for USearch v3. Lots of new features coming :)
[1]: https://www.unum.cloud/blog/2023-11-07-scaling-vector-search... [2]: https://github.com/unum-cloud/usearch/pull/171/files
```
postgres=# CREATE INDEX ON sift USING ivfflat (v vector_l2_ops) WITH (lists=1000);
CREATE INDEX
Time: 65697.411 ms (01:05.697)
```
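For comparison, the HNSW variant in pgvector is created like this (the parameter values here are pgvector's documented defaults, shown for illustration -- not necessarily the benchmark's settings):

```
CREATE INDEX ON sift USING hnsw (v vector_l2_ops) WITH (m = 16, ef_construction = 64);
```

Higher `m` and `ef_construction` generally trade longer build times for better recall.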
You are right that there are many trade-offs between HNSW and IVFFLAT.
E.g. IVFFLAT requires a significant amount of data in the table before the index is created, and assumes the data distribution does not change with additional inserts (since it chooses centroids during the initial creation and never updates them).
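Two knobs follow from that design. A sketch, assuming a hypothetical index named `sift_v_idx` (`ivfflat.probes` is pgvector's real query-time setting):

```
-- rebuilding re-runs clustering and picks fresh centroids after the distribution shifts
REINDEX INDEX sift_v_idx;
-- at query time, trade speed for recall by scanning more lists
SET ivfflat.probes = 10;
```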
We have also generally had a harder time getting high recall with IVFFLAT on vectors from embedding models such as ada-002.
There are trade-offs, some of which we will explore in later blog posts.
This post is about one thing - HNSW index creation time across two systems, at a fixed 99% recall.
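For anyone unfamiliar with the metric: "99% recall" means the ANN index returns, on average, 99% of the true nearest neighbors that an exact search would find. A minimal sketch (function and variable names are mine, not from the post):

```python
def recall_at_k(approx_ids, exact_ids):
    """Fraction of the exact k nearest neighbors recovered by the ANN result."""
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

# toy example: the ANN index found 3 of the 4 true neighbors
print(recall_at_k([1, 2, 3, 5], [1, 2, 3, 4]))
```

Benchmarks like this one average that fraction over a set of held-out query vectors.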
Index creation time isn't a big deal to me, though; what I want is good queries, fast and cheap. So IMO RDS with pgvector is the easiest approach.
Your comment makes it sound like Marqo is a way to speed up pgvector indexing, but to be clear, Marqo is just another Vector Database and is unrelated to pgvector.
> https://github.com/lanterndata/lantern/blob/040f24253e5a2651...
> Operator <-> can only be used inside of an index
Isn't the use of the distance operator in a scan+sort critical for generating the exact results needed to validate the recall of an ANN index?
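With stock pgvector, the usual way to get the exact ground truth for a recall measurement is to force a sequential scan with the same operator (table and column names here are illustrative):

```
SET enable_indexscan = off;  -- force an exact scan+sort instead of the ANN index
SELECT id FROM sift ORDER BY v <-> '[0.1, 0.2, 0.3]' LIMIT 10;
```

If `<->` can only be used inside an index, that exact-scan path isn't available through the same operator.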
There's some context on the operator <?> here: https://github.com/lanterndata/lantern?tab=readme-ov-file#a-...
We think our approach will still significantly outperform pgvector because it does less work on your production database.
We generate the index remotely, on a compute-optimized machine, and only use your production database for index copy.
Parallel pgvector would have to use your production database resources to run the compute-intensive HNSW index creation workload.
Still, very impressive
Some Postgres offerings allow you to bring your own extensions to work around the limitations of these restrictive licenses, for instance Neon[1], where I work. I tried to look at the AWS docs for you, but couldn't find anything about that. I did find Trusted Language Extensions[2], but that seems to be more about writing your own extension. Couldn't find a way to upload arbitrary extensions.
I will add that you could use logical replication[3] to mirror data from your primary database into a Lantern-hosted database (or host your own database with the Lantern extension). This obviously has a couple downsides, but thought I would mention it.
[0]: https://github.com/lanterndata/lantern/commit/dda7f064ca80af...
[1]: https://neon.tech/docs/extensions/pg-extensions#custom-built...
[2]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Postg...
[3]: https://www.postgresql.org/docs/current/logical-replication....
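The logical-replication mirroring mentioned above looks roughly like this (publication/subscription names and the connection string are illustrative):

```
-- on the primary (e.g. RDS)
CREATE PUBLICATION lantern_pub FOR TABLE sift;
-- on the Lantern-enabled replica
CREATE SUBSCRIPTION lantern_sub
    CONNECTION 'host=primary.example.com dbname=app user=replicator'
    PUBLICATION lantern_pub;
```

The replica then receives row changes from the primary and can build its own Lantern indexes locally.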
[1] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Postg...
AWS is also a big proponent of pgvector, so it is more likely they would put more money into that, in my non-expert opinion.