1) There are two primary ways to get embeddings out of models: implicitly, from a language model, by mean-pooling its last hidden state (it has to learn to map text into a meaningful latent space anyway to work at all; e.g. DistilBERT), or from a model trained to produce embeddings directly, using something like triplet loss to explicitly incentivise learning similarity/dissimilarity. Popular text-embedding models like BAAI/bge-large-en-v1.5 take the latter approach. Both are sketched in code right after this list.
2) The famous word2vec examples, e.g. king - man + woman ≈ queen, only work because word2vec is a shallow network that learns the word embeddings directly, rather than the structure being emergent. The latent space still maps those words close together, as this demo shows, but there isn't any exact algebraic structure. You can get close with the arithmetic (the second sketch after this list shows how close) but no cigar.
3) DistilBERT is pretty old (2019), distilled from a 2018 model trained on Wikipedia and books, so there will be significant drift relative to present-day text, and it lags models trained with newer techniques on more robust datasets. I do not recommend using it for production applications nowadays.
4) There is an under-discussed opportunity for dimensionality reduction techniques like PCA (which this demo uses to get the data into 3D; the first sketch below includes this step) to improve both signal-to-noise and distinctiveness. I am working on a blog post on a new technique that handles dimensionality reduction for text embeddings better, which may have interesting and profound usability implications.
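To make points 1 and 4 concrete, here's a minimal sketch of both embedding approaches plus the PCA-to-3D step. The model names are just the ones mentioned above; the texts are arbitrary examples.

```python
# Sketch: mean-pooled hidden states vs. a purpose-trained embedding model,
# then PCA down to 3D, as the demo does. Assumes torch, transformers,
# sentence-transformers, and scikit-learn are installed.
import torch
from transformers import AutoModel, AutoTokenizer
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

texts = [
    "the king wore a crown",
    "the queen wore a crown",
    "stock prices fell sharply",
    "the dog chased the ball",
]

# (1) "Implicit" embeddings: mean-pool an encoder LM's last hidden state
# over non-padding tokens (DistilBERT here, per point 1 above).
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
lm = AutoModel.from_pretrained("distilbert-base-uncased")
batch = tok(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = lm(**batch).last_hidden_state        # (batch, seq_len, 768)
mask = batch["attention_mask"].unsqueeze(-1)      # ignore padding positions
implicit = (hidden * mask).sum(1) / mask.sum(1)   # (batch, 768)

# (2) Embeddings from a model trained explicitly for similarity
# (contrastive / triplet-style objectives).
bge = SentenceTransformer("BAAI/bge-large-en-v1.5")
trained = bge.encode(texts)                       # (batch, 1024) numpy array

# (4) PCA down to 3 dimensions for plotting, like the demo.
coords = PCA(n_components=3).fit_transform(trained)
print(coords.shape)                               # (4, 3)
```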
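And on point 2, the "close but no cigar" effect is easy to see with classic static word vectors. A minimal sketch using gensim's downloader; the GloVe model is just one convenient choice:

```python
# Sketch: king - man + woman lands *near* "queen", not exactly on it.
# Assumes gensim is installed; the vectors download on first use.
import gensim.downloader as api

vecs = api.load("glove-wiki-gigaword-100")

# Nearest neighbors of (king - man + woman); gensim excludes the inputs.
print(vecs.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# "queen" usually tops the list, but only because it's the closest point:
# the raw arithmetic result is not equal to queen's vector.
```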
I was wondering if there was a universal signal that could be used as the identity, such that from that signal one could measure the distance to any other signal via the relation not(other). That is to say, the identity of any X would be precisely not-everything-else. Said another way: every thing is what it is because it is exactly not everything else.
So, thinking from first principles as much as possible, I wondered: is it possible to represent everything as some frequency? A Fourier-transform analog for every “time slice” of a thing? This is where it gets slightly slippery.
So the idea was to build relationship, identity, and labeling from a simple rule set: things arising out of the relation of not being other things.
In my mind I saw nodes on a graph forming in higher dimensions as halfway points for any comparison. Comparisons create new nodes that implicitly have a distance to all other things. It made sense in my mind that there would be an algorithmic annealing: new nodes start in a “low density, higher-energy state” that lets them move faster in this universal emergent ontology/spatial space, eventually getting denser and slower as the system cools.
So the system implicitly also has a snapshot of events or interactions, where every comparison has a “tick” that encodes a particular density relation for the set of nodes it’s associated with.
The idea that cemented it all together was to treat each node like an address:chord. Similar to chording keys like a-b-c in some UX programs, but also exactly like chords in music.
The idea being that when multiple “things” are dialed in at the same time, their proximity and co-activation becomes its own emergent label, with new incoming information classified by its distance to not(signal).
I didn’t really realize how close this idea was to what encoders/decoders seem to be doing, although I do know I’m trying to think my way toward a universal solution that doesn’t require special encoders for every media type. Hence the Fourier-transform path.
Know anything like this or am I spitting idiocy?
Which is more or less word2vec, as far as I understand, but then extrapolated as a universal principle to all things that can be represented: give everything a common signature, a hash based off a signal like a complex waveform, then diff signal composition and shape/bandwidth to compare properties across things; when they reference similar objects, even in different modalities, they’d become associated by being triggered together.
So “dog” and an image of a dog would both translate to a primordial signal/identity representation; the comparison happens in the frequency domain and projects a coordinate in the spatial sense, and eventually those two nodes would tend to be triggered at the same time, since “dog” is likely to appear next to an image of a dog when parsing information across future events.
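FWIW, the frequency-domain comparison part of this is easy to toy with, even if mapping arbitrary things (text, images) onto signals is the hard, unsolved part. A purely illustrative sketch with made-up 1D signals; nothing here actually encodes “dog”:

```python
# Toy sketch: compare two signals by the cosine similarity of their
# FFT magnitude spectra. The signals are synthetic stand-ins; mapping
# real content onto such signals is the open question in the idea above.
import numpy as np

def spectrum(signal):
    return np.abs(np.fft.rfft(signal))

def spectral_similarity(a, b):
    sa, sb = spectrum(a), spectrum(b)
    return float(np.dot(sa, sb) / (np.linalg.norm(sa) * np.linalg.norm(sb)))

t = np.linspace(0, 1, 512, endpoint=False)
sig_a = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
sig_b = np.sin(2 * np.pi * 5 * t) + 0.4 * np.sin(2 * np.pi * 22 * t)  # similar composition
sig_c = np.random.default_rng(0).normal(size=512)                     # unrelated noise

print(spectral_similarity(sig_a, sig_b))  # high: overlapping frequency content
print(spectral_similarity(sig_a, sig_c))  # lower: different spectral shape
```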
Whew. Maybe I’m just talking to myself. At least it’s out there if it makes sense to anyone else.
I have this, except you can see every single word in any dictionary at once in space; it renders individual glyphs. It can show an entire dictionary of words, definitions, and roots, and let you fly around in them. It’s fun. I built a sample that “plays” a sentence and its definitions: GitHub.com/tikimcfee/LookAtThat. The more I see stuff like this, the more I want to complete it. It’s heartening to see so many people fascinated with seeing words… I just wish I knew where to find these people to, like, befriend and get better. I’m getting the feeling I just kind of exist between worlds: lofty ideas on one side, and incredibly smart people sticking around other incredibly smart people on the other.
E.g. what is the real distance between the two vectors? That should be easy to compute.
Similarly: what do I get from summing two vectors, and what are some nearby vectors?
Maybe just generally: what are some nearby vectors?
Without any additional context it's just a point cloud with a couple of randomly labeled elements
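Those first questions at least have mechanical answers, even if the viz doesn't surface them. A sketch with sentence-transformers and numpy; the model name and word list here are just arbitrary example choices:

```python
# Sketch: cosine distance between two vectors, the sum of two vectors,
# and nearest neighbors of that sum within a small word set.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
words = ["man", "woman", "king", "queen", "ruler", "force", "powerful", "care"]
vecs = model.encode(words, normalize_embeddings=True)  # unit-length rows

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b))

king, queen = vecs[words.index("king")], vecs[words.index("queen")]
print("distance(king, queen):", cosine_distance(king, queen))

# Sum two vectors, renormalize, then list the nearest neighbors in the set.
combo = king + queen
combo /= np.linalg.norm(combo)
sims = vecs @ combo
for i in np.argsort(-sims)[:3]:
    print(words[i], float(sims[i]))
```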
People rarely have to get down to the real metal on embedding models, and they're not what people think they are from their memory of word2vec. E.g. there's actually one vector emitted _per token_; the final vector is the mean of those (see the sketch after this comment). And cosine distance for similarity is the only metric anyone is training for.
In summary, there's ~no reason to think a visualization trying to show multiple vectors will ever be meaningful. Even just starting from "they have way way way more dimensions than we can represent visually" is enough to rule it out
MiniLM v2, the foundation of most vector DBs, is 384 dims.
n.b. dear reader, if you've heard of that: you should be using v3! V3 is for asymmetric search, aka query => result docs. V2 is for symmetric search, aka chunk of text => similarly worded chunks of text. It's very, very funny how few people read the docs; in this case, the sentence-transformers site.
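To make the per-token point concrete: sentence-transformers will hand back the token-level vectors directly, and for a mean-pooling model like all-MiniLM-L6-v2 the sentence vector is just their average, L2-normalized. A sketch, assuming that model:

```python
# Sketch: one vector per token; the sentence embedding is their mean,
# then L2-normalized (per this model's pooling + normalize config).
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
text = "embeddings are weirder than you remember"

tokens = model.encode(text, output_value="token_embeddings", convert_to_tensor=True)
sentence = model.encode(text, convert_to_tensor=True)       # (384,)

pooled = tokens.mean(dim=0)                 # average the per-token vectors
pooled = pooled / pooled.norm()             # normalize, as the model does
print(tokens.shape)                         # (num_tokens, 384): one vector per token
print(torch.dot(pooled, sentence).item())   # ~1.0: the mean really is the embedding
```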
I hadn't planned to keep building this but if I do, what should I add/change?
It’s actually quite approachable to play with, and some of the “wut?” comments may be best answered by a little more experimentation on the user’s side, haha. I think the content itself is tricky, which may trip people up.
Something I’ve seen before that may be interesting: doing something with the definitions of words. ATM you’re using a source list of words and visualizing their embedded vectors. But what if you visualized not just the words themselves, but the ordered list of words that makes up each word’s definition(s), in some visible spatial relationship? This would look interesting because around (connected to?) each word would be its meaning; changing the definition (the contextual use of the word) would change the meaning shown… and also change the connected word nodes in the graph. I envision ordered lines and colored words in this style. (There’s a rough sketch of the data structure after this comment.)
If you end up doing something like that, start with, like… a “sentence player”. At the moment you show the words all at once. What would it look like to “animate” the appearance of the words and their relationships by definition?
Anyway. Thanks for getting this far, haha. This is a really fascinating project and I’m glad you shared it. Please do tell if any of this is close or far off from something you might be interested in!
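In case it helps, the data structure behind that suggestion is pretty small. A hypothetical sketch of the word-to-definition-words graph, with a made-up mini dictionary and a trivial “sentence player” loop:

```python
# Sketch: a graph where each word connects, in order, to the words of
# its definition. A "sentence player" would walk these edges over time.
# The mini dictionary below is a made-up stand-in for a real one.
from collections import defaultdict

definitions = {
    "king":  ["male", "ruler", "of", "a", "kingdom"],
    "queen": ["female", "ruler", "of", "a", "kingdom"],
    "ruler": ["one", "who", "governs"],
}

# word -> ordered list of (position, definition_word) edges
graph = defaultdict(list)
for word, definition in definitions.items():
    for i, def_word in enumerate(definition):
        graph[word].append((i, def_word))

def play_sentence(sentence):
    """Yield each word, then its definition words in order, like animation ticks."""
    for word in sentence.split():
        yield ("word", word)
        for i, def_word in graph.get(word, []):
            yield ("definition", word, i, def_word)

for event in play_sentence("queen ruler"):
    print(event)
```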
> man woman king queen ruler force powerful care
and couldn't reliably determine the position of any of them
Traveling words. There's code! https://arxiv.org/abs/2309.07315