Docs here: https://lexica.art/docs
General browsing is heavily dominated by portraits, though. I was wondering whether it would be worth having a face-detection flag on images so you could filter portraits in or out.
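To sketch the idea: if each record in the dataset carried a hypothetical `has_face` flag (populated offline by a face detector, e.g. OpenCV's Haar cascades), filtering portraits becomes a trivial query. The field name and records below are illustrative, not Lexica's actual schema.

```python
def filter_by_face(images, want_faces):
    """Return only the images whose `has_face` flag matches `want_faces`."""
    return [img for img in images if img["has_face"] == want_faces]

# Toy records; a real pipeline would set `has_face` with a face detector.
images = [
    {"id": "a1", "prompt": "portrait of an astronaut", "has_face": True},
    {"id": "b2", "prompt": "misty mountain valley", "has_face": False},
    {"id": "c3", "prompt": "queen in baroque dress", "has_face": True},
]

non_portraits = filter_by_face(images, want_faces=False)
print([img["id"] for img in non_portraits])  # → ['b2']
```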
How much does it cost to host it? Feels like hosting 500GB of images and serving them can't be cheap.
We also released a free API https://devapi.krea.ai/ if anyone wants to check it out.
It will soon have endpoints with custom image generation features.
https://generrated.com https://news.ycombinator.com/item?id=32824448
I've been steadily adding new prompts/images, and in fact today was the first time a set of user-submitted prompts were added.
I'm so pleased to know it's useful to others.
In fact, I thought for a while about how to potentially integrate it with Krea somehow, but I came up empty. If you have any ideas, please reach out via Twitter!
Great work.
I’d love to integrate our crawler with GitHub Actions and make it a self-updating dataset…
There’s so much stuff to do!
If you enjoy thinking about what the future of this field might look like, I highly recommend watching the interview between Yannic Kilcher and Sebastian Risi (https://www.youtube.com/watch?v=_7xpGve9QEE).
My mind was blown after hearing it. It had been a long time since I'd heard such an interesting conversation. It's crazy how well Risi's ideas correlate with the way complex systems emerge in nature (by optimizing locally), and the idea of self-organizing systems is just amazing.
Thanks!
Open source is the way to get the most out of this tech. We plan to keep building all the upcoming features at krea.ai in this way.
It's like how they say Da Vinci's was the last generation that could know 'everything', but now too much new information arrives in real time!
thanks for spotting the issue :)
It's the same as when I first saw gameplay of NFS: Most Wanted: it looked so realistic at the time, and now it absolutely does not.
This effect is amazing, don't know if it has a name though.
Lexica is a search engine (like krea.ai), but it doesn’t allow you to create collections or like generations.
Regarding the API, both have public APIs, although I'm not sure whether you can paginate through several pages of search results using the public Lexica API. With the Krea Prompts API, you can do cursor-based pagination.
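A minimal sketch of how cursor-based pagination works in general: each response returns a page of items plus a cursor token, and you keep requesting until the server stops returning one. The `fetch_page` function below is a stand-in for a real HTTP call; the endpoint shape, parameter names, and response fields are assumptions, not the actual Krea API.

```python
DATA = [f"prompt-{i}" for i in range(7)]  # fake server-side dataset

def fetch_page(cursor=0, limit=3):
    """Stand-in for e.g. GET /prompts?cursor=...&limit=... on a real API."""
    page = DATA[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(DATA) else None
    return {"items": page, "next_cursor": next_cursor}

def fetch_all():
    """Follow the cursor until the server returns no `next_cursor`."""
    items, cursor = [], 0
    while cursor is not None:
        resp = fetch_page(cursor)
        items.extend(resp["items"])
        cursor = resp["next_cursor"]
    return items

print(fetch_all())  # all 7 prompts, fetched 3 at a time
```

The nice property over offset-based pagination is that the cursor pins your position even if new items are inserted while you iterate.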
Finally, the Lexica API allows you to do CLIP-based search, while with Krea we are using PostgreSQL full-text search (for now). However, the code to do CLIP search over the dataset (including reverse image search) is in the repository.
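The core of CLIP-based search is cosine similarity between embedding vectors. Real embeddings would come from a CLIP model encoding prompts or images; the tiny 3-d vectors below are toy values, just to show the ranking mechanics.

```python
import numpy as np

def top_k(query, index, k=2):
    """Return indices of the k index vectors most similar to `query`."""
    index = index / np.linalg.norm(index, axis=1, keepdims=True)
    query = query / np.linalg.norm(query)
    scores = index @ query                  # cosine similarities
    return np.argsort(-scores)[:k].tolist()

index = np.array([
    [1.0, 0.0, 0.0],   # e.g. embedding of "portrait of a knight"
    [0.9, 0.1, 0.0],   # e.g. embedding of "portrait of a queen"
    [0.0, 0.0, 1.0],   # e.g. embedding of "satellite photo of a city"
])
query = np.array([1.0, 0.05, 0.0])
print(top_k(query, index))  # the two "portrait" vectors rank first
```

Reverse image search is the same operation with an image embedding as the query instead of a text embedding.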
(edit: also, neither Lexica nor any other search engine or similar product is offering the dataset, AFAIK.)
I could (probably naively) imagine that this would be the next step in making these models even more pleasing to humans. Or at least in creating a GPT-based "companion" model that would suggest, from an initial subpar prompt, a prompt yielding better results.
We may use the data from there to train custom models, similar to what Midjourney does: they ask people to rate images in exchange for GPU hours as prizes.
We haven’t thought deeply about it yet.
"A person has a glitchy face in this photo." "A person has a glitchy body in this photo." Etc.
And then train the AI to do a fixup pass.
For now, a workaround is to create your own "glitch" collection in krea.ai and store there images with artifacts.
If you end up doing it we will add a "download all" button right away :)
And all the prompts from each collection could also be added to Open Prompts for sure.
Fun to explore the prompts getting results similar to what I want. Great project.
The site uses Svelte + SvelteKit, and I couldn't find a great Masonry component (like Masonic for React) that lets me save and restore the scroll position easily. I can do it in hacky ways, but there are more important things to do.
I'm also still trying to figure out why the Back/Forward Cache is not working right away with my current implementation. It would make the site snappier and also address the issue you're bringing up.
Perhaps open-sourcing the code and figuring it out all together is the way…
But scraping data quickly is almost synonymous with a (D)DoS.
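One common way to scrape without hammering a server is to enforce a minimum delay between requests. The sketch below is a generic throttle, not anything from the Krea crawler; the 1-second interval is an arbitrary example of a polite rate, and the fetch call is a placeholder.

```python
import time

class Throttle:
    """Gate that sleeps so successive calls are at least `interval` s apart."""
    def __init__(self, interval=1.0):
        self.interval = interval
        self.last = None

    def wait(self):
        now = time.monotonic()
        if self.last is not None:
            remaining = self.interval - (now - self.last)
            if remaining > 0:
                time.sleep(remaining)
        self.last = time.monotonic()

throttle = Throttle(interval=1.0)
for url in ["https://example.com/page/1", "https://example.com/page/2"]:
    throttle.wait()
    # a real HTTP fetch of `url` would go here
```

Respecting robots.txt and backing off on 429/503 responses are the other halves of being a well-behaved crawler.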
In krea.ai, it doesn't say which version of the model was used to generate each image.
It appears that later versions are better at generating faces, or something like that. Like Stable Diffusion 1.5 vs. 1.4 (I'm not sure, but there's great variability nonetheless, and I wanted to know whether the model version accounted for it).
(Why aren't 1.4 images part of the dataset? Someone said they're public too.)
I now expect that the next GPT-3-like model will be multi-modal like Stable Diffusion, to better account for those semantic connections.
You also have access to all the different components that make up each prompt, and you can search for similar ones by clicking them.