A bit more about the collaboration can be found here:
I'm curious when someone will run the right experiment, where an LLM on Cerebras does the reasoning so well, at such scale and speed, that it produces something genuinely novel.
That probably helped downloads.
I think I wasted my time reading it this time. Just my opinion.
Generally, I feel like all the AI models are about the same at this point. Grok on Twitter has the ability to access real-time event information, but the rest seem interchangeable at this point.
I pay for ChatGPT for higher usage limits, then use all the rest for different things in order to keep history for different topics separated (not because one is better than another in the smartness department).
https://lmarena.ai/?leaderboard
Do they not take part, or is the list not complete?
They are about the same speed. I'm probably travelling a well-worn road, so I'm hitting some equivalent of an LRU cache.
See https://chat.mistral.ai/chat/01a9ee32-a8fe-4305-8f74-a5af959... as an example.
Try the same on other chats with websearch.
For your amusement too: https://imgur.com/EgmQ0Ph
Here is me trying (and finally succeeding) to persuade Le Chat to generate an image using a filename as the prompt...
https://chat.mistral.ai/chat/9940f6bf-b2e5-4db2-bb64-adcbd9f...
I mean... "pretty please" as a debugging technique. I kind of do not look forward to my future conversations with my tea kettle and door knob.
Cute.