[0]: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...
It's like saying how can evaluating 5 years of performance at work be better at predicting someone's competency than their SAT scores.
https://huggingface.co/papers/2306.05685
This paper makes the argument that...
"Our results reveal that strong LLM judges like GPT-4 can match both controlled and crowdsourced human preferences well, achieving over 80% agreement, the same level of agreement between humans. Hence, LLM-as-a-judge is a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain."
So, the Arena could theoretically be automated and achieve similar outcomes. Or at least, it could quickly compute a predicted Elo for every model, which would be interesting to compare against the human-rated outcomes.
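As a sketch of what that automation could look like: feed each pairwise judge verdict through the standard Elo update and see whether the resulting ranking tracks the human one. The model names and votes below are fabricated placeholders, not real Arena data:

```python
def elo_update(ra, rb, score_a, k=32):
    """Standard Elo update for one pairwise comparison.
    score_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
    ra_new = ra + k * (score_a - expected_a)
    rb_new = rb + k * ((1 - score_a) - (1 - expected_a))
    return ra_new, rb_new

# Hypothetical LLM-judge verdicts: (model_a, model_b, winner)
votes = [("model-a", "model-b", "model-a")] * 8 + [("model-a", "model-b", "model-b")] * 2
ratings = {"model-a": 1000.0, "model-b": 1000.0}
for a, b, winner in votes:
    s = 1.0 if winner == a else 0.0
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], s)
```

Because both players share the same K-factor, the update is zero-sum: rating points lost by one model are gained by the other.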
Phi-2 isn't fine-tuned for instruction following yet.
For example, consider my analysis [0] based on observing the progression of Large Language Models (LLMs) in a single text adventure.
[0] https://github.com/s-macke/AdventureAI#evaluation-of-other-m...
-Ask any question to two anonymous models (e.g., ChatGPT, Claude, Llama) and vote for the better one!
-You can continue chatting until you identify a winner.
-Your vote won’t be counted if a model’s identity is revealed during the conversation.
Do you really need more than this to know which one you’re going to pick? https://i.imgur.com/En37EJD.png
Avatar doesn’t have humans? Seriously?
I only ask technical (PyTorch) questions, though.
The Glicko rating system is very similar to Elo, but it also models the variance of a given rating. It can directly tell you a "rating deviation."
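For reference, here is a minimal single-game Glicko-1 update based on Glickman's published formulas, showing how the rating deviation (RD) shrinks as evidence accumulates. The numbers are purely illustrative:

```python
import math

Q = math.log(10) / 400  # Glicko-1 scaling constant

def g(rd):
    """Dampens the influence of an opponent whose own rating is uncertain."""
    return 1 / math.sqrt(1 + 3 * (Q * rd / math.pi) ** 2)

def glicko_update(r, rd, r_opp, rd_opp, score):
    """One-game Glicko-1 update; score is 1.0 win, 0.5 draw, 0.0 loss.
    Returns the new rating and the new rating deviation (RD)."""
    g_opp = g(rd_opp)
    e = 1 / (1 + 10 ** (-g_opp * (r - r_opp) / 400))
    d2 = 1 / (Q ** 2 * g_opp ** 2 * e * (1 - e))
    denom = 1 / rd ** 2 + 1 / d2
    r_new = r + (Q / denom) * g_opp * (score - e)
    rd_new = math.sqrt(1 / denom)
    return r_new, rd_new

# A 1500-rated player with high uncertainty (RD 200) beats a 1400-rated one
new_r, new_rd = glicko_update(1500, 200, 1400, 30, 1.0)
```

The RD drops after every game regardless of the result, which is exactly the "rating deviation" the system can report directly.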
https://www.reddit.com/r/LocalLLaMA/comments/17jrj82/new_mic...
Also very interesting that Mixtral 8x7B ranks in the same neighborhood as Gemini Pro/GPT 3.5 Turbo/Claude 2.1 while being fully open source and Apache 2.0 licensed.
If I were already using GCP, and they reduced their price (>10%) and offered tight integration with the rest of the GCP services, it would still be appealing.
Starling:

```
USER: list 5 largest capitals sorted by largest population first

AI: Here are some countries and their capitals in alphabetical order
Australia: Canberra
Belgium: Brussels
Canada: Ottawa
Finland: Helsinki
Italy: Rome
France: Paris
Germany: Berlin
Greece: Athens
Ireland: Dublin
…(lists 50 others)
```

GPT-3.5:

```
USER: list 5 largest capitals sorted by largest population first

ChatGPT: Certainly! To list the 5 largest capital cities by population, I'll provide you with the information as of my knowledge cutoff date in 2022. Please note that population figures may change over time, and it's a good idea to verify the latest data from reliable sources if needed. Here are the 5 largest capital cities sorted by population, starting with the largest:

Beijing, China: Beijing is the capital of China and has a population of over 21 million people as of 2022.
New Delhi, India: New Delhi is the capital of India and is part of the larger National Capital Territory of Delhi. It has a population of approximately 16.8 million people as of 2022.
Tokyo, Japan: Tokyo is the capital of Japan, and while the city itself is not officially designated as the capital, it serves as the de facto capital. The Tokyo metropolitan area has a population of around 14 million people.
Moscow, Russia: Moscow is the capital of Russia and has a population of over 12 million people.
Dhaka, Bangladesh: Dhaka is the capital of Bangladesh and has a population of approximately 8.9 million people.
```
(Submitted title was "Gemini Pro achieves accuracy slightly inferior to GPT 3.5 Turbo".)
If you want to say what you think is important about an article, that's fine, but do it by adding a comment to the thread. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
Someone described LLMs as “blurry JPEGs of the Internet”.
In that sense, maybe GPT 4 is as smart as the hive mind of the Internet gets, and newer models just take sharper pictures but of the same subject. Perhaps GPT 4 trained on one of the best subsets available and everything else is going to be worse or the same…
It’s curious that Sam Altman has publicly stated that OpenAI isn’t working on GPT 5. Why not? Is it because they know it’s a pointless exercise with the current training approaches?
Let's not forget where this breakthrough came from; I wouldn't count Google out.
Why do you say they will 'probably' do that? Do you have any information to back that up, or is this your speculation?
So one of these is true: Google lied, this eval has bugs, or they borked the deployment.