Bard w/ Gemini Pro isn't available in Europe and isn't multi-modal, https://support.google.com/bard/answer/14294096
No public stats on Gemini Pro. (Edit: I'm wrong. The Pro stats aren't on the website, but they're tucked into a paper: https://storage.googleapis.com/deepmind-media/gemini/gemini_...)
I feel this is overstated hype. There is no competitor to GPT-4 being released today. It would've been a much better look to release something available to most countries and with the advertised stats.
It's available in 174 countries.
Europe has gone to great lengths to make itself an incredibly hostile environment for online businesses to operate in. That's a fair choice, but don't blame Google for spending some extra time on compliance before launching there.
Basically the entire world, except countries that specifically targeted American Big Tech companies for increased regulation.
> Europe has gone to great lengths to make itself an incredibly hostile environment for online businesses to operate in.
This is such an understated point. I wonder if EU citizens feel well-served by e.g. the pop-up banners that afflict the global web as a result of their regulations[1]. Do they feel like the benefits they get are worth it? What would it take for that calculus to change?
1 - Yes, some say that technically these are not required. But even official organs of the EU such as https://europa.eu continue to use such banners.
I really wonder how changing the LLM underpinning a service would influence this (I thought compliance had to do with service behavior and data sharing across their platform, not the algorithm). And I wonder what Google is actually doing here that made them suspect they'd fail compliance once again, and why they did it anyway.
Laws are not the issue; their model being crap at non-English languages is.
That's your response? Ouch.
Google essentially claimed a novel approach: a natively multi-modal LLM, unlike OpenAI's non-native approach, and according to them this has the potential to further advance the LLM state of the art.
They have also backed up their claims in a paper for the world to see, and the results for the Ultra version of Gemini are encouraging, losing only on the sentence-completion dataset to GPT-4. Remember that Gemini's native multi-modal approach has just started and has only reached version 1.0. Imagine it at version 4, where GPT is now. Competition is always good, desperate or not, because in the end the users win.
Also I guess I don’t see it as critical that it’s a big leap. It’s more like “That’s a nice model you came up with, you must have worked real hard on it. Oh look, my team can do that too.”
Good for recruiting too. You can work on world class AI at an org that is stable and reliable.
You know those stats they're quoting for beating GPT-4 and humans? (both are barely beaten)
They're doing K = 32 chain of thought. That means running an _entire self-talk conversation 32 times_.
Source: https://storage.googleapis.com/deepmind-media/gemini/gemini_..., section 5.1.1 paragraph 2
It screams desperation to be seen as ahead of OpenAI.
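For anyone unfamiliar with what K = 32 chain of thought means in practice, here is a minimal sketch of the idea: sample K full reasoning chains and majority-vote on their final answers. (This is plain self-consistency voting; the paper's actual "uncertainty-routed" variant also falls back to greedy decoding when consensus is low. `make_stub_model` below is a made-up deterministic stand-in for a sampled LLM, not a real API.)

```python
from collections import Counter

def cot_at_k(model, prompt, k=32):
    """Sample k independent chain-of-thought completions and
    majority-vote on their final answers."""
    chains = [model(prompt) for _ in range(k)]
    # Take the last line of each chain as that chain's final answer.
    finals = [chain.splitlines()[-1].strip() for chain in chains]
    return Counter(finals).most_common(1)[0][0]

def make_stub_model():
    """Deterministic stand-in for a sampled LLM: every 4th chain
    ends with a wrong final answer, the rest end with '42'."""
    state = {"calls": 0}
    def stub_model(prompt):
        state["calls"] += 1
        final = "41" if state["calls"] % 4 == 0 else "42"
        return f"step 1: restate the question\nstep 2: compute\n{final}"
    return stub_model

print(cot_at_k(make_stub_model(), "What is 6 * 7?", k=32))  # 42
```

The point of the criticism above: each benchmark question costs 32 full generations, so it's not an apples-to-apples comparison with a single-pass GPT-4 number.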
Litigation is probably inescapable. I'm sure they want to be on solid footing.
Would you mind elaborating on this?
Like how are you "searching" with ChatGPT?
- countries with digital partnerships with the EU, where the DMA or very similar regulation is in effect or may soon take effect (e.g. Canada, Switzerland).
- countries where US companies are limited in providing advanced AI tech (China)
- countries where US companies are barred from trading, or where trade is extremely limited (Russia). Also note the absence of Iran, Afghanistan, Syria, North Korea, etc.
See disposable income per capita (in PPP dollars): https://en.m.wikipedia.org/wiki/Disposable_household_and_per...
Of the three answers Bard (Gemini Pro) gave, none worked, and the last two did not compile.
GPT-4 Turbo gave the correct answer the first time.
I agree that it is overstated. Gemini Ultra is supposed to be better than GPT-4, and Pro is supposed to be Google's equivalent of GPT-4 Turbo, but it clearly isn't.