"Open"
People may criticize Google because they don't release the weights or an API, but at least they publish papers, which allows the field to progress.
I agree, it is a bizarre world where the organization that launched as a not-for-profit called "OpenAI" is considerably less open than Google.
CLIP has been extremely influential and is still an impressive model.
Personally, I have found Whisper to be very impressive.
I didn't even see any news around the release of Flan-UL2, and I pay significantly more attention to machine learning than the average person. From what I can find about Flan-UL2, it seems somewhat interesting, but I don't know if I find it "an order of magnitude more impressive" than CLIP or Whisper. Certainly, they are completely different types of models, so it is hard to compare them.
If Flan-UL2 is as good as one Twitter account was hyping it up to be, then I'm surprised it hasn't been covered to the same extent as Meta's LLaMA. Flan-UL2 seems to have gotten a total of 3 upvotes on HN. But there is no shortage of hype in the world of ML models, so I take that Twitter account's report on Flan-UL2 with a (large) grain of salt. I'll definitely be looking around for more info on it.
A bit like that fictional janitor guy who said "just add more computers to make it better" before papers on unexpected emergent comprehension at scale started appearing.
Almost like trying to stop nuclear proliferation
The cat was arguably never in the bag.
Personally I don't really care about making nail bombs. But I do want the AI to help with things like: pirating or reproducing copyrighted material, obtaining an abortion or recreational drugs in places where it is illegal, producing sexually explicit content, writing fictional stories about nail bomb attacks, and providing viewpoints which are considered blasphemous or against the teachings of major world religions.
If there was a way to prevent AI from helping with things that are universally considered harmful (such as nail bomb attacks), without it being bound by arbitrary national laws, corporate policies, political correctness or religious morals, then MAYBE that would be worth considering. But I take what OpenAI is doing as proof that this is not possible, that allowing AI to be censored leads to a useless, lobotomized product that can't do anything interesting and restricts the average user, not just terrorists.
You want a blacklist of topics the search engine shouldn't retrieve/generate? Who's in control of this filter, and isn't it a juicy source of banned info all on its own?
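To make that concrete, here's a minimal sketch of what such a blacklist filter might look like (everything here is hypothetical: the topic strings, the naive substring matching, the function name). Note that the filter is itself a curated index of exactly the information it's meant to suppress, and trivial paraphrases slip right past it:

    # Hypothetical sketch of a topic blacklist -- not any real moderation
    # system. The list itself enumerates the "banned info" in question.
    BANNED_TOPICS = [
        "nail bomb construction",
        "synthesizing nerve agents",
    ]

    def is_blocked(query: str) -> bool:
        """Naive substring match against the blacklist."""
        q = query.lower()
        return any(topic in q for topic in BANNED_TOPICS)

    print(is_blocked("nail bomb construction tips"))  # True
    print(is_blocked("how do I build a nail bomb?"))  # False: same topic, different words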
Your wallet, that is.
Rather than getting caught up in the hype, notice that they're slowly closing off everything about themselves, now even in their research papers. At this point they hardly care, and it has nothing to do with 'AI ethics' or 'safety'.
This is yet another ClosedAI production, all done by Microsoft. Might as well call it the Microsoft® AI division.
Now we really need an open-source GPT-4 competitor. Clearly this is another attempt to pump their valuation and unload onto the public markets.
Good luck re-implementing this so-called 'Open' large multi-modal model.
Here was their manifesto when they first started: https://openai.com/blog/introducing-openai
> OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
> We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.
OpenAI as it exists right now contradicts basically every single thing they said they would be. I think that is a nontrivial issue!
Keeping the weights private is one thing, but withholding even the parameter count and architecture details? New low.