Normally I wouldn’t think too hard about this, but two things come to mind here:
Firstly, the speed at which competitors are launching and developing new AI models. Imagen/Parti both seem to rival Stable Diffusion and DALL-E… why can’t we use them yet?
Secondly (and perhaps this is clouding my judgment), the fact that a Midjourney founder mentioned during an Office Hours session that [paraphrasing]: “it’s widely known in the industry that 90% of AI research is completely made-up garbage”…
You mean field-changing NLP models from Google like BERT [1]? Or the Transformer paper [2]? Or the T5 model [3] (used by a company doing ChatGPT-like search, currently on the front page of HN)?
1. https://arxiv.org/abs/1810.04805 code+models: https://github.com/google-research/bert
2. https://arxiv.org/abs/1706.03762
3. https://arxiv.org/abs/1910.10683 code+models: https://github.com/google-research/text-to-text-transfer-tra...
To add to your comment, Google has been using BERT to power Search since 2019: https://blog.google/products/search/search-language-understa...
I'm going to guess that the only reason they don't use larger models is the compute cost. ChatGPT at 4 billion users on today's hardware is an unsustainable business. That thought leads me to wonder: if Google offered a Search "Premium" powered by the latest LLMs, how much would people pay for it?
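To make the "unsustainable at 4 billion users" intuition concrete, here is a minimal back-of-envelope sketch. Every number except the 4 billion figure is an illustrative assumption, not a measured cost:

```python
# Back-of-envelope sketch of LLM serving cost at consumer-search scale.
# All constants below are assumed for illustration, not real figures.

COST_PER_1K_QUERIES_USD = 20.0   # assumed GPU cost to serve 1,000 chat queries
QUERIES_PER_USER_PER_DAY = 5     # assumed average usage per user
USERS = 4_000_000_000            # the 4 billion users from the comment above

daily_queries = USERS * QUERIES_PER_USER_PER_DAY
daily_cost = daily_queries / 1_000 * COST_PER_1K_QUERIES_USD

print(f"Daily queries: {daily_queries:,}")
print(f"Assumed daily serving cost: ${daily_cost:,.0f}")
```

Under these made-up numbers the serving bill lands at $400M per day, i.e. orders of magnitude above what ads on a free product could plausibly recover, which is the shape of the argument even if the real per-query cost is 10x lower.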
> Firstly, the speed at which competitors are launching and developing new AI models. Imagen/Parti both seem to rival Stable Diffusion and DALL-E… why can’t we use them yet?
Because Google doesn't want you to have access to them? Why do you feel like you're entitled to their internal research?
Google releases papers on robots [0] as well. Do you expect them to ship you a free robotic arm? Or give you the ML model for it?
0: https://ai.googleblog.com/2022/12/talking-to-robots-in-real-...
For example, as an AI researcher, I can't consider Imagen/Parti to be the state of the art if all we have are cherry-picked examples and we can't verify anything. For all practical intents and purposes, they are just vaporware, and the state of the art are models like Stable Diffusion or DALL-E.
Of course they are free to keep them that way, but they risk losing their reputation as AI/ML/NLP leaders.
Companies have often ignored newer tech because it eats into their existing business (Kodak and digital cameras, for example).
Google CAN'T adopt AI at this point, at least not without drastic changes to its revenue model.
I don't see why. They could still incorporate ads into the output of such a tool.
Google is full of people who, for want of a better word, are simply arrogant. They think the purpose of AI is for them to show off their skills and... that's it. At best they'd use it internally for selling you more ads; they don't seem to think other people are worthy of using the output of their efforts in any way, shape, or form.
OpenAI is full of boy scouts who think that AI should be carefully censored so that it represents black, brown, Asian, and white people equally. They deliberately skew the training data to enshrine wokeness into the product, while also trying to prevent anyone from using their models to generate anything vaguely like porn. Basically, they're digital Mormons. No fun.
Stability AI / Stable Diffusion is a bunch of people who had money thrown at them with no guardrails. Anything goes. Download our models and have fun! Make porn if you want to. Whatever.
To nobody's surprise, only the last of these is of any interest or use to the general public.
The sad part is that Google has the most resources to spend on training their models, yet its models are the least accessible.
It's like Tony Stark inventing cold fusion and then using it only to power his suit instead of... you know... changing the world for the better.
I don't think the papers they put out are there to "show off." They advance the entire field. Imagine if they had kept the Transformer paper in house: it's the architecture everyone uses now, basically the standard in AI, and the field wouldn't be anywhere close to where it is today. Also, I think it's a little ridiculous to assume Google would spend $100B over the past 10 years on AI research without expecting this stuff to show up in important products.
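For readers who haven't seen why that paper became the standard: its core operation, scaled dot-product attention, is only a few lines. A minimal NumPy sketch (single head, no masking or batching, random toy data):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core of 'Attention Is All You Need'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query/key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # weighted mix of values

# Toy example: 3 tokens with embedding dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per input token
```

Real Transformers stack many such heads with learned projections, but this is the piece that every BERT, T5, GPT, and diffusion text encoder inherits from that one paper.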
But as we enter 2023, OpenAI no longer requires approval before using GPT-3, and DALL-E 2 is publicly accessible.
Google has yet to release anything publicly… In light of Midjourney’s comments, it’s hard not to be a little suspicious.
And exactly where are the models? Where is the AI? BERT came out, what, five years ago?
It's certainly not in Search, given how unusable the results are. They're busy using models to structure everyone's content so they can display it on the Search page and capture value from content they didn't create.
Not one mind-blowing AI product from them. The results from OpenAI are exposing the FAANGs as has-beens.
Competitive Programming with AlphaCode - https://news.ycombinator.com/item?id=30179549 - Feb 2022 (397 comments)
Most of our time is spent thinking about what to write, not on the actual writing.
Also, nothing stops it from being voice input rather than typing.