> Publishing research is pretty normal for traditional companies.
Published research is "open" to the extent that it is transparent, but it is not "open" in the sense that people can actually access and use it. Unless you are an AI researcher, half these papers (to be generous) might as well not exist.
My argument is that, from that perspective (the average person's ability to use it), academic research only gives the illusion of openness.
Not only that, but the training data is often (though not always) omitted from academic research. So reproducing their exact results is frequently out of reach without a significant investment in building your own collection of training data.
For example: Facebook and Google have both announced technology similar to OpenAI's, yet neither is usable out of the box (or at all, for practical purposes), whereas with OpenAI, despite it being "closed," I can get started in five minutes.
Contrast both of those with Stable Diffusion, which I think is miles ahead of DALL-E: its code and pre-trained weights are not just open, they are also very easy to use.