But Yann LeCun seems to think the safety problems of eventual AGI will be solved somehow.
Nobody is saying this model is AGI, obviously.
But it would be an entry point into researching one small sliver of the alignment problem. If you follow my thinking, it's odd that he professes confidence that AI safety is a non-issue, yet he seems to want no part in actually understanding it.
I realize his research interest may just be the optimization / mathy side of things… that's his prerogative, but it's odd imho.