iPhones leaned into "computational photography" a long time ago. Eventually Apple added custom hardware to handle all the matrix multiplies efficiently, and exposed some of it to apps with an API called CoreML. They've kept adding features on top of it, like on-device photo tagging, voice recognition, and VR stuff.
Google was the leader in computational smartphone photography. They released their Night Sight mode before Samsung and Apple had anything competitive.
Sure, and you can run Stable Diffusion on normal Snapdragon SoCs, and there's a very hacky way to get llama.cpp running on a Pixel phone (https://twitter.com/thiteanish/status/1635678053853536256), but I haven't seen any good apps yet.