We considered any significant loss of customer identity data and/or funds an existential threat to our customers and our institution.
I'm not coming to Google's defense, but fraud is a big, heavy, violent force in critical infrastructure.
And our phones are a compelling surface area for attacks and identity thefts.
Then also allow the kernel to run Linux as a process, so you can run whatever you like there, however you want.
It's technically possible at the device level. The hard part seems to be UX. Do you show trusted and untrusted apps alongside one another? How do you teach users the difference?
My piano teacher was recently scammed. The attackers took all the money in her bank account. As far as I could tell, they did it by convincing her to install some Android app on her phone and then grant that app accessibility permissions. That let the app remotely control other apps. They then simply swapped over to her banking app and transferred all the money out. It's tricky, because obviously we want 3rd party accessibility applications. But if those permissions allow applications to escape their sandbox, that's trouble.
(She contacted the bank and the police, and they managed to reverse the transactions and get her her money back. But she was a mess for a few days.)
And this almost certainly means that the bank took a fraud-related monetary loss. The regulatory framework that governs banks makes it difficult for them to refuse to return a customer's money on the grounds that it was actually your piano teacher's fault for being careless with her banking app on her smartphone (and even if it were legal to do so, doing it regularly would create a lot of bad press for the bank). And they're unlikely to recover the losses from the actual scammers.
Fraud losses are something that banks track internally and attempt to minimize when possible, so long as it doesn't trade off against other goals they have, such as maintaining regulatory compliance, or cost more money than the fraud does. This means that banks - really, any regulated financial institution at all that has a smartphone app - have a financial incentive to encourage Apple and Google to build functionality into their mass-market smartphone OSs that locks them down and makes it harder for attackers to scam ordinary, unsophisticated customers in this way. They have zero incentive to lobby to make smartphone platforms more open. And there are a lot more technically-unsophisticated users like your piano teacher than there are free-software enthusiasts who care about their smartphone OS provider not locking down the OS.
I think this is a bad thing, but then I'm personally a free-software enthusiast, not a technically-unsophisticated smartphone user.
This won't work. It's turtles all the way down and it will just end up back where we are now.
More software will demand installation in the sandboxed enclave. Outside the enclave the owner of the device would be able to exert control over the software. The software makers don't want the device owners exerting control of the software (for 'security', or anti-copyright infringement, or preventing advertising avoidance). The end user is the adversary as much as the scammer, if not more.
The problem at the root of this is the "right" some (entitled) developers / companies believe they have to control how end users run "their" software on devices that belong to the end users. If a developer wants that kind of control of the "experience", the software should run on a computer they own, using the end user's device as a "dumb terminal".
Those economics aren't as good, though. They'd have to pay for all their compute / storage / bandwidth, versus just using the end user's. So much cheaper to treat other people's devices like they're your own.
It's the same "privatize gains, socialize losses" story that's at the root of so many problems.
You also have so much grey area where things aren't actually illegal, such as gathering a massive amount of information on adults in the US via third party cookies and ubiquitous third party javascript.
That's why platforms created in the internet age are much more opinionated about what APIs they provide to apps, much more stringent on sandboxing, and try to push software installation onto app stores, which can restrict apps based on business policy to go beyond technological and legal limitations.
Did she make it through the non-Google-Play app install flow?
[1] https://en.wikipedia.org/wiki/Transaction_authentication_num... (This is a bit outdated; nowadays it works via QR codes instead of those flickering barcodes, but the concept stays the same)
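The point of such TAN schemes is that the code is bound to the specific transaction: the offline reader derives the TAN from the recipient and amount, so a phished code can't authorize a different transfer. A minimal sketch of that idea, assuming an HOTP-style truncation over an HMAC (the function names and the exact construction here are illustrative, not the real chipTAN algorithm):

```python
import hmac
import hashlib

def generate_tan(shared_secret: bytes, recipient_iban: str,
                 amount_cents: int, counter: int) -> str:
    """Derive a 6-digit TAN bound to the transaction details.

    A real scheme runs on a dedicated offline reader holding the card's
    secret; this is only a sketch of the transaction-binding concept.
    """
    message = f"{recipient_iban}|{amount_cents}|{counter}".encode()
    digest = hmac.new(shared_secret, message, hashlib.sha256).digest()
    # HOTP-style dynamic truncation down to a 6-digit code.
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# The bank recomputes the TAN from the same inputs and compares.
secret = b"card-secret"  # hypothetical per-card secret
tan = generate_tan(secret, "DE89370400440532013000", 150_00, counter=7)
assert tan == generate_tan(secret, "DE89370400440532013000", 150_00, counter=7)
# Changing the recipient or amount changes the message the HMAC covers,
# so the same TAN will (with overwhelming probability) no longer verify.
```

Because the code depends on the recipient and amount, malware that rewrites the transfer on a compromised phone still can't reuse the TAN the user generated for the transfer they actually saw on the reader's display.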
All the information and experience I ever got tells me this is security theater by institutions who try to distract from their atrocious security with some snake oil. But I'm willing to be convinced that there is more to it if presented with contradicting information. So I'm interested in your case.
How, in practice, did demanding control over your customers' devices and taking away their ability to run software of their choice reduce fraud, in quantifiable and attributable terms?
Those are based on APIs available from the mobile devices. Google and Apple can offer other means by which to secure these things, and to validate that the device hasn't been cracked and is submitting false attestations. But even a significant financial institution has no relationship with Apple on the dev side of things. Apple does what it decides to do, and the financial institution builds to what is available.
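The shape of such attestation checks, from the bank's side, is roughly: send the device a nonce, get back a platform-signed verdict, and verify the signature and claims on the backend. A simplified sketch, assuming a Play-Integrity-like flow (the token format is invented for illustration, and an HMAC stands in for the vendor's real public-key signature):

```python
import hmac
import hashlib
import json

# Hypothetical stand-in for the platform vendor's signing key. In reality
# the verdict is signed by Google/Apple and verified with their public keys.
PLATFORM_KEY = b"demo-platform-key"

def sign_verdict(nonce: str, package: str, device_ok: bool) -> dict:
    """Platform side: sign a verdict over the server-supplied nonce."""
    payload = json.dumps(
        {"nonce": nonce, "package": package, "deviceOk": device_ok},
        sort_keys=True)
    sig = hmac.new(PLATFORM_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_attestation(token: dict, expected_nonce: str,
                       expected_package: str) -> bool:
    """Bank backend: check the signature, then the claims."""
    expected_sig = hmac.new(PLATFORM_KEY, token["payload"].encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, token["sig"]):
        return False  # forged or tampered token
    claims = json.loads(token["payload"])
    return (claims["nonce"] == expected_nonce
            and claims["package"] == expected_package
            and claims["deviceOk"])

token = sign_verdict("nonce-123", "com.example.bank", device_ok=True)
assert verify_attestation(token, "nonce-123", "com.example.bank")
# A token replayed against a different nonce is rejected, which is what
# stops an attacker from reusing a verdict captured from a clean device.
assert not verify_attestation(token, "other-nonce", "com.example.bank")
```

This also illustrates the point above: the bank can only check whatever claims the platform chooses to put in the verdict; it builds to what Apple and Google make available.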
These controls work -- over time fraud and risk go down.