I worked on backend architecture and security at a bank, and I've posted this attestation here before: the sheer volume of fraud and fraud attempts across the whole network is astonishing. Our device fingerprinting and no-jailbreak rules weren't even close to an attempt at control. They were defense, driven by network volume and hard losses.
A significant loss of customer identity data or funds was considered an existential threat to our customers and to the institution.
I'm not coming to Google's defense, but fraud is a big, heavy, violent force in critical infrastructure.
And our phones are a compelling surface area for attacks and identity thefts.
Then don't issue an app. Issue people cards to pay with and let them come to the bank for weird transactions.
Yeah, I worked at a bank once. I was told that following policy, and using dependencies with known vulnerabilities so my ass was covered, was more important than actually making sure things were secure (getting that update through the layers of approval was someone else's problem!). Needless to say, I didn't last long.
How does preventing people from running software of their choice on their own device (what you call jailbreaking) prevent fraud in practice? It's a pretty strong claim you're making there. And it's being made frequently by institutions, yet I have never seen it actually explained and backed up with any real security model.
All the information and experience I've ever gotten tells me this is security theater by institutions trying to distract from their atrocious security with some snake oil. But I'm willing to be convinced there's more to it if presented with evidence to the contrary. So I'm interested in your case.
How did demanding control over your customers' devices and taking away their ability to run software of their choice reduce fraud in practice, in quantifiable and attributable terms?
Do you allow customers to log in to their account with a web browser on a Windows machine?
I wish we had technical solutions that offered both. For example, a kernel like seL4 that could directly run sandboxed applications, such as banking apps. Apps run this way could prove they're running in a sandbox.
Then also allow the kernel to run Linux as a process, and run whatever you like there, however you want.
It's technically possible at the device level. The hard part seems to be UX. Do you show trusted and untrusted apps alongside one another? How do you teach users the difference?
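To make the "prove they are running in a sandbox" part concrete, here's a minimal sketch of the remote-attestation idea: the trusted kernel signs a statement about what it's running, and the bank's server verifies it against a fresh nonce. Real schemes use asymmetric keys anchored in hardware (a TEE or secure element); the HMAC with a shared secret here is a stand-in for simplicity, and all the names (`kernel_attest`, `bank_verify`) are hypothetical.

```python
# Hypothetical attestation flow: kernel signs a claim, bank verifies it.
# HMAC with a shared key stands in for a hardware-backed asymmetric key.
import hmac, hashlib, os, json

KERNEL_KEY = os.urandom(32)  # in reality: a device key the verifier can check

def kernel_attest(app_id: str, nonce: bytes) -> dict:
    """The trusted kernel signs a claim that app_id runs in its sandbox."""
    claim = json.dumps({"app": app_id, "sandboxed": True, "nonce": nonce.hex()})
    tag = hmac.new(KERNEL_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def bank_verify(report: dict, nonce: bytes) -> bool:
    """The bank checks the signature and that its fresh nonce was echoed back."""
    expected = hmac.new(KERNEL_KEY, report["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["tag"]):
        return False  # claim was forged or tampered with
    return json.loads(report["claim"])["nonce"] == nonce.hex()

nonce = os.urandom(16)  # freshness: stops an attacker replaying an old report
report = kernel_attest("banking-app", nonce)
print(bank_verify(report, nonce))  # True
```

The point of the nonce is that even a validly signed report can't be replayed later, which is the same reason real attestation protocols (e.g. Android's Play Integrity challenge/response) make the server supply one.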
My piano teacher was recently scammed. The attackers took all the money in her bank account. As far as I could tell, they did it by convincing her to install some Android app on her phone and then grant that app accessibility permissions. That let the app remotely control other apps. Then they simply swapped over to her banking app and transferred all the money out. It's tricky, because obviously we want 3rd party accessibility applications. But if those permissions allow applications to escape their sandbox, it's trouble.
(She contacted the bank and the police, and they managed to reverse the transactions and get her her money back. But she was a mess for a few days.)