I work for a finance firm and everyone is wondering why we can store reams of client data with SaaS Company X, but not upload a trust document or tax return to AI SaaS Company Y.
My argument is that we're in the Wild West with AI: this stuff is being built so fast, with so many evolving tools, that corners are being cut even when the builders don't realize it.
This article demonstrates that, but it does raise the question of why you'd trust one and not the other when they both promise the same safeguards.
Does SaaS X/Cloud offer IAM capabilities? Or, going further, do they dogfood their own access through those same identity and access policies? If so, and you can construct your own access policy, you have relative peace of mind (a rough sketch of what that might look like is below).
If SaaS Y just says "Give me your data and it will be secure", that's where it gets suspect.
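To make the earlier point about constructing your own access policy concrete, here is a rough sketch of what that could look like if SaaS X happens to sit on an AWS-style cloud. The bucket name, prefix, and policy shape are hypothetical illustrations, not anything SaaS X actually exposes; the point is that you, not the vendor, define what their integration is allowed to touch.

    import json

    # Hypothetical AWS-style IAM policy: the vendor's role can read objects
    # under one prefix of one bucket and nothing else (IAM denies anything
    # not explicitly allowed). Bucket and prefix names are made up.
    vendor_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VendorReadOnlyClientDocs",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::example-client-docs/vendor-x/*",
            }
        ],
    }

    # You'd attach this to the role the vendor assumes, e.g.
    # boto3.client("iam").create_policy(PolicyName="vendor-x-read-only",
    #                                   PolicyDocument=json.dumps(vendor_policy))
    print(json.dumps(vendor_policy, indent=2))

The narrower you can scope that resource, the less you have to take the vendor's word for anything.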
The question is what reason did you have to trust SaaS Company X in the first place?
Using AI vs. not-AI as your litmus test is giving you a false sense of security. It's ALL Wild West.
It doesn't sound like your firm does any due diligence that would actually prevent you from buying from a vendor with security flaws.
> My argument is that we're in the Wild West with AI: this stuff is being built so fast, with so many evolving tools, that corners are being cut even when the builders don't realize it.
The funny thing is that this exploit (from the OP) has nothing to do with AI and could have happened with <insert any SaaS company> that integrates with another service.
And nobody seems to pay attention to the fact that modern copiers cache copies on a local disk; if the machines are leased and swapped out, the next party that takes possession has access to those copies unless someone bothered to wipe the disk.
FWIW this company was founded in 2014 and appears to have added LLM-powered features relatively recently: https://www.reuters.com/legal/transactional/legal-tech-compa...