This is a valid concern, but we’ve always been very serious about consent and privacy. Our models cannot be used without explicit verbal/visual consent and you hold the keys to your clone.
> you hold the keys to your clone.
Can I run it on my computer?
If it doesn't run on my computer, what keys are you talking about? Cryptographic keys? It would be interesting to see an AI agent run under fully homomorphic encryption if the overhead weren't so huge - it would stop cloud companies from holding so much intimate, personal data about all sorts of people.
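To sketch what that could look like (a toy example only; I'm assuming the TenSEAL library with the CKKS scheme here, and the "voice embedding" and linear scoring step are made up - real model inference under FHE is orders of magnitude slower than this):

```python
# Minimal FHE sketch with TenSEAL (CKKS): the client keeps the secret
# key; the server only ever sees ciphertexts it computes on blindly.
import tenseal as ts

# --- client side: create context; the secret key never leaves here ---
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()
# In a real client/server split you'd ship only a public copy, e.g.
# context.serialize(save_secret_key=False)

# Encrypt the intimate, personal data before it leaves the machine.
voice_embedding = [0.12, -0.87, 0.33, 0.05]  # made-up values
enc_embedding = ts.ckks_vector(context, voice_embedding)

# --- server side: compute on the ciphertext without decrypting ---
# A toy linear layer with server-owned (plaintext) weights.
weights = [0.5, -1.0, 2.0, 0.25]
enc_score = enc_embedding.dot(weights)

# --- client side: only the secret-key holder can read the result ---
score = enc_score.decrypt()  # CKKS is approximate; expect small error
print(score)
```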
No way I'm going to trust a small company/startup (move fast, break things) with this. Especially in the US.
Probably the phrase "you hold the keys to your clone" should give anyone pause.
I once worked at a company where the head of security gave a talk to every incoming technical staff member and the gist was, "You can't trust anyone who says they take privacy seriously. You must be paranoid at all times." When you've been around the block enough times, you realize they were right.
You can guarantee you won't be hacked? You can guarantee that if the company becomes massively successful, you won't start selling data to third parties ten years down the road?
Does the end user optionally get something like a big safetensors file of their own digital twin?
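For what it's worth, that could be as simple as the vendor writing the user's fine-tuned clone weights to one portable file (a sketch only; the tensor names and shapes are made up for illustration):

```python
# Hypothetical export of "your twin" in the safetensors format.
import torch
from safetensors.torch import save_file, load_file

# Vendor side: dump the user's clone weights to a single file.
clone_weights = {
    "voice_encoder.weight": torch.randn(256, 80),
    "face_decoder.weight": torch.randn(512, 512),
}
save_file(clone_weights, "my_digital_twin.safetensors")

# User side: the file is self-contained and loadable offline.
restored = load_file("my_digital_twin.safetensors")
print(restored.keys())
```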
And you promise to never get acquired right?
> we’ve always been very serious about consent and privacy.
That's quite a commitment, guys, I am sold
/s
No snark intended... if you're making it much easier to clone people verbally and visually, why would I feel confident that the verbal/visual consent you accept actually came from "me"?