You have no public statements or disclosures about your security capabilities or practices. How will you prevent an adversary from using your system to create deepfakes of other people? Do you validate identity? Could an attack target both a person's root identity records and a deepfake of them? Do you offer identity protection, or "LifeLock"-style legal protection?

I'll be curious to see how the first unintended use of your platform damages an individual's life, and how you respond. I would expect much more from your team here: evidence that this is an active topic of conversation, that mitigations are being developed, and documentation and guarantees. Don't kid yourself into thinking something like this won't happen on your platform... and please don't go around kidding lay people that it won't either...