I didn't mean to imply a conflict of interest. I'm wondering what product or service offering (or maybe a feature on their messaging app) prompted this.
No other major tech leaders are saying the quiet parts out loud, right? About the efficacy, the cost to build and operate, or the security and privacy nightmares created by the way we have adopted LLMs.
Maybe it's not about gaining something, but rather about not losing anything. Signal seems to operate from a kind of activist mindset, prioritizing privacy, security, and ethical responsibility, right? By warning about agentic AI, they're not necessarily seeking a direct benefit. Or maybe the benefit is appearing more genuine and principled, which is what attracted their userbase in the first place.
Whittaker's background is in AI research. She has been talking about the privacy implications of AI for a while now.
I'm not sure any one thing prompted it. But a big factor is the wide deployment of models on devices with access to private information (potentially including Signal's own data).