What I really want from Anthropic, Gemini, and ChatGPT is for users to be able to log in with them, using their tokens. Then you can have open/free apps that don’t require the developer to track usage or burn through tons of tokens to demonstrate value.
Most users aren’t going to manage API keys, know what that even means, or accept the friction.
When you share an app you created in Google AI Studio, it uses quota from the logged-in user instead of your own quota.
We do this at OpenRouter, and many apps use exactly that pattern!
At some point the model providers will realize they don't need to provide apps, just enterprise-grade intelligence at scale in a pipe, much like utility companies providing electricity/water. Right now, they have to provide the apps to kick off adoption.
In some ways, that’s what MCP interfaces are kind of for. It just takes one extra step to add the MCP URL and go through OAuth.
I assume the fall-off there will be 99% of users, though, the way it works today.
But this theoretically allows multiple applications to plug into ChatGPT/Claude/Gemini and work together.
If someone adds Zillow and… Vanguard, your LLM can call both through MCP and help you plan a home purchase.
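To make the pattern concrete, here's a minimal, purely illustrative sketch of what the host app does when multiple MCP-style connectors are attached: each connected server contributes tools, and one router dispatches the model's tool calls to whichever server owns the tool. All names, tools, and numbers below are made up for the example; this is not any real SDK.

```python
class ToolServer:
    """Stands in for one connected MCP server (one app the user has OAuth'd)."""
    def __init__(self, name):
        self.name = name
        self.tools = {}

    def tool(self, fn):
        """Register a function as a callable tool on this server."""
        self.tools[fn.__name__] = fn
        return fn

# Two hypothetical connectors, echoing the example above.
zillow = ToolServer("zillow")
vanguard = ToolServer("vanguard")

@zillow.tool
def median_price(zip_code: str) -> int:
    # Canned data purely for the sketch.
    return {"94110": 1_400_000, "78704": 650_000}.get(zip_code, 500_000)

@vanguard.tool
def available_cash() -> int:
    return 180_000  # made-up balance

class Router:
    """The host's side: merges tool catalogs from every connected server
    so one chat session can draw on all of them."""
    def __init__(self, servers):
        self.index = {name: (s, fn)
                      for s in servers
                      for name, fn in s.tools.items()}

    def call(self, tool_name, **kwargs):
        server, fn = self.index[tool_name]
        return {"server": server.name, "result": fn(**kwargs)}

router = Router([zillow, vanguard])
price = router.call("median_price", zip_code="78704")
cash = router.call("available_cash")
# The model can now combine both results, e.g. cash as a % of price.
down_payment_pct = 100 * cash["result"] / price["result"]
```

The point is only the shape: each new OAuth connection adds tools to one shared index, so apps compose without knowing about each other.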
Won't they just eventually have a 'log in with OpenAI' button, similar to a 'log in with Google' button?
Maybe a 'connect with OpenAI' button so the service can charge a fee while allowing a bring-your-own-token hybrid.
This is close to how it works with shared apps in Google AI Studio.
The Foundation Models framework on iOS/macOS was found to contain dormant code for doing this via OpenAI. So they're experimenting with it and may make it available next year.
So basically oauth-style app connections. Makes sense.
https://x.com/steph_palazzolo/status/1978835849379725350