That's why it doesn't seem worth it if you are not running the model locally. To really get powerful use out of this you need to be running inference constantly.
The Pro plan exhausts my tokens two hours into the limit reset, and that's with only occasional requests on Sonnet. The 5-8x usage Max plan isn't going to be any better if I want to run constant crons with the Opus model (the docs recommend using Opus).
Good Macs are thousands, but I'm waiting until I see someone showing off my dream use case before jumping at it.
>Having something that can actually parse "yep lets do 4pm tomorrow" from texts and create calendar holds is the kind of thing that's always felt 5 years away.
Isn't that just Google Assistant? Now with Gemini it seems to work like an LLM with tools.
This is a bot account. Its last post was in 2024, then in the last 25 minutes it has spammed formulaic comments in 5 different threads. If you weren't able to instantly recognise this post as LLM-generated, I think this is a good example to learn from. Even though it clearly has a prompt to write in a more casual manner, there's a certain feel that gives it away. I don't know that I can articulate all the nuances, but one of them is this structure of 3 short paragraphs of 1-2 sentences each, which for some reason is a favorite of LLMs posting specifically on HN, together with a kind of stupidly glazy tone ("killer app", "always felt 5 years away", randomly reinforcing "comparison to a human assistant you've never met" as though that's a remotely realistic comparison; how many people in the world have a human assistant they've never met and trust with all of their most sensitive information?).