epistasis · yesterday at 11:17 PM

Siri was also completely miscommunicated from the beginning. I could never get Siri to do what I wanted, because I didn't realize that it had a very strict and narrow menu, but it never communicated what that menu was, and had no way of saying "here are the 5 things you can tell me about." And then there were the network communication issues where you don't know why you're not getting a response, or if Siri is going to work at all.

Every few years I would try to use it for a few days, then quit in frustration at how useless it was. Accidentally activating Siri is a major frustration point of using Apple products for me.


Replies

thatjoeoverthr · today at 12:19 AM

In game design we used to call this kind of opacity “hunt the verb”, a term from text adventures.

All chat bots suffer this flaw.

GUIs solve it.

CLIs could be said to have it, but there is no invitation to guess, and no one pretends you don’t need the manual.
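
To make “hunt the verb” concrete, here is a toy sketch in Python (the verb set and messages are invented for illustration): the parser accepts a small fixed set of verbs but never reveals what they are, so the player is reduced to guessing.

    # Toy "hunt the verb" parser. Hypothetical example: the verb set and
    # messages are invented; the point is that the interface never reveals
    # which commands exist.
    KNOWN_VERBS = {"look", "take", "open", "go"}

    def parse(command: str) -> str:
        words = command.strip().lower().split()
        if words and words[0] in KNOWN_VERBS:
            return f"OK: {words[0]}"
        # The player gets no hint about what *would* have worked.
        return "I don't understand that."

    print(parse("examine door"))  # I don't understand that.
    print(parse("look"))          # OK: look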

highwaylights · yesterday at 11:35 PM

I get this pain with Apple in a bunch of different areas. The things they do well, they do better than anyone, but part of the design language is to never admit defeat, so very few of the interfaces will ever show you an error message of any kind. The silent failure modes everywhere get really frustrating.

I’m looking at you, Photos sync.

EDIT: just noticed this exact problem is on the front page in its own right (https://eclecticlight.co/2025/11/30/last-week-on-my-mac-losi...)

rdiddly · yesterday at 11:46 PM

Ironically, this manages to break all four of Apple's famous UI principles from Bruce Tognazzini: discoverability, transparency, feedback, and recovery.

npunt · yesterday at 11:26 PM

Yeah, it's a classic CLI v GUI blunder. If you don't know exactly what the commands are, the interface is not going to be particularly usable.

I've found I appreciate having Siri for a few things, but it's not good enough to make it something I reach for frequently. Once burned, twice shy.

stavros · today at 12:14 AM

This is just the conversational interface issue. You need the system to be able to do most of the things you would expect a human to be able to do (e.g. if you're talking to your phone, you'd expect it to handle most phone things). If the conversational system can only do a small subset of those, then it becomes a game of "discover the magical incantation that happens to be in the set of possibilities", which is an exercise in frustration.

This is why LLMs are the first conversational interface to actually have a chance of working, once you give them enough tools.
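
A minimal sketch of what "giving them tools" looks like, assuming the OpenAI-style tool-calling API (the set_alarm phone action is made up): the model is handed an explicit list of available actions up front, so free-form phrasing gets mapped onto a known action instead of silently falling outside a hidden menu.

    # Sketch only: assumes the openai Python client (>= 1.0) and invents a
    # set_alarm "phone action" tool for illustration.
    from openai import OpenAI

    client = OpenAI()
    tools = [{
        "type": "function",
        "function": {
            "name": "set_alarm",  # hypothetical phone action
            "description": "Set an alarm for a given time.",
            "parameters": {
                "type": "object",
                "properties": {
                    "time": {"type": "string", "description": "24-hour HH:MM"}
                },
                "required": ["time"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Wake me up at quarter past seven."}],
        tools=tools,
    )
    # Instead of "I don't understand", the model either answers in text or
    # emits a structured call like set_alarm(time="07:15").
    print(response.choices[0].message.tool_calls)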

TYPE_FASTER · today at 2:57 AM

For years I didn’t know that you could ask it to do things remotely over SSH.
