Hacker News

embedding-shape (last Thursday at 9:35 PM)

Not all models are trained to follow long one-shot tasks on their own; many of them seem to prefer closer interaction with the user. You could always add another layer/abstraction above/below to work around it.


Replies

fastball (last Friday at 12:38 AM)

Can't this just be a Ralph Wiggum loop (i.e. while True)?
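
A minimal sketch of that idea in Python; run_agent_once and the "DONE" completion check are hypothetical placeholders for whatever agent call and stop signal you actually use, not any particular library's API:

    import time

    TASK_PROMPT = "Refactor the module and make the tests pass."

    def run_agent_once(prompt: str) -> str:
        """Hypothetical stand-in: run one bounded agent session, return its final message."""
        raise NotImplementedError("plug in your model/agent call here")

    # The "Ralph Wiggum loop": keep re-running the same task until the agent
    # says it is finished, instead of relying on long one-shot task following.
    while True:
        result = run_agent_once(TASK_PROMPT)
        if "DONE" in result:   # agent marks completion in its final message
            break
        time.sleep(1)          # small pause between attempts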