Hacker News

2ndorderthought · yesterday at 7:02 PM

Why are you running 2 instances anyway? If you want that workflow, just rent a few EC2 GPU instances and fire away.


Replies

vidarh · yesterday at 7:13 PM

If you're going to rent a few EC2 GPU instances, you might as well funnel things through OpenRouter. Not many of us have workflows where trusting an LLM provider is a problem but sending the data to EC2 is not.

As for why: why would you not? Sitting around waiting for a single assistant is an inefficient use of time; I tend to have more like 4-10 instances running in parallel.
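The fan-out workflow described here can be sketched with a thread pool, where each prompt goes to its own assistant concurrently. This is a minimal illustration, not vidarh's actual setup; `call_model` is a hypothetical stand-in for a real API call (e.g. an OpenAI-compatible chat completion against OpenRouter):

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; a real version
    # would POST the prompt to an OpenAI-compatible endpoint.
    return f"response to: {prompt}"

def fan_out(prompts: list[str], workers: int = 8) -> list[str]:
    # Dispatch all prompts concurrently instead of waiting on one
    # assistant at a time; results come back in input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(call_model, prompts))

results = fan_out(["task 1", "task 2", "task 3"])
```

Since the calls are I/O-bound network requests in practice, threads (or an async client) are enough; the wall-clock time is roughly that of the slowest single request rather than the sum of all of them.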
