Hacker News

happyopossum | last Wednesday at 7:32 PM

> Imagine, you have a very small weak model, and you have to wait 20 seconds for your request.

For your first request, after having scaled to 0 while it wasn’t in use. For a lot of use cases, that sounds great.


Replies

steren | last Wednesday at 8:21 PM

Also, a GPU instance needs about 5 s to start. The rest depends on how large the model is, so a "very small weak model" can load much faster than 20 s.
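The arithmetic here can be sketched as boot time plus weight-loading time; the boot time and load bandwidth below are illustrative assumptions, not measured figures:

```python
# Rough cold-start estimate for a scale-to-zero GPU service.
# All numbers are illustrative assumptions, not measurements.

def cold_start_seconds(model_gb: float,
                       instance_boot_s: float = 5.0,
                       load_gbps: float = 2.0) -> float:
    """Instance boot time plus time to stream model weights into memory."""
    return instance_boot_s + model_gb / load_gbps

# A small model adds little on top of the ~5 s boot;
# a large model dominates the wait.
print(cold_start_seconds(1.0))   # ~1 GB of weights
print(cold_start_seconds(70.0))  # ~70 GB of weights
```

Under these assumptions, a 1 GB model cold-starts in about 5.5 s, well under the 20 s figure, while a 70 GB model would take roughly 40 s, dominated by loading rather than instance boot.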