Hacker News

Show HN: Pocache, preemptive optimistic caching for Go

85 points by bnkamalesh · 10/11/2024 · 26 comments

Hey all, I recently published this Go package, and I'd like to show it off as well as get feedback!


Comments

serialx · 10/11/2024

PSA: You can also use singleflight [1] to solve this problem; it prevents the thundering herd. Pocache is an interesting alternative way to solve thundering herd indeed!

[1]: https://pkg.go.dev/golang.org/x/sync/singleflight

bww · 10/11/2024

You may be interested in Groupcache's method for filling caches; it solves the same problem that I believe this project is aimed at.

Groupcache has a similar goal of limiting the number of fetches required to fill a cache key to one—regardless of the number of concurrent requests for that key—but it doesn't try to speculatively fetch data, it just coordinates fetching so that all the routines attempting to query the same key make one fetch between them and share the same result.

https://github.com/golang/groupcache?tab=readme-ov-file#load...

pluto_modadic · 10/11/2024

So... if I initially got the key "foo" at time T=00:00:00, this library would re-query the backing system until T=00:01:00? Even if I re-query it at T=00:00:01? Versus being a write-through cache? I guess you're expecting other writers to go around the cache and update the DB behind your back.

If you are in that threshold window, why not a period where serving stale data is okay? For T=0-60 seconds, use the first query (don't retrigger a query); for T=60-120 seconds, use the first result but trigger a single DB query and switch to the new result; repeat until the key has been stale for 600 seconds.

That is, a minimum of 2 queries (the initial one that entered the key, plus the first preemptive one at 60 seconds), with the entry in the cache for 10 minutes total,

and a maximum of 11 queries over 10 minutes (the initial one, plus, if people ask for it once a minute, a preemptive one at the end of each of those minutes), for 20 minutes total in the cache.

tbiehn · 10/11/2024

Interesting idea - do you handle ‘dead keys’ as well? Let’s say you optimistically re-fetch a few times, but no client re-requests?

indulona · 10/11/2024

I have implemented my own SIEVE cache with TTL support. It solves all these issues and requires no background workers. The author, or anyone else interested in this, should read the SIEVE paper/website and implement their own.

sakshamhhf · 10/11/2024

Doesn't work.
