Hacker News

makapuf · today at 6:53 AM

AFAIK, you can't explicitly allocate cache the way you allocate RAM, however. It's a bit as if you could only work on files, and RAM were used as a cache. Maybe I'm mistaken? (Edit: typo)


Replies

KeplerBoy · today at 11:08 AM

You can in CUDA. It has shared memory, which is basically L1 cache that you have full control over. It's called shared memory because all threads within a block (which reside on a common SM) have fast access to it. The downside: you now have less regular L1 cache, since both are carved out of the same on-chip SRAM.
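A minimal sketch of what that looks like (the kernel name, the 256-thread block size, and the reduction pattern are my choices, not anything from the thread): the `__shared__` array below is an explicit, programmer-managed allocation in the same on-chip memory as L1.

```cuda
#include <cstdio>

// Block-wide sum using explicitly managed shared memory.
// The static __shared__ allocation carves 256 floats out of the
// SM's on-chip SRAM budget for each block of this kernel.
__global__ void blockSum(const float *in, float *out, int n) {
    __shared__ float tile[256];          // programmer-managed "L1"
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;

    tile[tid] = (i < n) ? in[i] : 0.0f;  // stage data on-chip once
    __syncthreads();                     // make the tile visible to all threads

    // Tree reduction entirely in shared memory: no further DRAM traffic.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) tile[tid] += tile[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = tile[0];
}
```

On recent architectures you can even hint how the SRAM is split between shared memory and regular L1, via `cudaFuncSetAttribute` with `cudaFuncAttributePreferredSharedMemoryCarveout`.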

lou1306 · today at 7:36 AM

You can't explicitly allocate cache, but you can lay things out in memory to minimize cache misses.
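The classic illustration of that point is loop order over a row-major matrix (this C++ sketch and its function names are mine, just to make the idea concrete): both functions compute the same sum, but one walks memory in storage order and the other does not.

```cpp
#include <vector>

// Sum an n x n row-major matrix walking memory in storage order, so each
// cache line fetched from DRAM is fully consumed before it is evicted.
float sum_row_major(const std::vector<float> &a, int n) {
    float sum = 0.0f;
    for (int r = 0; r < n; ++r)
        for (int c = 0; c < n; ++c)
            sum += a[r * n + c];      // stride-1 access: cache friendly
    return sum;
}

// Same result, loops swapped: stride-n accesses touch a new cache line
// on nearly every load, so the cache is thrashed for no benefit.
float sum_col_major(const std::vector<float> &a, int n) {
    float sum = 0.0f;
    for (int c = 0; c < n; ++c)
        for (int r = 0; r < n; ++r)
            sum += a[r * n + c];      // stride-n access: cache hostile
    return sum;
}
```

You never told the hardware anything about the cache; you just arranged the access pattern so the cache's own policy works in your favor.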
