Hacker News

koyote · today at 1:26 AM · 3 replies

Are there any details on how the round trip and exchange of data (CPU<->GPU) are implemented so that they aren't a big (partially hidden) performance hit?

E.g. this code seems like it would run entirely on the CPU?

    print!("Enter your name: ");
    let _ = std::io::stdout().flush();
    let mut name = String::new();
    std::io::stdin().read_line(&mut name).unwrap();
But what if we concatenated a number that was calculated on the GPU to that string, or if we took a number as input:

    print!("Enter a number: ");
    [...] // string number has to be converted to a float and sent to the GPU
    // Some calculations with that number performed on the GPU
    print!("The result is: " + &the_result.to_string()); // Number needs to be sent back to the CPU

Or maybe I am misunderstanding how this is supposed to work?

Replies

kig · today at 3:12 AM

"We leverage APIs like CUDA streams to avoid blocking the GPU while the host processes requests.", so I'm guessing it would let the other GPU threads go about their lives while that one waits for the ACK from the CPU.

I once wrote a prototype async IO runtime for GLSL (https://github.com/kig/glslscript), it used a shared memory buffer and spinlocks. The GPU would write "hey do this" into the IO buffer, then go about doing other stuff until it needed the results, and spinlock to wait for the results to arrive from the CPU. I remember this being a total pain, as you need to be aware of how PCIe DMA works on some level: having your spinlock int written to doesn't mean that the rest of the memory write has finished.
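Roughly, the mailbox looked like this; here is a sketch in host-side Rust rather than GLSL, with made-up field names rather than glslscript's actual buffer layout, just to show where the ordering problem bites:

    use std::sync::atomic::{AtomicU32, Ordering};

    // Hypothetical mailbox shared between GPU and CPU; the field names are
    // illustrative, not the real glslscript layout.
    #[repr(C)]
    struct IoMailbox {
        request: [u8; 256],      // request payload written by the GPU thread
        request_len: u32,
        request_flag: AtomicU32, // GPU sets to 1 once the payload is complete
        done_flag: AtomicU32,    // CPU sets to 1 once results are written back
    }

    // "GPU side": publish a request, go do other work, later spin for the answer.
    fn submit_request(mb: &IoMailbox) {
        // ... copy the request bytes into mb.request first (omitted) ...
        // The flag must not become visible before the payload. A release store
        // orders this within one coherent address space; across PCIe/DMA it may
        // not be enough, which is the "spinlock int arrived but the rest of the
        // write didn't" pitfall described above.
        mb.request_flag.store(1, Ordering::Release);
    }

    fn wait_for_result(mb: &IoMailbox) {
        // Spin until the CPU signals completion; acquire ordering makes the
        // result bytes visible only after done_flag is observed as 1.
        while mb.done_flag.load(Ordering::Acquire) == 0 {
            std::hint::spin_loop();
        }
    }

    fn main() {
        let mb = IoMailbox {
            request: [0; 256],
            request_len: 0,
            request_flag: AtomicU32::new(0),
            done_flag: AtomicU32::new(0),
        };
        submit_request(&mb);
        // Pretend the CPU service thread has answered, so the spin terminates.
        mb.done_flag.store(1, Ordering::Release);
        wait_for_result(&mb);
    }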

LegNeato · today at 3:46 AM

We use the CUDA device allocator for allocations on the GPU, via Rust's default allocator.
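Conceptually that is the standard #[global_allocator] hook; a minimal sketch, with the system allocator standing in for the device allocator (the real wiring may differ):

    use std::alloc::{GlobalAlloc, Layout, System};

    // Sketch: route Rust's default allocator through a custom GlobalAlloc so
    // that String, Vec, Box, etc. transparently allocate from a device heap.
    // `System` stands in here for the actual CUDA device allocator.
    struct DeviceAllocator;

    unsafe impl GlobalAlloc for DeviceAllocator {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            // In device code this would call into the CUDA device allocator.
            System.alloc(layout)
        }
        unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
            System.dealloc(ptr, layout)
        }
    }

    #[global_allocator]
    static GLOBAL: DeviceAllocator = DeviceAllocator;

    fn main() {
        // Ordinary allocations now go through DeviceAllocator::alloc.
        let name = String::from("allocated via the custom allocator");
        println!("{}", name);
    }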

zozbot234 · today at 1:43 AM

Why are you assuming that this is intended to be performant, compared to code that properly segregates the CPU and GPU sides? It seems clear to me that the latter will be a win.