totally agree. we're not trying to replace deep CUDA knowledge :) just wanted to skip the constant guess-and-check.
every time we generate a kernel, we profile it on real GPUs (serverless) so you can see how it actually runs on specific architectures. it's not just "trust the code": we show you what it does. still early, but it's helping people move faster.
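to make "profile it on real GPUs" a bit more concrete, the per-kernel timing pass is roughly this shape. this is a simplified sketch with a placeholder saxpy kernel, not our generated code or the full harness:

```cuda
// Minimal sketch of a per-kernel timing pass using CUDA events.
// saxpy here is a stand-in; the point is the warm-up + timed launch
// and tying the measurement to the GPU it ran on.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    dim3 block(256), grid((n + block.x - 1) / block.x);

    // warm-up launch so the timed run isn't paying one-time costs
    saxpy<<<grid, block>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    saxpy<<<grid, block>>>(n, 2.0f, x, y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    // report the time alongside the architecture it was measured on
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s: %.3f ms\n", prop.name, ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```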
Btw, I'm not talking about deep CUDA knowledge. That takes years. I'm specifically talking about novices, the kind of knowledge you get from a few weeks. I'd be quite hesitant to call someone an expert in a topic when they have less than a few years of experience. There are exceptions, but expertise isn't quickly gained. Hell, you could have years of experience, but if all you did was read Medium blogs and Stack Overflow you'd probably still be a novice.
I get that you profile, and I liked that part. But as the other commenter says, it's unclear how to evaluate the tool given the examples. Showing some real examples would be critical to sell people on this. Idk, maybe some people buy blindly, but personally I'd be worried about integrating significant tech debt. It's easy to do that with kernels, or anytime you're close to the metal. The nuances dominate these domains.