You don't need to compile it yourself though; prebuilt binaries are published on the releases page. The main exception is CUDA support on Linux, which isn't covered by the prebuilt releases, so you'd have to build that yourself:
https://github.com/ggml-org/llama.cpp/releases
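If you do end up needing a CUDA build on Linux, it's a standard CMake build. A minimal sketch, assuming the CUDA toolkit, CMake, and a C++ compiler are already installed (the `GGML_CUDA` flag is per the project's current build docs; older versions used `LLAMA_CUBLAS` instead):

```shell
# Clone the repo and configure a release build with CUDA enabled
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```

The resulting binaries (e.g. `llama-cli`, `llama-server`) land under `build/bin/`.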