To run Llama 3.1 8B locally at full FP16 precision, you need a GPU with at least 16 GB of VRAM, such as an NVIDIA RTX 3090 (24 GB).
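For a rough sense of where that number comes from: at FP16 the weights alone cost about 2 bytes per parameter, so roughly 15 GB for the published 8.03B parameter count, before KV cache and activation overhead. A minimal back-of-the-envelope sketch (the quantization byte counts are approximations):

```python
# Back-of-the-envelope VRAM estimate for dense transformer weights.
def weight_vram_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Memory for the weights alone; KV cache and activations add overhead."""
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

# Llama 3.1 8B has ~8.03B parameters.
for precision, nbytes in [("FP16", 2.0), ("INT8", 1.0), ("Q4", 0.5)]:
    print(f"Llama 3.1 8B @ {precision}: ~{weight_vram_gb(8.03, nbytes):.1f} GB")
```

Quantized variants (INT8, Q4) fit in far less VRAM, which is why the 16 GB figure only applies if you insist on full precision.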
Talas promises 10x higher throughput at 10x lower cost and 10x less electricity.
Looks like a good value proposition.
What do you do with 8B models? They can't even reliably create a .txt file or do any kind of tool calling.
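For what it's worth, wiring an 8B model up to tool calling is straightforward; reliability is the real question. A minimal sketch using the ollama Python client (assumes a local Ollama server with the llama3.1:8b model pulled; get_weather is a hypothetical tool, and the response shape follows recent versions of the library):

```python
import ollama

# Declare a hypothetical tool the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

response = ollama.chat(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# An 8B model may or may not emit a well-formed tool call here;
# that inconsistency is exactly the reliability complaint above.
for call in response.message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```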