Hacker News

bt1a · 01/21/2025 · 0 replies

How excellent for a quantized 27GB model (the Q6_K_L GGUF quantization type keeps 8 bits per weight for the embedding and output layers, since those are sensitive to quantization).
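
As a minimal sketch of the arithmetic behind that mixed-precision scheme: in llama.cpp, Q6_K stores about 6.5625 bits per weight (210 bytes per 256-weight block) and Q8_0 about 8.5 bits per weight (34 bytes per 32-weight block), both including per-block scale overhead. The parameter counts below are illustrative assumptions, not the layout of any particular model:

  # Rough file-size estimate for a Q6_K_L-style quant, where most
  # tensors use Q6_K but the token-embedding and output tensors
  # are kept at Q8_0.
  Q6_K_BPW = 6.5625   # effective bits/weight for Q6_K in llama.cpp
  Q8_0_BPW = 8.5      # effective bits/weight for Q8_0 in llama.cpp

  body_params = 25.4e9    # assumed: bulk of the weights, quantized to Q6_K
  embed_params = 1.6e9    # assumed: embedding + output weights, kept at Q8_0

  total_bits = body_params * Q6_K_BPW + embed_params * Q8_0_BPW
  print(f"~{total_bits / 8 / 1e9:.1f} GB")  # ~22.5 GB for these counts

For what it's worth, llama.cpp's llama-quantize tool exposes --output-tensor-type and --token-embedding-type overrides, which is how mixed quants of this kind are typically produced.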