master-4f0154b
4f0154b0 · llama : support requantizing models instead of only allowing quantization from 16/32bit (#1691) · Jun 10, 2023
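
This release lets the quantize tool accept an already-quantized model as input, rather than requiring an F16/F32 source. A minimal sketch of what requantization looks like through the llama.cpp C API is shown below; the field and function names (`llama_model_quantize_params`, `allow_requantize`, `llama_model_quantize`, `llama_model_quantize_default_params`, `LLAMA_FTYPE_MOSTLY_Q4_0`) are taken from llama.h as I understand it and are not spelled out in this release note, so check them against the header shipped with this tag. The quantize CLI is believed to expose the same behavior via an `--allow-requantize` flag (also an assumption).

```cpp
// Sketch: requantize an existing quantized model to Q4_0 via the llama.cpp C API.
// Assumes llama.h exposes llama_model_quantize_params with an allow_requantize field,
// as this change suggests; verify names against the header for this release.
#include "llama.h"
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 3) {
        fprintf(stderr, "usage: %s <model-in.bin> <model-out.bin>\n", argv[0]);
        return 1;
    }

    llama_model_quantize_params params = llama_model_quantize_default_params();
    params.ftype            = LLAMA_FTYPE_MOSTLY_Q4_0; // target quantization type
    params.allow_requantize = true;                    // accept an already-quantized input
    params.nthread          = 4;                       // worker threads for quantization

    // returns 0 on success
    if (llama_model_quantize(argv[1], argv[2], &params) != 0) {
        fprintf(stderr, "quantization failed\n");
        return 1;
    }
    return 0;
}
```

Note that requantizing (e.g. Q8_0 to Q4_0) compounds the rounding error of both passes, so quality will generally be somewhat worse than quantizing directly from the original F16/F32 weights.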