Friday, September 13, 2024

GGUF Quantization with Imatrix and K-Quantization to Run LLMs on Your CPU

Fast and accurate GGUF models on your CPU

GGUF is a binary file format designed for efficient storage and fast loading of large language models (LLMs) with GGML, a C-based tensor library for machine learning.

GGUF encapsulates all the components needed for inference, including the tokenizer and code, within a single file. It supports the conversion of various language models, such as Llama 3, Phi, and Qwen2. It also facilitates model quantization to lower precisions to improve speed and memory efficiency on CPUs.
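Because everything needed for inference ships in one file, a quantized GGUF model can be run on a CPU with very little setup. As a minimal sketch, assuming the llama-cpp-python bindings and a hypothetical quantized file name, loading and prompting such a model could look like this:

```python
# Minimal sketch: CPU inference from a single GGUF file with llama-cpp-python.
# The model path and parameters below are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-9b-it-Q4_K_M.gguf",  # hypothetical quantized GGUF file
    n_ctx=4096,    # context window size
    n_threads=8,   # CPU threads used for inference
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```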

We often write “GGUF quantization,” but GGUF itself is just a file format, not a quantization method. Several quantization algorithms are implemented in llama.cpp to reduce the model size and serialize the resulting model in the GGUF format, as sketched below.
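As a rough sketch of that pipeline, assuming a recent llama.cpp build (script and binary names have changed across versions) and placeholder file names, a checkpoint is first converted to a 16-bit GGUF file and then re-serialized with a K-quant type such as Q4_K_M:

```python
# Sketch: convert a local Hugging Face checkpoint to GGUF, then apply a K-quant type.
# Script/binary names follow recent llama.cpp builds; paths are placeholders.
import subprocess

# Convert the original checkpoint (local directory) to a 16-bit GGUF file.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", "./gemma-2-9b-it",
     "--outfile", "gemma-2-9b-it-f16.gguf", "--outtype", "f16"],
    check=True,
)

# Quantize to 4 bits with the Q4_K_M K-quant type (no importance matrix yet).
subprocess.run(
    ["./llama-quantize", "gemma-2-9b-it-f16.gguf",
     "gemma-2-9b-it-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```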

In this article, we will see how to accurately quantize an LLM and convert it to GGUF, using an importance matrix (imatrix) and the K-Quantization method. I provide the GGUF conversion code for Gemma 2 Instruct, using an imatrix. It works the same with other models supported by llama.cpp: Qwen2, Llama 3, Phi-3, and so on. We will also see how to evaluate the accuracy of the quantization and the inference throughput of the resulting models.
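Before getting into the details, here is a rough sketch of the imatrix and evaluation steps, again assuming recent llama.cpp binaries and placeholder file names: the importance matrix is computed from a calibration text file, passed to llama-quantize, and the resulting model is checked with llama-perplexity and llama-bench.

```python
# Sketch: imatrix-guided K-quantization and evaluation with llama.cpp tools.
# Binary names and flags reflect recent llama.cpp builds; file names are placeholders.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Compute an importance matrix from a calibration dataset (plain text file).
run(["./llama-imatrix",
     "-m", "gemma-2-9b-it-f16.gguf",
     "-f", "calibration.txt",
     "-o", "imatrix.dat"])

# 2. Quantize with a K-quant type, this time guided by the importance matrix.
run(["./llama-quantize", "--imatrix", "imatrix.dat",
     "gemma-2-9b-it-f16.gguf",
     "gemma-2-9b-it-Q4_K_M-imatrix.gguf", "Q4_K_M"])

# 3. Evaluate quantization accuracy (perplexity) and inference throughput.
run(["./llama-perplexity", "-m", "gemma-2-9b-it-Q4_K_M-imatrix.gguf", "-f", "wiki.test.raw"])
run(["./llama-bench", "-m", "gemma-2-9b-it-Q4_K_M-imatrix.gguf"])
```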


