RFC: Reducing Download Traffic and Latency with ZipNN Lossless Compression for AI Models #34737
Labels: Discussion, Feature request
This RFC proposes integrating ZipNN, a lossless compression method, into Hugging Face Transformers to reduce the latency and traffic of model downloads. ZipNN is designed specifically for AI models and reduces model size by 17% to over 50%, depending on the model format and compressibility. Because its decompression is fast, compressed models are ready for use almost immediately, with no impact on model accuracy.
Motivation
According to an August 2024 LinkedIn post by Julien Chaumond, Hugging Face hosts 1.3M models with a cumulative storage footprint of 12 PB, and serves 1 billion daily requests, amounting to roughly 6 PB of network bandwidth per day!
Downloading large models from Hugging Face can be time-consuming; for example, downloading a model like Llama-3.1-405B can take nearly a day on a 10 MB/s home connection or nearly 2 hours on a 125 MB/s high-bandwidth connection. ZipNN could reduce this time by up to 33%.
Model Comparison Table
We took the 20 most downloaded models on Hugging Face as of late October 2024 (compression measured on 1 GB taken from the middle of each model).
Your contribution
ZipNN
ZipNN (the NN stands for Neural Network) is a lossless compression library tailored to neural networks. ZipNN compresses models by targeting the skewed distribution of the exponent bits in floating-point parameters, which is highly compressible. By isolating the exponents and applying entropy coding with Huffman codes, ZipNN achieves efficient compression without the overhead of multi-byte repetition algorithms such as Lempel-Ziv. It further optimizes speed by skipping non-compressible segments and adapting its strategy to the model's characteristics.
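The exponent-skew observation can be illustrated with a small stdlib-only sketch (this is not ZipNN's implementation, just a demonstration of the idea): splitting float32 data into per-byte-position planes isolates the skewed sign/exponent byte, which a generic entropy coder then compresses far better than the interleaved stream.

```python
import random
import struct
import zlib

# Generate pseudo-random "parameters" in [-1, 1) and pack them as
# little-endian float32, mimicking a tensor's raw byte stream.
random.seed(0)
params = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
raw = struct.pack(f"<{len(params)}f", *params)

# Naive approach: run a generic compressor over the interleaved bytes.
naive = zlib.compress(raw, 9)

# Byte grouping: split the stream into four planes, one per byte position
# inside each float32. The high plane holds the sign and exponent bits,
# whose distribution is heavily skewed and therefore easy to entropy-code;
# the mantissa planes are near-random and barely compress at all.
planes = [raw[i::4] for i in range(4)]
grouped = sum(len(zlib.compress(p, 9)) for p in planes)

print(f"raw: {len(raw)}  naive: {len(naive)}  grouped: {grouped}")
```

On this synthetic data the grouped total comes out noticeably smaller than the naive compressed size, with essentially all of the gain coming from the exponent plane.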
ZipNN Repository Link
ZipNN arXiv Paper: ZIPNN: LOSSLESS COMPRESSION FOR AI MODELS
Comparing the speed and compression ratio of different compression methods (based on 1 GB from the middle of the model):
User benefits
Figure 10 in the arXiv paper shows the download and upload timing for three models, comparing the original and compressed versions, including decompression and compression times. Network speed is the primary factor affecting download and upload durations, and even for models that are less compressible, users benefit from reduced total latency when decompression and compression are included.
Link to Figure 10 from the arXiv paper
Usage
Installation
To get started, you can install the library directly from PyPI:
API Usage
You can call ZipNN directly from the API:
Command-Line Scripts
You can also use the provided wrapper scripts.
Note: All ZipNN compressed files use the ".znn" extension.
Single file compression/decompression:
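For example (the script names below are taken from the ZipNN scripts directory; treat the exact names and flags as assumptions to verify against the repository):

```shell
# Compress one file; produces model.safetensors.znn
python3 zipnn_compress_file.py model.safetensors

# Decompress it back to model.safetensors
python3 zipnn_decompress_file.py model.safetensors.znn
```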
Hugging Face Plugin and compressed Models stored on Hugging Face
Plugin Usage
ZipNN has a plugin for the Hugging Face transformers library that can handle ZipNN-compressed Models.
Using the default plugin, the user saves the compressed model to their local storage. On load, the model goes through a fast decompression phase on the CPU while remaining compressed in storage.
What this means: each time the user loads the model, less data is transferred to the GPU cluster, with decompression happening on the CPU.
Alternatively, to avoid future decompression, the user can save the model uncompressed to their local storage, so that future loads skip the decompression phase.
To compress and decompress manually, run the scripts linked here: Link to scripts
There are a few models compressed by ZipNN hosted on Hugging Face:
Example:
compressed FacebookAI/roberta-base
compressed meta-llama/Llama-3.2-11B-Vision-Instruct
And a usage example:
Usage Example Llama-3.2-11B
Upload compressed models to Hugging Face:
Download the scripts for compressing/decompressing AI Models:
```shell
wget -i https://raw.githubusercontent.com/zipnn/zipnn/main/scripts/scripts.txt && rm scripts.txt
python3 zipnn_compress_path.py safetensors --path .
```
Current status
The code is ready for use with single-threaded compression and decompression on the CPU, and ZipNN already has a few users. The next version will support multi-threading on the CPU, with a future milestone targeting GPU implementation.
Proposed change:
Decompress any shard of a model that was previously compressed with ZipNN. This commit only extends the functionality of load_state_dict(), making sure the model is loaded and decompressed as efficiently as possible by decompressing in chunks and avoiding unnecessary I/O requests.
In modeling_utils.load_state_dict():
This is a proof of concept. It currently supports only sharded models whose index.json has been modified to point at .znn suffixes (as seen in this ZipNN-compressed Llama 3.2 example on Hugging Face), whether safetensors or any other format. Support for all single files can readily be added, either with individual checks in modeling_utils.PreTrainedModel.from_pretrained() or by changing utils.hub.cached_file() to check for a .znn filepath.
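As an illustration of where such a check could live, here is a hypothetical helper (the function name and the ZipNN API usage are assumptions for this sketch, not the actual patch):

```python
import os

def maybe_decompress_znn(path: str) -> str:
    """Hypothetical helper: if a checkpoint shard carries the ZipNN
    ".znn" suffix, decompress it once and return the plain path."""
    if not path.endswith(".znn"):
        return path  # not ZipNN-compressed; pass through unchanged

    plain = path[: -len(".znn")]
    if os.path.exists(plain):  # already decompressed on an earlier load
        return plain

    from zipnn import ZipNN  # assumed byte-level API
    zpn = ZipNN(input_format="byte")
    with open(path, "rb") as f:
        data = zpn.decompress(f.read())
    with open(plain, "wb") as f:
        f.write(data)
    return plain
```

load_state_dict() could then call such a helper on each resolved shard path before handing the file to the safetensors or torch loader.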
A working version of all edge cases can be found in ZipNN's zipnn_hf() plugin.
Additionally, to let users decompress only once, the plugin exposes a flag
zipnn_hf(replace_local_file=True)
that locally saves the decompressed model in the cache, reorders the symlinks, and fixes any index.json accordingly if one exists. Equivalent functionality could be provided by adding a flag to from_pretrained().