Context Huffman Coding in High-Dimensional Vector Compression
The paper considers the problem of compressing high-dimensional vectors. The experiments use SIFT image descriptors from the SIFT1M dataset and deep neural network descriptors from the DEEP1B dataset. The classical solution to this problem is to apply the Product Quantization (PQ) algorithm to the initial set of vectors, producing a new set of integer vectors with a fixed compression ratio given as a parameter of the algorithm. Product Quantization is a lossy compression algorithm, and the loss is unknown until the algorithm has been applied. Existing modifications of Product Quantization, such as Optimized Product Quantization (OPQ) and Additive Quantization (AQ), optimize the coding error at the same compression ratio as the original algorithm. This paper instead proposes applying bigram Huffman coding to the Product Quantization output, optimizing the compression ratio while keeping fixed the encoding error introduced by the underlying Product Quantization algorithm. The paper develops two compression methods: one based on building a minimum spanning tree and one based on lexicographically sorting the vectors encoded by Product Quantization. Depending on the parameters and the dataset, the two methods achieve compression ratios ranging from 5.6 to 1.4 bits per byte of Product Quantization output. Unfortunately, the proposed methods require additional computation, as well as significant amounts of RAM, for both encoding and decoding. In addition, both methods make the dataset elements variable in size, which rules out random access to individual vectors and fixes the storage layout of the dataset. Nevertheless, the proposed algorithm can be used in vector compression problems where the Product Quantization algorithm is applied and these additional limitations are not critical.
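To make the core idea concrete, the Python sketch below shows how a Product Quantization code stream might be entropy-coded with a bigram Huffman code. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the function names (pq_encode, huffman_code_lengths, bigram_huffman_bits) are hypothetical, random data stands in for trained codebooks and real descriptors, and non-overlapping pairs of adjacent codes are assumed as the bigram symbols.

    from collections import Counter
    import heapq
    import numpy as np

    def pq_encode(X, codebooks):
        """Product Quantization: split each vector into M subvectors and
        replace every subvector by the index of its nearest centroid."""
        n, d = X.shape
        M = len(codebooks)                      # number of subquantizers
        sub = d // M                            # subvector dimensionality
        codes = np.empty((n, M), dtype=np.uint8)
        for m, C in enumerate(codebooks):       # C: (256, sub) centroids
            part = X[:, m * sub:(m + 1) * sub]
            dist = ((part[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            codes[:, m] = dist.argmin(axis=1)
        return codes

    def huffman_code_lengths(freqs):
        """Return symbol -> code length (bits) of a Huffman code built
        from the frequency table freqs; each heap merge deepens the
        merged leaves by one bit."""
        if len(freqs) == 1:
            return {next(iter(freqs)): 1}
        heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        lengths = {s: 0 for s in freqs}
        uid = len(heap)                         # unique tie-breaker
        while len(heap) > 1:
            f1, _, s1 = heapq.heappop(heap)
            f2, _, s2 = heapq.heappop(heap)
            for s in s1 + s2:
                lengths[s] += 1
            heapq.heappush(heap, (f1 + f2, uid, s1 + s2))
            uid += 1
        return lengths

    def bigram_huffman_bits(codes):
        """Size in bits of the PQ code stream when non-overlapping pairs
        of adjacent codes (bigrams) are Huffman-coded as joint symbols."""
        stream = codes.ravel()
        freqs = Counter(zip(stream[0::2].tolist(), stream[1::2].tolist()))
        lengths = huffman_code_lengths(freqs)
        return sum(f * lengths[s] for s, f in freqs.items())

    # Hypothetical usage: random data stands in for SIFT1M / DEEP1B
    # descriptors and for trained PQ codebooks.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 128)).astype(np.float32)
    codebooks = [rng.standard_normal((256, 16)).astype(np.float32)
                 for _ in range(8)]
    codes = pq_encode(X, codebooks)
    print(f"{bigram_huffman_bits(codes) / codes.size:.2f} bits per PQ byte")

On random codes the bigram statistics are close to uniform, so this toy run shows little gain; the 5.6 to 1.4 bits per byte reported above depend on the correlations present in real descriptor codes, which the paper's minimum-spanning-tree and lexicographic-sorting methods presumably strengthen by reordering the encoded vectors.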