2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution

1Shanghai Jiao Tong University, 2ETH Zürich, 3Max Planck Institute for Informatics, 4Westlake University
NeurIPS, 2024

Abstract

Low-bit quantization has become widespread for compressing image super-resolution (SR) models for edge deployment. It allows advanced SR models to enjoy compact low-bit parameters and efficient integer/bitwise operations for storage compression and inference acceleration, respectively. However, low-bit quantization is notorious for degrading the accuracy of SR models compared to their full-precision (FP) counterparts. Despite several efforts to alleviate this degradation, transformer-based SR models still suffer severe degradation due to their distinctive activation distributions. In this work, we present a dual-stage low-bit post-training quantization (PTQ) method for image super-resolution, namely 2DQuant, which achieves efficient and accurate SR under low-bit quantization. The proposed method first investigates the weight and activation distributions and finds that they are characterized by coexisting symmetry and asymmetry as well as long tails. Specifically, we propose Distribution-Oriented Bound Initialization (DOBI), which uses different search strategies to find coarse clipping bounds for the quantizers. To obtain refined quantizer parameters, we further propose Distillation Quantization Calibration (DQC), which employs a distillation approach to make the quantized model learn from its FP counterpart. Extensive experiments across bit widths and scaling factors show that DOBI alone reaches state-of-the-art (SOTA) performance, while after the second stage our method surpasses existing PTQ methods in both metrics and visual quality. 2DQuant gains a PSNR increase of up to 4.52dB on Set5 (x2) over the SOTA when quantized to 2-bit, and enjoys a 3.60x compression ratio and 5.08x speedup ratio.
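To make the first stage concrete, below is a minimal sketch of what a distribution-oriented bound search could look like: a coarse grid search over clipping bounds that minimizes the error of a uniform fake-quantizer, with a symmetric search for weight-like tensors and an asymmetric one for activation-like tensors. The function names (fake_quant, search_bound) and the MSE objective are illustrative assumptions, not the authors' released implementation.

# A minimal sketch of a distribution-oriented bound search (assumed form,
# not the paper's released code): grid-search clipping bounds that
# minimize the MSE of a uniform fake-quantizer.
import torch

def fake_quant(x, lb, ub, n_bits):
    """Uniformly fake-quantize x into [lb, ub] with 2^n_bits levels."""
    levels = 2 ** n_bits - 1
    scale = (ub - lb) / levels
    q = torch.clamp(torch.round((x - lb) / scale), 0, levels)
    return q * scale + lb

def search_bound(x, n_bits, symmetric, n_steps=100):
    """Return coarse clipping bounds (lb, ub) minimizing quantization MSE.

    Symmetric tensors (e.g. weights) search a single bound b for [-b, b];
    asymmetric tensors (e.g. activations) shrink the min/max range.
    Shrinking both ends by the same fraction is a simplification here.
    """
    best, best_err = None, float("inf")
    for i in range(1, n_steps + 1):
        frac = i / n_steps
        if symmetric:
            b = x.abs().max() * frac
            lb, ub = -b, b
        else:
            lb, ub = x.min() * frac, x.max() * frac
        err = (x - fake_quant(x, lb, ub, n_bits)).pow(2).mean()
        if err < best_err:
            best, best_err = (lb.item(), ub.item()), err
    return best

In this sketch, the bounds found by search_bound serve only as the coarse initialization that the second stage then refines.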

Method

Overview of 2DQuant
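As a companion to the overview figure, the sketch below illustrates the second stage (distillation-based calibration), assuming the clipping bounds from the first stage are wrapped as learnable tensors and that gradients pass through rounding via a straight-through estimator. The model and loader names, and the MSE output loss, are placeholders rather than the paper's exact training recipe.

import torch
import torch.nn.functional as F

def calibrate(q_model, fp_model, calib_loader, bound_params,
              steps=1000, lr=1e-3):
    """Refine quantizer bounds so the quantized model mimics its FP teacher."""
    fp_model.eval()
    q_model.train()
    # Only the clipping bounds are optimized; model weights stay frozen.
    opt = torch.optim.Adam(bound_params, lr=lr)
    data = iter(calib_loader)
    for _ in range(steps):
        try:
            lr_img = next(data)
        except StopIteration:
            data = iter(calib_loader)
            lr_img = next(data)
        with torch.no_grad():
            teacher_out = fp_model(lr_img)   # full-precision output
        student_out = q_model(lr_img)        # quantized output
        loss = F.mse_loss(student_out, teacher_out)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q_model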

Poster

BibTeX

@inproceedings{liu20242dquant,
  title={2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution},
  author={Liu, Kai and Qin, Haotong and Guo, Yong and Yuan, Xin and Kong, Linghe and Chen, Guihai and Zhang, Yulun},
  booktitle={NeurIPS},
  year={2024}
}