Paper | Code | Top-1 Accuracy (%) | Weight Bits | Activation Bits | Model | Date |
--- | --- | --- | --- | --- | --- | --- |
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | ✓ Link | 85.03 | 8 | 8 | FQ-ViT (ViT-L) | 2021-11-27 |
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | ✓ Link | 83.31 | 8 | 8 | FQ-ViT (ViT-B) | 2021-11-27 |
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | ✓ Link | 82.97 | 8 | 8 | FQ-ViT (Swin-B) | 2021-11-27 |
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | ✓ Link | 82.71 | 8 | 8 | FQ-ViT (Swin-S) | 2021-11-27 |
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | ✓ Link | 81.20 | 8 | 8 | FQ-ViT (DeiT-B) | 2021-11-27 |
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | ✓ Link | 80.51 | 8 | 8 | FQ-ViT (Swin-T) | 2021-11-27 |
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | ✓ Link | 79.17 | 8 | 8 | FQ-ViT (DeiT-S) | 2021-11-27 |
HPTQ: Hardware-Friendly Post Training Quantization | ✓ Link | 78.972 | 8 | 8 | Xception W8A8 | 2021-09-19 |
Learned Step Size Quantization | ✓ Link | 77.878 | 4 | 4 | ADLIK-MO-ResNet50-W4A4 | 2019-02-21 |
Learned Step Size Quantization | ✓ Link | 77.34 | 3 | 4 | ADLIK-MO-ResNet50-W3A4 | 2019-02-21 |
HPTQ: Hardware-Friendly Post Training Quantization | ✓ Link | 77.092 | 8 | 8 | EfficientNet-B0 ReLU W8A8 | 2021-09-19 |
Learned Step Size Quantization | ✓ Link | 76.7 | 4 | 4 | ResNet50-W4A4 (paper) | 2019-02-21 |
HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs | ✓ Link | 76.4 | 8 | 8 | EfficientNet-B0-W8A8 | 2020-07-20 |
HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs | ✓ Link | 76.0 | 4 | 4 | EfficientNet-B0-W4A4 | 2020-07-20 |
HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs | ✓ Link | 75.45 | 3 | 4 | ResNet50-W3A4 | 2020-07-20 |
HPTQ: Hardware-Friendly Post Training Quantization | ✓ Link | 74.216 | 8 | 8 | EfficientNet-B0 W8A8 | 2021-09-19 |
Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network | ✓ Link | 74.03 | | | MPT (80) + BN | 2021-03-17 |
LSQ+: Improving low-bit quantization through learnable offsets and better initialization | ✓ Link | 73.8 | 4 | 4 | EfficientNet-W4A4 | 2020-04-20 |
HPTQ: Hardware-Friendly Post Training Quantization | ✓ Link | 73.356 | 8 | 8 | DenseNet-121 W8A8 | 2021-09-19 |
LSQ+: Improving low-bit quantization through learnable offsets and better initialization | ✓ Link | 71.7 | 4 | 4 | MixNet-W4A4 | 2020-04-20 |
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | ✓ Link | 71.61 | 8 | 8 | FQ-ViT (DeiT-T) | 2021-11-27 |
Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer | ✓ Link | 71.5 | | | UniQ | 2021-04-01 |
HPTQ: Hardware-Friendly Post Training Quantization | ✓ Link | 71.46 | 8 | 8 | MobileNetV2 W8A8 | 2021-09-19 |
HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs | ✓ Link | 70.9 | | | MobileNetV2 | 2020-07-20 |
R2 Loss: Range Restriction Loss for Model Compression and Quantization | | 69.79 | 4 | | MobileNet-v1 + EWGS + R2Loss | 2023-03-14 |
R2 Loss: Range Restriction Loss for Model Compression and Quantization | | 69.64 | | | MobileNet-v1 + LSQ + R2Loss | 2023-03-14 |
R2 Loss: Range Restriction Loss for Model Compression and Quantization | | 68.45 | 2 | 4 | ResNet-18 + PACT + R2Loss | 2023-03-14 |
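
The Weight Bits / Activation Bits columns give the quantization bit-widths, so an entry like W4A4 means both weights and activations are represented with 4 bits. As a rough illustration of what that implies numerically, here is a minimal NumPy sketch of per-tensor symmetric uniform fake quantization; the function name and the per-tensor max-abs scaling are illustrative assumptions, not the calibration or training scheme of any paper above (FQ-ViT, LSQ, HMQ, HPTQ, etc. each define their own quantizers).

```python
import numpy as np

def quantize_symmetric(x, n_bits):
    """Fake-quantize a tensor to n_bits with symmetric uniform quantization.

    The scale maps the max absolute value onto the largest representable
    integer; values are rounded to that integer grid and mapped back to
    floats, which simulates the accuracy effect of low-bit inference.
    """
    qmax = 2 ** (n_bits - 1) - 1           # e.g. 127 for 8 bits, 7 for 4 bits
    scale = np.abs(x).max() / qmax         # per-tensor scale (an assumption here)
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                       # dequantized ("fake-quantized") values

# W4A4: quantize both weights and activations to 4 bits, then compare
# the quantized matrix product against the full-precision result.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8)).astype(np.float32)   # toy weights
a = rng.standard_normal((8, 8)).astype(np.float32)   # toy activations
y_q = quantize_symmetric(a, 4) @ quantize_symmetric(w, 4)
print(np.abs(y_q - a @ w).max())          # worst-case quantization error
```

Lower bit-widths shrink memory and integer-compute cost but generally cost accuracy, which is what the table tracks at each W/A setting: for example, the ADLIK-MO ResNet50 rows drop from 77.878 at W4A4 to 77.34 at W3A4.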