Journal article, IEEE Journal of Selected Topics in Signal Processing, 2024

DARIO: Differentiable vision transformer pruning with low-cost proxies

Abstract

Transformer models have gained popularity for their exceptional performance, but they still suffer from high inference latency. To improve the computational efficiency of such models, we propose a novel differentiable pruning method called DARIO (DifferentiAble vision transformer pRunIng with low-cost prOxies). Our approach optimizes a set of gating parameters using differentiable, data-agnostic, scale-invariant, and low-cost performance proxies. Because DARIO is data-agnostic, it does not require any classification head during pruning. We evaluated DARIO on two popular state-of-the-art pre-trained ViT models, one large (MAE-ViT) and one small (MobileViT). Extensive experiments across 40 diverse datasets demonstrate the effectiveness and efficiency of DARIO: it significantly improves inference efficiency on modern hardware while preserving accuracy. Notably, DARIO even increases accuracy on MobileViT, despite fine-tuning only the last block and the classification head.
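The abstract describes the approach only at a high level: learnable gating parameters attached to a pre-trained ViT are optimized against differentiable, data-agnostic, low-cost proxies, after which low-scoring structures can be removed. The sketch below illustrates that general idea in PyTorch. The head-level gating granularity, the all-ones-input proxy, the sparsity weight lam, and the names GatedAttention and data_agnostic_proxy are illustrative assumptions made for this sketch; they are not the proxies or formulation actually used by DARIO.

import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    """Multi-head self-attention whose heads are scaled by learnable gates."""
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        # One logit per head; sigmoid(logit) in (0, 1) acts as a soft pruning gate.
        self.gate_logits = nn.Parameter(torch.zeros(num_heads))

    def forward(self, x):
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)            # each (b, heads, n, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        out = attn.softmax(dim=-1) @ v                  # (b, heads, n, head_dim)
        gates = torch.sigmoid(self.gate_logits).view(1, -1, 1, 1)
        out = (out * gates).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

def data_agnostic_proxy(model, input_shape=(1, 197, 192)):
    # Stand-in proxy: response magnitude on an all-ones input, so no real data
    # or classification head is needed. DARIO's actual proxies differ.
    x = torch.ones(input_shape)
    return model(x).pow(2).mean()

block = GatedAttention(dim=192, num_heads=3)
for p in block.parameters():                 # freeze the pre-trained weights ...
    p.requires_grad_(False)
block.gate_logits.requires_grad_(True)       # ... and optimize only the gates

opt = torch.optim.Adam([block.gate_logits], lr=1e-2)
lam = 1e-2                                   # sparsity pressure (assumed value)
for step in range(200):
    opt.zero_grad()
    gates = torch.sigmoid(block.gate_logits)
    loss = -data_agnostic_proxy(block) + lam * gates.sum()
    loss.backward()
    opt.step()

print(torch.sigmoid(block.gate_logits))      # heads with small gates are pruning candidates

After such an optimization, heads (or other gated structures) whose gates fall below a threshold would be removed, and, as the abstract notes, only the last block and the classification head would need fine-tuning afterwards.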
No file deposited

Dates and versions

hal-04813053, version 1 (01-12-2024)


Cite

Haozhe Sun, Alexandre Heuillet, Felix Mohr, Hedi Tabia. DARIO: Differentiable vision transformer pruning with low-cost proxies. IEEE Journal of Selected Topics in Signal Processing, in press, pp. 1-13. ⟨10.1109/JSTSP.2024.3501685⟩. ⟨hal-04813053⟩