ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models

Department of Computer Science, UNC Chapel Hill

ICLR 2024

ECoFLaP is a unified coarse-to-fine approach that first efficiently computes a sparsity ratio for each layer using zeroth-order gradients, and then prunes the model in a layer-wise manner with the obtained sparsity ratios.

Abstract

Large Vision-Language Models (LVLMs) can understand the world comprehensively by integrating rich information from different modalities, achieving remarkable advancements on various multimodal downstream tasks. However, deploying LVLMs is often problematic due to their massive computational/energy costs and carbon footprint. Such issues make it infeasible to adopt conventional iterative global pruning, which is costly due to computing the Hessian matrix of the entire large model for sparsification. Alternatively, several studies have recently proposed layer-wise pruning approaches to avoid the expensive computation of global pruning and efficiently compress model weights according to their importance within a layer. However, they often suffer from suboptimal model compression due to their lack of a global perspective. To address this limitation in recent efficient pruning methods for large models, we propose Efficient Coarse-to-Fine Layer-Wise Pruning (ECoFLaP), a two-stage coarse-to-fine weight pruning approach for LVLMs. We first determine the sparsity ratios of different layers or blocks by leveraging the global importance score, which is efficiently computed based on the zeroth-order approximation of the global model gradients. Then, the model performs local layer-wise unstructured weight pruning based on globally-informed sparsity ratios. We validate our proposed method across various multimodal and unimodal models and datasets, demonstrating significant performance improvements over prevalent pruning techniques in the high-sparsity regime.

Method


We illustrate the design differences between ECoFLaP and global and layer-wise pruning approaches. ECoFLaP obtains an adaptive sparsity ratio for each layer in a single step based on a "global importance score" (Coarse), and then removes the parameters that are least critical to the model's performance in a layer-wise manner (Fine). A minimal sketch of both stages is given below.
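
The following sketch illustrates the two stages under simplifying assumptions; it is not the paper's exact allocation rule or local metric. The coarse stage maps per-layer global importance scores to sparsity ratios so that more important layers are pruned less while the average sparsity matches the target, and the fine stage prunes each layer to its assigned ratio, using plain magnitude pruning here for brevity. The names allocate_sparsity_ratios, prune_layer, target_sparsity, and max_deviation are illustrative.

    from typing import Dict
    import torch

    def allocate_sparsity_ratios(importance: Dict[str, float],
                                 target_sparsity: float,
                                 max_deviation: float = 0.1) -> Dict[str, float]:
        """Coarse stage: map per-layer global importance scores to sparsity ratios."""
        names = list(importance.keys())
        scores = torch.tensor([importance[n] for n in names])
        # Normalize scores to [0, 1]; more important layers should be pruned less.
        norm = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
        ratios = target_sparsity + max_deviation * (0.5 - norm) * 2
        # Re-center so the average sparsity equals the target.
        ratios = ratios + (target_sparsity - ratios.mean())
        return {n: float(r.clamp(0.0, 1.0)) for n, r in zip(names, ratios)}

    def prune_layer(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
        """Fine stage: layer-wise unstructured pruning (magnitude-based for brevity)."""
        k = int(weight.numel() * sparsity)
        if k == 0:
            return weight
        threshold = weight.abs().flatten().kthvalue(k).values
        return weight * (weight.abs() > threshold)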


Our approach utilizes zeroth-order gradients to avoid the memory overhead of backpropagation when computing the global information. A sketch of such a forward-only gradient estimate is shown below.
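
The sketch below shows a generic SPSA-style zeroth-order estimate under simplifying assumptions; it is not necessarily the paper's exact estimator. It assumes a loss_fn(model, batch) callable that runs a forward pass and returns a scalar loss. Two forward passes at randomly perturbed weights approximate the directional derivative, so no backpropagation graph (with its stored activations) is needed, and the per-layer importance is aggregated here as |weight * estimated gradient|.

    import torch

    @torch.no_grad()
    def zeroth_order_importance(model: torch.nn.Module, loss_fn, batch,
                                eps: float = 1e-3) -> dict:
        named = [(n, p) for n, p in model.named_parameters() if p.requires_grad]
        zs = [torch.randn_like(p) for _, p in named]     # random perturbation direction
        for (_, p), z in zip(named, zs):
            p.add_(eps * z)
        loss_plus = loss_fn(model, batch)                # forward pass at theta + eps*z
        for (_, p), z in zip(named, zs):
            p.sub_(2 * eps * z)
        loss_minus = loss_fn(model, batch)               # forward pass at theta - eps*z
        for (_, p), z in zip(named, zs):
            p.add_(eps * z)                              # restore the original weights
        coeff = (loss_plus - loss_minus) / (2 * eps)     # directional derivative estimate
        # The gradient estimate for each parameter is coeff * z; score by |w * g|.
        return {n: (p * coeff * z).abs().sum().item() for (n, p), z in zip(named, zs)}

In practice, the perturbation direction can be regenerated from a saved RNG seed instead of being kept in memory, so the overhead beyond a standard forward pass stays small.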

Results


ECoFLaP outperforms SoTA layer-wise pruning baselines because it is able to exploit global information. Note that zeroth-order ECoFLaP uses a memory budget on par with the layer-wise pruning methods Wanda and SparseGPT, since it avoids the expensive backpropagation computation.


ECoFLaP can be applied to both multimodal and unimodal architectures. We also show that ECoFLaP demonstrates larger performance improvements over baselines in the high-sparsity regime.


ECoFLaP also outperforms the baselines on CLIP models across 11 tasks, and the approach works well on top of SoTA layer-wise pruning approaches such as Wanda and SparseGPT. The results also show that determining sparsity ratios from local scores degrades performance compared to using a uniform sparsity ratio, which justifies our use of global information.
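
As an illustration of that combination, the sketch below plugs a globally informed per-layer sparsity ratio into a Wanda-style local metric (weight magnitude times the L2 norm of the corresponding input activations, with weights compared within each output row); only the ratio changes relative to the uniform setting. wanda_style_prune and act_norm are illustrative names, not the official implementation.

    import torch

    def wanda_style_prune(weight: torch.Tensor,      # (out_features, in_features)
                          act_norm: torch.Tensor,    # (in_features,) L2 norms of calibration inputs
                          sparsity: float) -> torch.Tensor:
        """Fine stage with a Wanda-style score and a globally informed sparsity ratio."""
        score = weight.abs() * act_norm.unsqueeze(0)   # per-weight importance
        k = int(weight.shape[1] * sparsity)            # weights to drop in each output row
        if k == 0:
            return weight
        idx = score.topk(k, dim=1, largest=False).indices
        pruned = weight.clone()
        pruned.scatter_(1, idx, 0.0)                   # zero the k lowest-scored weights per row
        return pruned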

BibTeX


        @inproceedings{Sung2024ECoFLaP,
          author = {Yi-Lin Sung and Jaehong Yoon and Mohit Bansal},
          title = {ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models},
          booktitle = {International Conference on Learning Representations (ICLR)},
          year = {2024},
        }