Self-Knowledge Distillation Compression Algorithm Based on Filter Attenuation
Author: Xiong Liyan, Huang Jiawen, Huang Xiaohui, Chen Qingsen

Affiliation: School of Information and Software Engineering, East China Jiaotong University, Nanchang 330013, China

CLC Number: TP391.4

Abstract:

[Objective] To address the severe performance loss that follows model pruning, a compression algorithm based on filter attenuation and self-knowledge distillation is proposed. [Method] The method uses a filter attenuation mechanism to preserve the information carried by redundant filters, minimizing the difference between the model before and after pruning and thereby reducing the performance loss caused by pruning. An annealing attenuation function is introduced during pruning to change the filter attenuation dynamically, enabling a fast and efficient search for the optimal substructure of the model and improving its convergence speed. In addition, self-knowledge distillation is employed to transfer knowledge from the pre-trained model to the compressed model. [Result] The results show that the compression algorithm improves model accuracy by 0.12 percentage points while reducing the FLOPs of the VGG-16 model by 37.3%. [Conclusion] The method provides a more stable and efficient model compression approach for convolutional neural networks.
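The abstract outlines three ingredients: attenuating redundant filters instead of deleting them, annealing that attenuation over training, and a self-knowledge-distillation loss between the pre-trained and compressed models. The sketch below illustrates how these pieces might fit together in PyTorch; the function names, the cosine form of the annealing schedule, and the hyperparameters `T` and `alpha` are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' released code) of the ideas in the abstract.
# Names and hyperparameters (anneal, attenuated_weight, self_distillation_loss,
# T, alpha) are assumptions for illustration.
import math

import torch
import torch.nn.functional as F


def anneal(epoch: int, total_epochs: int) -> float:
    # Attenuation factor that decays from 1 toward 0 as training proceeds;
    # this cosine form is an assumed stand-in for the paper's annealing
    # attenuation function.
    return 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))


def attenuated_weight(weight: torch.Tensor, redundant: torch.Tensor, factor: float) -> torch.Tensor:
    # Scale the filters flagged as redundant by `factor` rather than zeroing
    # them out, so the pruned model stays close to the original while the
    # optimal substructure is searched for. `weight` is a conv kernel of shape
    # (out_channels, in_channels, kH, kW); `redundant` is a boolean mask over
    # output channels.
    scale = torch.ones(weight.size(0), device=weight.device, dtype=weight.dtype)
    scale[redundant] = factor
    return weight * scale.view(-1, 1, 1, 1)


def self_distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    # Transfer knowledge from the pre-trained (uncompressed) model to the
    # compressed one: cross-entropy on the labels plus a temperature-scaled
    # KL term against the pre-trained model's soft predictions.
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1.0 - alpha) * ce + alpha * kd
```

In such a setup, each epoch would identify redundant filters, rewrite the layer weights with `attenuated_weight(w, mask, anneal(epoch, total_epochs))`, and fine-tune with `self_distillation_loss`, using the frozen pre-trained model as the teacher.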

Get Citation

Xiong Liyan, Huang Jiawen, Huang Xiaohui, Chen Qingsen. Self-Knowledge Distillation Compression Algorithm Based on Filter Attenuation[J]. Journal of East China Jiaotong University, 2024, 41(6): 112-120.

History
  • Received: March 15, 2024
  • Online: February 10, 2025