You are required to read and agree to the following before accessing a full-text version of an article in the IDE article repository.
The full-text document you are about to access is subject to national and international copyright laws. In most cases (but not necessarily all), personal use is allowed, provided that the copyright owner is duly acknowledged and respected. All other use typically requires explicit permission (often in writing) from the copyright owner.
For the reports in this repository we specifically note that
- the use of articles under IEEE copyright is governed by the IEEE copyright policy (available at http://www.ieee.org/web/publications/rights/copyrightpolicy.html)
- the use of articles under ACM copyright is governed by the ACM copyright policy (available at http://www.acm.org/pubs/copyright_policy/)
- technical reports and other articles issued by Mälardalen University are free for personal use. For other use, the explicit consent of the authors is required
- in other cases, please contact the copyright owner for detailed information
By accepting I agree to acknowledge and respect the rights of the copyright owner of the document I am about to access.
If you are in doubt, feel free to contact webmaster@ide.mdh.se
ProARD: Progressive Adversarial Robustness Distillation: Provide Wide Range of Robust Students
Publication Type:
Article, research overview
Venue:
International Joint Conference on Neural Networks 2025
Abstract
Adversarial Robustness Distillation (ARD) has emerged as an effective method to enhance the
robustness of lightweight deep neural networks against adversarial attacks. Current ARD approaches
have leveraged a large robust teacher network to train a single robust lightweight student. However, due
to the diverse range of edge devices and their resource constraints, current approaches require training a
new student network from scratch to meet each device's specific constraints, leading to substantial
computational costs and increased CO2 emissions.
This paper proposes Progressive Adversarial Robustness Distillation (ProARD), enabling the efficient
one-time training of a dynamic network that supports a diverse range of accurate and robust student
networks without requiring retraining. We first construct a dynamic deep neural network based on
dynamic layers that encompass variations in width, depth, and expansion in each design stage,
supporting a wide range of architectures (> 10^19). Then, we treat the student network with the
largest size as the dynamic teacher network. ProARD trains this dynamic network using a
weight-sharing mechanism to jointly optimize the dynamic teacher network and its internal student networks.
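As a concrete illustration of this weight-sharing scheme, the following is a minimal PyTorch sketch, not the authors' implementation: student sub-networks reuse slices of the largest network's weights, and an ARD-style loss distills the teacher's clean predictions into a sampled student that is fed adversarial inputs. ToyDynamicNet, its slicing scheme, the loss weighting, and all hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDynamicNet(nn.Module):
    # Weight-shared toy network: a student sub-network uses only the first
    # `depth` blocks and the first `width` output features of each block.
    def __init__(self, in_dim=32, max_width=64, max_depth=4, n_classes=10):
        super().__init__()
        dims = [in_dim] + [max_width] * max_depth
        self.blocks = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(max_depth))
        self.head = nn.Linear(max_width, n_classes)

    def forward(self, x, width=None, depth=None):
        width = width or self.head.in_features          # full width by default
        for blk in self.blocks[: depth or len(self.blocks)]:
            w = blk.weight[:width, : x.shape[1]]        # slice the shared weights
            x = F.relu(F.linear(x, w, blk.bias[:width]))
        return F.linear(x, self.head.weight[:, :width], self.head.bias)

def ard_loss(dyn, x_clean, x_adv, y, cfg, alpha=0.5, T=4.0):
    # Teacher = the largest sub-network; the sampled student shares its weights.
    with torch.no_grad():
        t_logits = dyn(x_clean)                         # teacher sees clean inputs
    s_logits = dyn(x_adv, **cfg)                        # student sees adversarial inputs
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    return alpha * kd + (1 - alpha) * F.cross_entropy(s_logits, y)

For brevity, the usage example below stubs the adversarial examples with random noise; in practice they would come from an attack such as PGD:

dyn = ToyDynamicNet()
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
loss = ard_loss(dyn, x, x + 0.03 * torch.randn_like(x), y, cfg={"width": 32, "depth": 3})
loss.backward()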
However, due to the high computational cost of calculating exact gradients for all the students within
the dynamic network, a sampling mechanism is required to select a subset of students. We show that
random student sampling in each iteration fails to produce accurate and robust students. ProARD
employs a progressive sampling strategy that gradually reduces the size of student networks in three
steps during training while applying robustness distillation between the dynamic teacher network
and the selected students. Finally, we leverage a multi-objective evolutionary algorithm based on
a proposed accuracy-robustness predictor to identify optimal architectures that balance accuracy,
robustness, and efficiency.
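To make the three-step schedule concrete, here is a hedged sketch; the exact schedule, search space, and size choices in ProARD may differ. Early in training only the largest configurations are sampled, and each step admits the next smaller width, depth, and expansion choices:

import random

# Assumed toy search space; each list is sorted from smallest to largest.
SPACE = {"width": [16, 32, 64], "depth": [2, 3, 4], "expand": [2, 4, 6]}

def sample_student(epoch, total_epochs, space=SPACE):
    # Step 1: largest choices only; step 2: the top two; step 3: the full space.
    step = min(3, 1 + (3 * epoch) // total_epochs)
    return {k: random.choice(v[-step:]) for k, v in space.items()}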
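Continuing the sketch above, the search stage could look roughly as follows. Two caveats: predictor (returning predicted accuracy and robustness for a configuration) and flops_of (an efficiency cost model) are hypothetical stand-ins for the paper's accuracy-robustness predictor and efficiency constraint, and ranking by their sum scalarizes the objectives rather than computing a Pareto front as a true multi-objective algorithm (e.g., NSGA-II) would:

def mutate(cfg, space=SPACE, p=0.3):
    # Resample each architectural choice with probability p.
    return {k: (random.choice(space[k]) if random.random() < p else v)
            for k, v in cfg.items()}

def evolve(predictor, flops_of, space=SPACE, pop=64, gens=30, budget=3e8):
    population = [{k: random.choice(v) for k, v in space.items()}
                  for _ in range(pop)]
    for _ in range(gens):
        feasible = [c for c in population if flops_of(c) <= budget] or population
        ranked = sorted(feasible, key=lambda c: sum(predictor(c)), reverse=True)
        parents = ranked[: max(2, pop // 4)]            # keep the top quarter
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop - len(parents))]
    return population[0]                                # best config under the surrogates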
In our experiments, we show that ProARD reduces the computational cost by 60× and improves
accuracy and robustness by 13% and 14%, respectively, compared to random sampling. We also
demonstrate that our accuracy-robustness predictor can estimate the accuracy and robustness of test
student networks with root mean squared errors of 0.0073 and 0.0072, respectively.
Bibtex
@inproceedings{Mousavi7210,
author = {Seyedhamidreza Mousavi and Seyedali Mousavi and Masoud Daneshtalab},
title = {{ProARD}: Progressive Adversarial Robustness Distillation: Provide Wide Range of Robust Students},
booktitle = {International Joint Conference on Neural Networks (IJCNN)},
month = {July},
year = {2025},
url = {http://www.es.mdu.se/publications/7210-}
}