A Competition to Strengthen AI Security: TRC ‘22

IEEE Computer Society Team
Published 11/06/2023

When it comes to data security and privacy, trained neural networks offer both challenge and opportunity. That’s why Prof. Meikang Qiu of the Beacom College of Computer and Cyber Science at Dakota State University, Madison, S.D., U.S.A., led the IEEE Trojan Removal Competition (TRC ’22). Supported by a grant from the IEEE Computer Society’s Emerging Technology Fund, this event aimed to support the development of innovative end-to-end neural network backdoor removal techniques to counter attacks.

“The IEEE Trojan Removal Competition is a fundamental step toward protecting the trustworthy implementation of neural networks from implanted backdoors,” said Qiu. “This competition’s emphasis on Trojan removal is vital because it encourages research and development efforts toward addressing an underexplored but paramount issue.”

The competition focused on solutions that can enhance the security of neural networks. By developing general, effective, and efficient white-box Trojan removal techniques, participants worked to build trust in deep learning and artificial intelligence (AI), especially for pre-trained models in the wild, where protection against implanted backdoors is crucial.
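One classic family of white-box removal defenses simply fine-tunes the suspect model on a small trusted clean set so that the backdoor behavior is overwritten. The sketch below illustrates that general idea in PyTorch; it is a minimal illustration, not code from the competition, and the names `model` and `clean_loader` are hypothetical stand-ins.

```python
# A minimal sketch of one classic white-box removal baseline: fine-tune the
# suspect model on a small trusted clean set so that backdoor behavior is
# overwritten. `model` and `clean_loader` are hypothetical stand-ins; this is
# an illustration of the general idea, not code from the competition.
import torch
import torch.nn as nn


def finetune_removal(model: nn.Module, clean_loader,
                     epochs: int = 5, lr: float = 1e-3) -> nn.Module:
    """Fine-tune a possibly backdoored classifier on trusted clean data."""
    device = next(model.parameters()).device
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for inputs, labels in clean_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model
```

As the competition’s findings below suggest, such fine-tuning can erase the backdoor but also degrade the model’s legitimate behavior, which is exactly what the poisoned-accuracy metric is designed to detect.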

Submissions were evaluated on three metrics: clean accuracy, poisoned accuracy, and attack success rate. The competition drew 1,706 submissions from 44 teams worldwide, and six teams developed techniques that beat the state-of-the-art baselines published in top machine-learning venues. The winning team, from the Harbin Institute of Technology in Shenzhen, developed HZZQ Defense, which achieved 98.14% poisoned accuracy and an attack success rate of only 0.12%.
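Under the definitions commonly used in the backdoor literature, clean accuracy is measured on untouched test inputs, poisoned accuracy judges triggered inputs against their original true labels, and attack success rate is the fraction of triggered inputs classified as the attacker’s target class. The sketch below shows one way these three numbers could be computed; the loaders and `target_label` are hypothetical stand-ins, and the actual TRC ’22 evaluation harness may differ in detail.

```python
# A minimal sketch of the three evaluation axes, under the common definitions
# above. `clean_loader` yields (input, true_label) pairs; `poisoned_loader`
# yields triggered inputs paired with their original true labels; and
# `target_label` is the attacker's intended class. All names are hypothetical.
import torch


@torch.no_grad()
def evaluate_defense(model, clean_loader, poisoned_loader, target_label: int):
    model.eval()
    device = next(model.parameters()).device

    def accuracy(loader):
        correct = total = 0
        for inputs, labels in loader:
            preds = model(inputs.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.size(0)
        return correct / total

    clean_acc = accuracy(clean_loader)        # accuracy on untouched inputs
    poisoned_acc = accuracy(poisoned_loader)  # true-label accuracy on triggered inputs

    # Attack success rate: fraction of triggered inputs classified as the
    # attacker's target class (a simplified variant; many papers also exclude
    # samples whose true label already equals the target).
    hits = total = 0
    for inputs, _ in poisoned_loader:
        preds = model(inputs.to(device)).argmax(dim=1).cpu()
        hits += (preds == target_label).sum().item()
        total += preds.size(0)
    attack_success_rate = hits / total

    return clean_acc, poisoned_acc, attack_success_rate
```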

A deeper dive into full competition results brought forth two key findings for the community:

  1. Many classic techniques for mitigating backdoor impacts can overcorrect: in removing the backdoor they also “unlearn” key features the model needs, leaving it with low performance. Because these methods typically do not measure their impact on poisoned accuracy, a metric proposed and highlighted throughout IEEE TRC ’22, this overcorrection goes unnoticed.
  2. Many existing techniques generalize poorly; that is, some methods are effective only on certain data sets or specific machine-learning model architectures.

These results suggest that, for the time being, no single generalized approach to mitigating attacks on neural networks can be recommended. But event leaders pointed out that as pre-trained models become increasingly commonplace, adequate security will be all the more imperative. Competitions like TRC ’22 seek to galvanize the community to develop more robust and adaptable security measures for AI systems.

“We’re hoping that this benchmark provides diverse and easy access to model settings for people coming up with new AI security techniques,” shared Yi Zeng, competition chair of IEEE TRC ’22 and a research assistant in the Bradley Department of Electrical and Computer Engineering at Virginia Tech, Blacksburg, Va., U.S.A. “Now developers can explore new defense methods and get rid of remaining vulnerabilities.”

“As the world becomes more dependent on AI and machine learning, it is important to deal with the security and privacy issues that these technologies bring up,” said Qiu. “The IEEE TRC ’22 competition has made a big difference in this area. I’d like to offer a special thanks to my colleagues on the steering committee—Professors Ruoxi Jia from Virginia Tech, Neil Gong from Duke University, Tianwei Zhang from Nanyang Technological University, Shu-Tao Xia from Tsinghua University, and Bo Li from the University of Illinois Urbana-Champaign—for their help and support.”

For more information or to apply for an Emerging Technology Grant, visit https://www.computer.org/communities/emerging-technology-fund.