CLOSED: Call for Papers: Special Issue on Adversarial Learning for Intelligent Cyber-Physical Systems

IEEE Intelligent Systems seeks submissions for this upcoming special issue.

Important Dates

Submissions Due: 1 September 2023

Publication: July/August 2024


Cyber-physical systems (CPS) are computer systems in which physical mechanisms are monitored and/or controlled by computer algorithms. A CPS is a collaboration of computing entities tightly connected to the surrounding physical world and its ongoing processes, providing and using data-access and processing services available on the Internet, with the aim of making the physical world more accessible to computational entities. In CPS, software and physical components are deeply intertwined: they operate on a variety of temporal and spatial scales, exhibit multiple distinct behavioural modalities, and interact with one another in ways that may change with context. Machine learning and deep learning (ML/DL) techniques, with their strong learning capabilities, can benefit CPS in a number of ways (e.g., spam filtering, intrusion detection, process monitoring). However, most ML/DL techniques rely on large data sets to achieve good performance, so securing the data collected by CPS while preserving its confidentiality (privacy) remains a challenging issue. In addition, well-trained ML models deployed in CPS are highly vulnerable to malicious attacks (e.g., adversarial and poisoning attacks) because of the distributed nature of the data sources and the inherent physical constraints imposed by CPS. As ML/DL techniques are integrated into an ever-greater number of CPS, such attacks could become a serious problem. To overcome these limitations and establish trust in ML/DL models applied in CPS under malicious attack, adversarial learning studies both the attacks on ML/DL algorithms and the corresponding defenses, aiming to identify intentionally misleading data or behaviours, an ability that is critical for CPS.
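To make the notion of an adversarial (evasion) attack mentioned above concrete, the following is a minimal sketch in the style of the fast gradient sign method (FGSM) against a simple logistic classifier. The model, weights, and "sensor reading" here are illustrative assumptions for this sketch, not material from the call itself.

```python
import numpy as np

# Illustrative setup (assumed, not from the call): a linear classifier
# that a CPS might use to flag anomalous sensor readings.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # "trained" weights, assumed given
b = 0.0
x = rng.normal(size=8)   # a clean input (e.g., one sensor reading)
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y, w, b):
    # Gradient of the binary cross-entropy loss with respect to the input x.
    p = sigmoid(w @ x + b)
    return (p - y) * w

# FGSM-style evasion: nudge every input component by +/- eps in the
# direction that increases the model's loss.
eps = 0.1
x_adv = x + eps * np.sign(loss_grad_x(x, y, w, b))

# The perturbation stays within an L-infinity ball of radius eps,
# i.e., each component of x changes by at most eps.
assert np.max(np.abs(x_adv - x)) <= eps + 1e-12
```

Defenses studied under adversarial learning (e.g., adversarial training, input sanitization) aim to keep a model's prediction stable under exactly this kind of bounded perturbation.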

This special issue focuses on critical theoretical and practical ML/DL issues in adversarial environments for CPS. Topics of interest include, but are not limited to:

  • Foundational theory development of adversarial learning in CPS
  • Managing dynamic configurations in CPS using adversarial learning
  • Representation learning, knowledge discovery and model generalizability in CPS
  • Distributed and federated adversarial learning in CPS
  • Large-scale adversarial learning developments in CPS  
  • Adversarial learning of CPS interactions, behaviors, and impacts 
  • Malware and intrusion detection in CPS using adversarial learning
  • Detecting data poisoning and evasion attacks in CPS
  • Adversarial learning-based risk assessment and risk-aware decision making in CPS
  • Ethical and data-protection issues raised by adversarial learning in CPS
  • Explainable, transparent, or interpretable CPS via adversarial learning
  • Robustness and reliability in adversarial learning systems in CPS

Submission Guidelines

For author information and guidelines on submission criteria, please visit the IS Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts must not have been published or be currently under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.


Questions?

Email the guest editors at is4-24@computer.org.

Guest Editors:

  • Jerry Chun-Wei Lin (lead), Western Norway University of Applied Sciences, Norway 
  • Gautam Srivastava, Brandon University, Canada 
  • Yu-Dong Zhang, University of Leicester, UK
  • Jhing-Fa Wang, National Cheng Kung University, Taiwan