AdvML: AI’s Achilles Heel

By IEEE Computer Society Team on March 26, 2025

As AI applications proliferate across industries and sectors, two key security questions arise:

  • Are these AI applications cyber-secure?
  • Can bad actors exploit them through attacks?

A recent article discusses these questions in relation to AI’s Achilles’ heel: adversarial machine learning (AdvML).

In “Lights Toward Adversarial Machine Learning: The Achilles’ Heel of Artificial Intelligence,” authors Luca Pajola and Mauro Conti take a cybersecurity practitioner’s viewpoint as they discuss the full range of AI application threats

  • from threats in the systems and libraries used to deploy an AI application,
  • to threats arising in the AI application itself.

Here, we offer a quick overview of Pajola and Conti’s detailed look at AdvML and how it might best serve the needs of AI users today and in the future.

AdvML: Analyzing Adversaries and Entry Points


AI is increasingly deployed in high-risk applications, from “driving” autonomous taxis to directing armed drones toward human targets, and in these settings security assurance is far more than a nice-to-have. Or rather, it should be.

Enter AdvML, a research field that investigates cyberthreats and malicious actors aiming to manipulate or control AI applications. To do this, AdvML researchers build threat models based on two factors:

  • Attacker knowledge: What does the attacker know about the system that contains the AI application?
  • Attacker capabilities: What types of operations might the attacker plausibly perform? (A simple way to encode both factors appears in the sketch after this list.)
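
To make these two factors concrete, here is a minimal sketch of how a threat model might be written down. The white-box/gray-box/black-box knowledge levels and the capability categories below are common conventions in the AdvML literature; the class and field names are illustrative assumptions, not notation from Pajola and Conti’s article.

```python
from dataclasses import dataclass
from enum import Enum

class Knowledge(Enum):
    """How much the attacker knows about the system hosting the AI."""
    WHITE_BOX = "full access to model architecture and weights"
    GRAY_BOX = "partial knowledge, e.g., architecture or training data only"
    BLACK_BOX = "query access to inputs and outputs only"

class Capability(Enum):
    """Which operations the attacker can plausibly perform."""
    PERTURB_INPUTS = "modify inputs at inference time"
    POISON_TRAINING = "inject or alter training samples"
    TAMPER_SYSTEM = "exploit the hosting stack (hardware, OS, libraries)"

@dataclass
class ThreatModel:
    knowledge: Knowledge
    capabilities: list[Capability]

# Example: a black-box attacker who can only perturb inference-time inputs,
# the setting assumed by many model-evasion attacks.
evasion_attacker = ThreatModel(
    knowledge=Knowledge.BLACK_BOX,
    capabilities=[Capability.PERTURB_INPUTS],
)
print(evasion_attacker)
```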

Based on a detailed examination of these factors and the literature, the authors distinguish two major categories of attack:

  • AI-level cyberthreats, which exploit algorithm vulnerabilities in the AI application.
  • System-level cyberthreats, which produce AI-level threats by exploiting vulnerabilities in the system that hosts the AI application.

The article explores these two attack categories in detail, including the different families of attacks at each level.


At the AI level, the most popular attack family is model evasion, in which attackers alter the input with a perturbation that produces a misclassification (a minimal numeric sketch follows this list); simple examples include

  • changing pixel values in a computer vision application, or
  • inserting a typo in offensive language to evade detection by commercial tools.
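
To see what an evasion perturbation looks like in code, here is a minimal, self-contained sketch in the style of the fast gradient sign method (FGSM), applied to a toy logistic-regression classifier. The weights, input values, and epsilon are made-up assumptions chosen so the label flip is visible; the article itself does not prescribe this particular method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy target: a logistic-regression classifier with fixed, made-up weights.
# A score above 0.5 means class 1 (e.g., "offensive content detected").
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.2, -0.4, 0.9])   # clean input; correctly classified as class 1
y = 1.0                          # its true label
clean_score = sigmoid(w @ x + b)

# FGSM-style evasion: for logistic regression, the gradient of the
# cross-entropy loss with respect to the input is (p - y) * w, so we
# step each input coordinate in the sign of that gradient, bounded by
# a perturbation budget epsilon.
epsilon = 0.5
grad_x = (clean_score - y) * w
x_adv = x + epsilon * np.sign(grad_x)

adv_score = sigmoid(w @ x_adv + b)
print(f"clean score: {clean_score:.3f}")        # ~0.839 -> class 1
print(f"adversarial score: {adv_score:.3f}")    # ~0.413 -> class 0
```

Even though no coordinate of the input moves by more than epsilon, the structured perturbation is enough to flip the prediction.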

System-level attacks exploit weaknesses at deeper levels of the AI life cycle, from the hardware to the OS to the libraries. These attacks can produce threats similar to those at the AI level. The article offers two examples:

  • OS-level attack. At this level, a backdoor attack can be executed after the AI is deployed by studying and flipping specific bits in the dynamic random-access memory (DRAM); the effect of a single flip is simulated in the sketch below.
  • Library-level attack. Common AI libraries, such as Caffe, TensorFlow, and PyTorch, are susceptible to denial-of-service attacks; the consequences of these vary from application crash to model evasion.
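
As a toy illustration of why a single flipped DRAM bit matters, the sketch below simulates the effect in software by XOR-ing one bit of a float32 weight’s in-memory representation. This only models the consequence of a bit flip; an actual memory-level attack such as Rowhammer induces flips through hardware access patterns and is not shown here.

```python
import numpy as np

# A single float32 model weight, as it might sit in memory after deployment.
weight = np.float32(0.75)

# Simulate a memory-level fault: XOR one bit of the weight's in-memory
# representation (bit 30 lies in the float32 exponent field).
bits = weight.view(np.uint32)
flipped = np.uint32(bits ^ (1 << 30)).view(np.float32)

print(f"original weight: {weight}")   # 0.75
print(f"after bit flip:  {flipped}")  # on the order of 1e38

# A single well-chosen flip like this can silently corrupt a deployed
# model's behavior, which is the intuition behind DRAM-level backdoors.
```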

AdvML: Changing Directions


As the authors point out, over the past decade, AdvML has focused on understanding potential AI application failures and generating families of adversarial threats. However, these studies are primarily conducted on testbeds that are far removed from our increasingly complex reality.

Moving forward, the authors argue that AdvML needs to shift its focus to closing the gap between research and industry by considering concrete threats to AI applications. Doing so requires deeper consideration of two key questions:

  • Who are the AI consumers today?
  • What might motivate attacks on these consumers?

Digging Deeper


To read on, see “Lights Toward Adversarial Machine Learning: The Achilles’ Heel of Artificial Intelligence” in the September/October issue of IEEE Intelligent Systems magazine.

To dig even deeper, check out the following resources:

  • The AdvML-Frontiers website has information on the 2024 Workshop on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers) as well as on previous sessions.
  • The IEEE Computational Intelligence Society hosted a panel discussion on Adversarial ML: Lessons Learned, Challenges & Opportunities.
  • UC Berkeley’s Center for Long-Term Cybersecurity offers an AdvML overview and a short explainer video.