Demystifying Quantum Benchmarks

By IEEE Computer Society Team on February 26, 2024

Quantum computing, with its potential to revolutionize fields from medicine to finance, is no longer science fiction. But like any complex system, accurately measuring its performance remains a challenge. Enter quantum benchmarks, the tools for gauging the true power of these machines.

IBM's paper, "Defining Best Practices for Quantum Benchmarks," challenges the quantum community to adopt consistent benchmarking approaches to evaluate and compare quantum devices. This work is built around a simple but vital question: can developing standardized scientific benchmarking guidelines increase clarity and objectivity in gauging quantum achievements?

To address this concept, a team of IBM researchers — Mirko Amico, Helena Zhang, Petar Jurcevic, Lev S. Bishop, Paul Nation, Andrew Wack, and David C. McKay — outlined criteria that quantum benchmarks should follow and encourage widespread adoption of these principles so all stakeholders can accurately track progress in the rapidly evolving quantum sphere.

The Benchmarking Conundrum


Unlike their classical counterparts, quantum computers lack a universal yardstick for performance. This ambiguity makes comparing different devices, let alone tracking progress over time, a daunting task. The IBM Quantum team in Yorktown Heights, New York, emphasized the need for standardized benchmarks, stressing key characteristics:

  • Randomized: Eliminating biases and ensuring statistically significant results.
  • Well-defined: Providing clear specifications and implementation procedures, leaving no room for ambiguity.
  • Holistic: Encompassing multiple aspects of device performance rather than isolated strengths.
  • Device-independent: Applying across different technologies, fostering inclusivity throughout the field.

The IBM Quantum team examined Quantum Volume (QV) as an example benchmark, exploring the nuances of using different success metrics to evaluate its performance. Choosing the right benchmark depends on the specific task and desired insights.
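To make the success metric concrete, here is a minimal sketch of QV's heavy-output criterion, assuming Qiskit and NumPy are installed. Sampling from the ideal distribution stands in for real hardware measurements, and the circuit width, shot count, and seed are illustrative choices, not values from the paper.

```python
# Minimal sketch of the Quantum Volume heavy-output metric, assuming
# Qiskit and NumPy. Sampling from the ideal distribution stands in
# for hardware shots; width, shots, and seed are illustrative.
import numpy as np
from qiskit.circuit.library import QuantumVolume
from qiskit.quantum_info import Statevector

def heavy_output_probability(num_qubits: int, shots: int = 1000, seed: int = 42) -> float:
    """Estimate the heavy-output probability of one random QV circuit."""
    rng = np.random.default_rng(seed)
    circuit = QuantumVolume(num_qubits, depth=num_qubits, seed=seed)

    # Ideal output distribution from a noiseless statevector simulation.
    probs = Statevector.from_instruction(circuit).probabilities()

    # "Heavy" outputs: bitstrings whose ideal probability exceeds the median.
    heavy = probs > np.median(probs)

    # Stand-in for hardware: sample outcomes from the ideal distribution.
    samples = rng.choice(len(probs), size=shots, p=probs)
    return heavy[samples].mean()

# A device passes QV at this width when the heavy-output probability
# exceeds 2/3 with high statistical confidence across many circuits.
print(heavy_output_probability(num_qubits=4))
```

On a real device, the sampled bitstrings would come from measured shots, so noise pushes the heavy-output probability down from its ideal value toward the 1/2 expected of a fully scrambled output.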

Beyond the Benchmark: The Power of Diagnostics


Not all tools are created equal, and the same applies to benchmarks and diagnostics. While benchmarks provide a holistic assessment of a device's average performance, diagnostics pinpoint specific error sources or hardware components. The IBM Quantum team highlighted the role of application-oriented circuit libraries, collections of algorithms with diverse quantum circuit structures, in uncovering hardware quirks. However, they cautioned against relying solely on a limited set of applications, as this can paint an incomplete picture.
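As a hypothetical illustration of an application-oriented circuit, the sketch below builds a small Bernstein-Vazirani instance, a standard algorithm chosen here for its deterministic ideal output; it is not necessarily part of the paper's own library.

```python
# Hypothetical example of an application-oriented benchmark circuit,
# assuming Qiskit. Bernstein-Vazirani is used here as a stand-in
# application with a deterministic ideal output; the paper's own
# circuit library may differ.
from qiskit import QuantumCircuit

def bernstein_vazirani(secret: str) -> QuantumCircuit:
    """Build a circuit whose ideal measurement result is `secret`."""
    n = len(secret)
    qc = QuantumCircuit(n + 1, n)
    qc.x(n)                      # ancilla prepared in |1>
    qc.h(range(n + 1))           # superposition over all inputs
    for i, bit in enumerate(reversed(secret)):
        if bit == "1":
            qc.cx(i, n)          # oracle encodes the secret string
    qc.h(range(n))
    qc.measure(range(n), range(n))
    return qc

# Success metric: the fraction of shots that return the known secret.
# Any other outcome directly flags errors on the qubits and gates
# this particular circuit structure exercises.
circuit = bernstein_vazirani("1011")
print(circuit.count_ops())
```

Because the ideal outcome is a single known bitstring, any deviation points at the specific hardware components this circuit structure stresses, which is what makes such circuits useful as diagnostics.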

A Reflection of True Potential


The researchers highlighted a powerful technique called mirror circuits. A mirror circuit runs a circuit followed by its inverse, so the ideal outcome is known in advance without costly classical simulation, which lets benchmarks scale to sizes that classical computers cannot verify directly. Mirror circuits can also expose subtle errors that might go unnoticed in traditional benchmarks.
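A minimal sketch of the mirror idea follows, assuming Qiskit; note that mirror-circuit benchmarks used in practice also insert randomizing Pauli layers between the two halves, which this sketch omits for brevity.

```python
# Minimal sketch of a mirror circuit, assuming Qiskit. Practical
# mirror-circuit benchmarks also insert randomizing Pauli layers
# between the two halves, omitted here for brevity.
from qiskit.circuit.library import QuantumVolume
from qiskit.quantum_info import Statevector

forward = QuantumVolume(4, depth=4, seed=7)   # any unitary circuit works
mirror = forward.compose(forward.inverse())   # circuit followed by its inverse

# Ideally the composite is the identity, so the all-zeros outcome
# survives with probability 1; no classical simulation of the forward
# circuit's output is needed to know the right answer.
survival = Statevector.from_instruction(mirror).probabilities()[0]
print(f"ideal survival probability: {survival:.3f}")
# On hardware, any shortfall from 1 measures the accumulated error.
```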

Balancing Scale, Quality, and Speed


The IBM Quantum researchers also explored the art of fine-tuning benchmarks, balancing three key aspects:

  • Scale: the number of qubits and gates involved
  • Quality: the accuracy and reliability of the results
  • Speed: how quickly the benchmarks can be executed

There are inherent trade-offs among these factors: more complex benchmarks might offer deeper insights but take longer to run. Striking the right balance is key, and so is transparency; disclosing the optimization techniques used ensures fair comparisons and fosters trust within the research community.

Showcasing the Impact


Using a suite of applications and mirror circuits, the authors illustrated the dramatic effects of error suppression and mitigation techniques on reported values. The results show that even basic techniques can significantly improve reported performance on both application and mirror benchmarking circuits. The authors also examined more sophisticated error mitigation techniques, revealing their potential to further enhance the quality of results.
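As one concrete example from this family of techniques, the sketch below applies zero-noise extrapolation, fitting expectation values measured at amplified noise levels and extrapolating to the zero-noise limit; the data points are invented for illustration and are not results from the paper.

```python
# Illustrative sketch of zero-noise extrapolation (ZNE), one technique
# in this family; the measured values below are invented, not results
# from the paper.
import numpy as np

# Expectation values measured at artificially amplified noise levels
# (e.g., via gate folding); scale 1.0 is the unmitigated circuit.
noise_scales = np.array([1.0, 2.0, 3.0])
measured = np.array([0.82, 0.68, 0.55])   # hypothetical readings

# Fit a low-degree model and evaluate it at the zero-noise limit.
coeffs = np.polyfit(noise_scales, measured, deg=1)
mitigated = np.polyval(coeffs, 0.0)
print(f"unmitigated: {measured[0]:.2f}, ZNE estimate: {mitigated:.2f}")
```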

By advocating for standardized, well-defined benchmarks and emphasizing the importance of transparency and optimization techniques, "Defining Best Practices for Quantum Benchmarks" equips researchers with valuable tools to navigate quantum benchmarking and accurately assess and advance the future of quantum computing. For a closer look at the research findings, download the full paper.

Download Full Study: "Defining Best Practices for Quantum Benchmarks"
