CLOSED: Call for Papers: Special Section on Pre-Trained Large Language Models

IEEE TBD seeks submissions for this upcoming special issue.

Important Dates

  • Submission Deadline: 1 December 2023

Publication: Mid 2024


Pre-trained Large Language Models (LLMs), such as ChatGPT and GPT-4, have revolutionized the field of AI with their remarkable capabilities in natural language understanding and generation. LLMs power a wide range of applications, including voice assistants, recommender systems, conversational agents such as ChatGPT, and text-to-image models such as DALL-E. However, these powerful models also pose significant challenges for safe and ethical deployment. How can we ensure that LLMs are fair, safe, privacy-preserving, explainable, and controllable?

The special issue will cover two main themes:

  1. Recent progress on foundational LLMs and their applications in different domains.
  2. Open issues and challenges in building trustworthy LLMs.

We hope that this special issue will foster interdisciplinary collaboration and contribute to the development and use of LLMs that benefit humanity.

We welcome submissions on recent advances and applications of large language models (LLMs), with an emphasis on enhancing trust in their use.

Techniques

  • Advanced model architectures for LLMs, e.g., Transformer architectures and attention mechanisms.
  • Advanced algorithms for improving performance, cost, robustness, and complexity of LLMs.
  • Model transfer and compression techniques for LLMs.
  • Federated Learning for LLMs.
  • Prompt Engineering for LLMs.

Applications

  • Innovative applications of LLMs in various domains, e.g., psychotherapy, elderly care, etc.
  • Educational technologies based on LLMs such as chatbots, content generation, feedback systems, etc.
  • Natural language understanding and generation tasks using LLMs, e.g., storytelling, marketing copywriting, etc.
  • LLMs for health care, protein synthesis, etc.

Challenges

  • Ethics, social economics, and trustworthiness of LLMs.
  • Data labeling and quality issues for training LLMs.
  • Privacy and security risks of models and data used by LLMs.
  • Potential bias and unfairness in the output of LLMs.
  • Human oversight and intervention mechanisms for controlling LLMs.
  • Hallucination detection and alleviation for LLMs.
  • Emergent behavior in LLMs.


Submission Guidelines

For author information and guidelines on submission criteria, please visit the TBD Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts must not have been published previously or be currently under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.


Questions? Contact the guest editors:

  • Yuxiao Dong, Tsinghua University
  • Qiang Yang, Hong Kong University of Science and Technology & WeBank AI
  • Chang Zhou, Alibaba Group
  • Xuezhi Wang, Google Brain
  • Qiaozhu Mei, University of Michigan
