CLOSED: Call for Papers: Special Section on Pre-Trained Large Language Models

IEEE Transactions on Big Data (TBD) seeks submissions for this upcoming special issue.

Important Dates

  • Submission Deadline: 1 December 2023
  • Publication: Mid-2024


Pre-trained Large Language Models (LLMs), such as ChatGPT and GPT-4, have revolutionized the field of AI with their remarkable capabilities in natural language understanding and generation. LLMs now power a wide range of applications, including voice assistants, recommender systems, content-generation tools, and text-to-image models such as DALL·E. However, these powerful models also pose significant challenges for safe and ethical deployment: how can we ensure that LLMs are fair, safe, privacy-preserving, explainable, and controllable?

The special issue will cover two main themes:

  1. recent progress on foundational LLMs and their applications in different domains;
  2. open issues and challenges in building trustworthy LLMs.

We hope that this special issue will foster interdisciplinary collaboration and contribute to the development and use of LLMs that benefit humanity.

We welcome submissions on recent advances and applications of large language models (LLMs), with an emphasis on enhancing trust in their use.

Techniques

  • Advanced model architectures for LLMs, e.g., Transformer architectures and attention mechanisms.
  • Advanced algorithms for improving performance, cost, robustness, and complexity of LLMs.
  • Model transfer and compression techniques for LLMs.
  • Federated Learning for LLMs.
  • Prompt Engineering for LLMs.
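The attention mechanisms listed above center on one core operation: scaled dot-product attention, in which each position attends to all others via a softmax over query-key similarities. A minimal NumPy sketch for illustration (the shapes and variable names here are illustrative, not taken from any specific model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d_k).
    Returns a (seq_len, d_k) array of attention outputs."""
    d_k = Q.shape[-1]
    # Pairwise query-key similarities, scaled to stabilize the softmax.
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable row-wise softmax.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Production Transformers add learned projections, multiple heads, and causal masking on top of this primitive, but the operation above is the common core.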

Applications

  • Innovative applications of LLMs in various domains, e.g., psychotherapy, elderly care, etc.
  • Educational technologies based on LLMs such as chatbots, content generation, feedback systems, etc.
  • Natural language understanding and generation tasks using LLMs, e.g., storytelling, marketing copywriting, etc.
  • LLMs for health care, protein synthesis, etc.

Challenges

  • Ethics, social economics, and trustworthiness of LLMs.
  • Data labeling and quality issues for training LLMs.
  • Privacy and security risks of models and data used by LLMs.
  • Potential bias and unfairness in the output of LLMs.
  • Human oversight and intervention mechanisms for controlling LLMs.
  • Hallucination detection and alleviation for LLMs.
  • Emergent behavior in LLMs.
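One widely studied approach to the hallucination-detection challenge above is consistency checking: sample the model several times and flag answers the samples disagree on. A toy sketch of the idea, where `generate` is a hypothetical stand-in for a real LLM call:

```python
from collections import Counter

def flag_inconsistent(prompt, generate, n_samples=5, threshold=0.6):
    """Return (majority_answer, flagged), where flagged=True means the
    sampled answers agreed less often than `threshold`."""
    answers = [generate(prompt) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    return answer, agreement < threshold

# Example with a deterministic stand-in "model": full agreement, not flagged.
answer, flagged = flag_inconsistent("2+2?", lambda p: "4")
print(answer, flagged)  # 4 False
```

Real systems compare free-form generations with semantic similarity rather than exact string matching, but the underlying signal, disagreement across samples, is the same.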


Submission Guidelines

For author information and guidelines on submission criteria, please visit the TBD’s Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts should not be published or currently submitted for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.


Questions? Contact the guest editors:

  • Yuxiao Dong, Tsinghua University
  • Qiang Yang, Hong Kong University of Science and Technology & WeBank AI
  • Chang Zhou, Alibaba Group
  • Xuezhi Wang, Google Brain
  • Qiaozhu Mei, University of Michigan
