There are many science and research challenges in genomics and precision medicine. We are exploring the integration of domain-science knowledge, genomics data, and intelligent algorithms to address some of these challenging questions. Recent advances in -omics technologies provide substantial new opportunities to study and identify biomarkers of chronic diseases by interpreting multi-omics data, including transcriptomics, epigenomics, genomics, and proteomics, which together may improve our understanding of precision medicine. Precision medicine refers to drugs or treatments designed for small groups of patients, rather than large populations, based on characteristics such as medical history, genetic makeup, and data recorded by wearable devices. It aims to optimize the pathway for diagnosis, therapeutic intervention, and prognosis by using multidimensional biological datasets that capture individual variability in genes, function, and environment. Genomic data can support precision medicine by enabling clinicians to predict the most appropriate course of action for a patient quickly, efficiently, and accurately, offering the opportunity to tailor early interventions to each patient more carefully.
Advances in intelligent algorithms will enable techniques that process multimodal, multi-scale genomic data, handle heterogeneity in space and time, and accurately quantify uncertainty in the results. Current approaches, however, face challenges from three perspectives. First, these algorithms are sometimes limited by small sample sizes and a lack of independent data for validation. Second, the “black box” nature of some algorithms, such as deep neural networks, is an intrinsic property and does not lend itself well to complete understanding or transparency. Third, most machine-learning algorithms face the bias-variance tradeoff: high bias (an overly simple model) can cause an algorithm to miss relevant relations between features and target outputs (underfitting), while high variance (an overly complex model) can cause an algorithm to model the random noise in the training data rather than the intended outputs (overfitting).
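The bias-variance tradeoff described above can be made concrete with a small, domain-agnostic sketch: fitting polynomials of low and high degree to the same noisy samples of a smooth function. The function, noise level, sample sizes, and polynomial degrees below are illustrative choices, not drawn from any particular genomics dataset.

```python
import warnings
import numpy as np

warnings.filterwarnings("ignore")  # suppress polyfit RankWarning for the high-degree fit
rng = np.random.default_rng(0)

# Toy regression task: noisy training samples of a smooth underlying function.
x_train = np.linspace(0.0, 1.0, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.3, x_train.size)

# Held-out test points, compared against the noiseless ground truth.
x_test = np.linspace(0.03, 0.97, 50)
y_test = np.sin(2 * np.pi * x_test)

def fit_and_score(degree):
    """Least-squares polynomial fit; returns (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

train_simple, test_simple = fit_and_score(1)    # high bias: a line underfits the sine
train_complex, test_complex = fit_and_score(10) # high variance: fits the noise too
```

The degree-1 model misses the curvature entirely (underfitting), while the degree-10 model drives its training error far lower by chasing the noise, at the cost of a generalization gap between its training and test error (overfitting).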
Recent years have seen a surge in approaches, such as deep learning, that have shown broad utility in uncovering new biology and contributing to new discoveries in precision medicine. Intelligent algorithms, especially machine-learning algorithms, have shown promise in predicting disease risk from available multidimensional clinical and biological data. There is therefore a strong need to discuss and foster these advances in a systematic way to support both researchers and practitioners.
This special issue welcomes research that yields novel breakthroughs in artificial intelligence and genome-based precision medicine. All manuscripts submitted to this special issue will be peer-reviewed. Pertinent topics may include:
- Deep learning with limited data sets
- Imbalanced learning algorithms for biomedical or bioinformatics data
- Enhancement of the interpretability of existing data analysis techniques in problems related to genomics and precision medicine
- Deep-learning applications in genomics and precision medicine, where model interpretability/comprehensibility/explainability is relevant
- Data visualization for model interpretation and explanation
- Novel computational strategies for clinical or disease-specific research
Open for submissions in ScholarOne Manuscripts: November 1, 2020
Closed for submissions: December 1, 2020
Results of first round of reviews: February 1, 2021
Submission of revised manuscripts: April 1, 2021
Results of second round of reviews: May 1, 2021
Publication materials due: June 1, 2021
Prospective authors are invited to submit their manuscripts electronically after the “open for submissions” date, adhering to the TCBB guidelines. Please submit your papers through the online system (https://mc.manuscriptcentral.com/tcbb-cs) and be sure to select the special-issue name. Manuscripts must not have been published elsewhere or be currently under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. If requested, abstracts should be sent by email to the guest editors directly.
Xiuzhen Huang, PhD, is a professor of computer science at Arkansas State University. Dr. Huang conceived and defined the concept of No-Boundary Thinking (NBT). She founded the Arkansas Artificial Intelligence (AI) Campus and founded the Joint Translational Research Lab on the campuses of Arkansas State University and St. Bernard’s Medical Center’s Internal Medicine Residency Program. Her research interests include bioinformatics and biomedical informatics; artificial intelligence, machine learning, and deep learning; graph theory and algorithms; parameterized computation and complexity; and theory of computation. Dr. Huang was named an Arkansas Research Alliance Fellow and serves as an Associate Editor of TCBB. Her current research projects are supported by funding agencies including NSF and NIH. Dr. Huang received her doctorate in computer science from Texas A&M University.
Yu Zhang, PhD, is a professor and chair of computer science at Trinity University. Her research falls within agent-based modeling and simulation, with applications in bioinformatics and social-network analysis. From 2011 to 2017, Dr. Zhang was editor in chief of the International Journal of Agent Technologies and Systems and editor in chief of the Newsletter of the Society for Modeling and Simulation International. She is on the editorial board of the SCS Transactions of Simulation. She was general chair of Agent-Directed Simulation at SCS SpringSim 2018 and co-program chair of Agent-Directed Simulation at the European Modeling and Simulation Symposium from 2018 to 2020. She currently serves as co-editor of the “No Boundary Thinking in Bioinformatics” book, to be published by Cambridge in 2020. Dr. Zhang received the 2013 Outstanding Service Award from the Society for Modeling & Simulation International, the 2008 Trinity Distinguished Junior Faculty Award, the 2007 IEEE Central Texas Chapter Service Recognition, and the Best Paper Award at the 2008 IEEE Region 5 Student Paper Competition.
Xuan Guo, PhD, is an assistant professor of computer science and engineering at the University of North Texas. His research focuses on machine learning, big-data mining, and high-performance computing and their applications in the environment, food, and health sectors. He founded the Biocomputing Research Laboratory at the Center for Computational Epidemiology and Response Analysis at the University of North Texas. His research projects have been supported by funding agencies including DOD, DOE, NSF, and NIH. Before joining the University of North Texas, he was a postdoctoral scholar in the Computer Science and Mathematics Division at Oak Ridge National Laboratory. He received his PhD in computer science from Georgia State University. Dr. Guo has served as an editorial board member of the International Journal of Bioinformatics Research and Applications, a guest editor of MDPI Genes, and the program chair of BDCloud 2016 and DataCloud 2017.