Educator, Researcher, and Entrepreneur Extraordinaire: A Conversation with Amit Sheth, W. Wallace McDowell Award Winner

IEEE Computer Society Team
Published 08/01/2023

Prof. Amit Sheth has left a lasting mark on the worlds of AI and computer science. As the founding director of the university-wide AI Institute (AIISC) and the NCR Chair & Professor of Computer Science & Engineering at the University of South Carolina, his pioneering work has advanced the field of Artificial Intelligence. He has also worked tirelessly to guide the next generation of AI pioneers to new heights.

As a result of this hard work, he has received the 2023 W. Wallace McDowell Award for “…pioneering and enduring contributions to information integration, data and service semantics, and knowledge-enhanced computing.” The interview below shares how his accomplishments led him to this major achievement.


Can you tell us more about your role as the founding director of the Artificial Intelligence Institute? How did you come to develop the institute and what were the initiative’s goals?

In 2018, my current university, the University of South Carolina (USC), ran a university-wide competition called the “Excellence Initiative.” At the time, I was running the Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis), then the largest research center in the history of my previous university. The proposal to establish a university-wide AI Institute, submitted by the Dean of the College of Engineering and Computing, ranked first. As I had been preselected as a potential director (one of the selection criteria was an h-index of over 100, so the number of qualified and movable AI researchers was extremely small), I also had a chance to give input on the proposal and define its character. At a meeting held during my campus visit, ten deans showed up, and they told me why they wanted AI rather than my having to sell AI to them. This told me there was a real opportunity to make a significant impact, and that has worked out well. Despite less investment than originally promised, the AI Institute has around 50 researchers (six faculty, two research faculty, and more than 30 Ph.D. students) and translational research involving over 25 projects across almost all colleges and schools on campus. Along with strength in foundational as well as translational AI research, training Ph.D.s as independent researchers who can compete with graduates of any top program has not only been our objective but something we have achieved. The strategy is explained in our article, “What Do You Have That Others Don’t?: Succeeding in Academia or Industry.”



You have a diverse background as an educator, researcher, and entrepreneur. How do you balance these different roles, and how do they complement each other in your work?

I have been able to merge and synchronize these roles. My first three jobs after my Ph.D. were in industry research or R&D, where several of the outcomes were translated into successful products. However, constraints on research independence and bureaucracy (e.g., in the early 1990s, MBAs who did not understand the potential of the growing Web market got to decide whether to accept my proposal to convert a technology my team had developed into a product) made me consider switching to academia. I also started advising my first Ph.D. student while I was with Bellcore (my third industry job; a professor at Rutgers agreed to be the supervisor on paper), and I liked the idea of working with young minds. While in academia, my federal grant-funded research continued to be driven by real-world problems, and some of the technologies we developed had excellent market fit. This, in turn, led to my founding three of my four companies by licensing technology developed at the university. In all these entrepreneurial ventures, I was able to simultaneously support the university’s interest (the universities received hundreds of thousands of dollars in royalties and fees from the companies I founded), the students’ interest (my students took up top technical roles in the startup phase of the companies), and the investors’ interest, while taking the innovation from the lab to the market.


You have supervised over 45 Ph.D. advisees and postdocs. What qualities do you look for in your students, and how do you support their success in academia, industry research, and entrepreneurship?

I have taken inspiration from a unique academic work culture called Gurukul, a residential system of learning that developed on the Indian subcontinent during the Vedic age, and have adapted some of its tenets to current times. The guru would look after students from a young age until their graduation, during which time he would impart all the “vidya,” or knowledge and expertise, he could. In current times, this implies substantial mutual commitment, substantial trust in the teacher (guru), and substantial investment by the teacher in ensuring the success of each student. It also takes an investment on the order of $300K (over $50K in stipend and tuition each year, for an average of six years) for each Ph.D. student I graduate. I usually have to win highly competitive grants (by submitting proposals to NIH and NSF) to afford this investment. Hence, my recruiting process has been fairly deliberate and unusual. Most of my students reach out to me directly (they are usually not students who first join my department and then come to me while looking for an advisor). They often do an online or onsite internship with me for several months. My diverse work experience has shown me that technical skills are no more important than work culture. So I assess the student’s reasons for wanting to work specifically with me, the desire to pursue a top research career, the willingness to work hard and run a marathon (a Ph.D. is a marathon, not a sprint), the ability to collaborate and work in a team, and soft skills. In return, my investment in each student is highly personalized. Having worked in industry and academia and built companies, I have the experience and network to prepare them for opportunities and success in any of these roles. Practically all my Ph.D.s and postdocs have competed successfully against their counterparts from the top 20 programs for coveted research jobs (e.g., at tier-1 universities or top industry research labs). After one or two jobs, several have founded startups.


You have been a pioneer in the development of the Semantic Web, creating the first Semantic Search company in 1999. How have you witnessed the development of this technology’s capabilities, and how do you see it impacting the future of AI and information retrieval?

I bet early on the winning side of the Semantic Web: the side that used more scalable technology and realized the important role of a populated ontology (also called a World Model or Knowledge Graph) in building high-value semantic applications (search, browsing, personalization, advertisement), and not the difficult-to-scale, logic-heavy, agent-centric side. Taalee, which I founded in 1999 (later Voquette and then Semagix after M&A), was distinct in creating, scaling, and maintaining a very large Knowledge Graph with tools that used both symbolic (knowledge-based) and statistical (machine learning) AI technologies, and in using that Knowledge Graph to support semantic applications including semantic search (all described in patents, papers, and keynotes during 2000-2003). It took quite a few years, until 2012, for Google to decide that search should be semantic and required a Knowledge Graph. Just as we learned that machine learning-based search is good but knowledge-enhanced semantic search is better, we will see that generative AI is good, but neurosymbolic AI is better.


What are your future research directions and goals? Are there any specific areas or challenges within AI that you are particularly interested in exploring? Any challenges you hope to overcome in the near future?

Three of my startups and three of my research organizations (Kno.e.sis and now AI Institute) have exploited the synergy of knowledge-based or symbolic AI and machine learning or statistical AI techniques starting in 2000. Following the emergence of deep learning, we started working on Knowledge-infused Learning, or knowledge-guided neurosymbolic AI, in 2016 (see Neurosymbolic Artificial Intelligence: Why, What, and How). Data-driven, statistical AI has shown tremendous success in scalably performing what I would call lower intelligence tasks such as classification, prediction, and recommendation, and since the emergence of the transformer model and generative AI, it is going up the intelligence ladder by handling more complex language and vision tasks. But two major classes of capabilities are not well supported: (a) explainability, interpretability, and safety; these limitations can be tied to the black box nature, and (b) understanding of data and signals that are contextualized, grounded and innate, leading to higher levels of conceptualization and reasoning utilizing the real world knowledge and experience. 
A strong belief that we need to exploit both data and knowledge (which I have described as the duality of data and knowledge) has motivated me to pursue knowledge-empowered neurosymbolic AI approaches in two forms: (a) knowledge-infused learning, which incorporates knowledge into a neural network; the approaches span shallow infusion, which lowers richer knowledge into a simpler data representation (through embeddings), a cruder but computationally efficient way to exploit knowledge, to semi-deep and deep infusion, where attention is enhanced with targeted use of knowledge but which require more careful knowledge alignment within the neural network architecture; and (b) knowledge elicitation, which unearths and lifts meaningful patterns from the neural network and aligns them with rich, contextually relevant knowledge for reasoning, to support tasks that represent higher levels of intelligence, such as abstraction and analogy (not just at the simpler word or sentence level, but rich analogies with structure and higher levels of abstraction).
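To make the shallow-infusion idea concrete, here is a minimal, hypothetical sketch: a token’s data-driven embedding is concatenated with an embedding of its linked knowledge-graph entity, so downstream layers see both the corpus signal and the curated knowledge. The vocabularies, vectors, and function name are illustrative assumptions, not the actual AIISC implementation; a real system would use pretrained word vectors and KG embeddings learned with a method such as TransE.

```python
import numpy as np

# Hypothetical corpus-trained word embeddings (toy 3-dim vectors;
# in practice these would come from a pretrained language model).
word_emb = {
    "aspirin": np.array([0.2, 0.7, 0.1]),
    "headache": np.array([0.6, 0.1, 0.3]),
}

# Hypothetical knowledge-graph entity embeddings (toy 2-dim vectors;
# in practice learned from a curated Knowledge Graph).
kg_emb = {
    "aspirin": np.array([0.9, 0.0]),   # linked KG entity: a Drug
    "headache": np.array([0.1, 0.8]),  # linked KG entity: a Symptom
}

def shallow_infuse(token: str) -> np.ndarray:
    """Shallow knowledge infusion: concatenate the data-driven vector
    with the knowledge-derived vector for the same token, producing a
    single representation that carries both signals."""
    return np.concatenate([word_emb[token], kg_emb[token]])

# The infused vector has 3 corpus dimensions followed by 2 knowledge dimensions.
vec = shallow_infuse("aspirin")
```

Because the knowledge enters only at the input representation, no change to the network architecture is needed, which is what makes this form of infusion computationally cheap compared with semi-deep or deep infusion.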

A key area of our current focus is AI safety. The concern for safety is well justified: we are aware of generative AI’s misuse to supercharge the creation and spread of fake news, disinformation, and toxicity, which has demonstrably harmed human, community, and societal interests. The difficulty humans have in distinguishing an AI from a human adds to the challenge. The safety concern has led thousands of concerned AI scientists and technology leaders to call for a six-month pause on advancing AI. Geoffrey Hinton resigned from Google so that he could more freely express his concern regarding AI’s potential negative impact on humanity. My team is pursuing strategies to create guardrails in various neurosymbolic AI techniques to support AI safety, using process knowledge to incorporate and respect the policies, regulations, and guidelines that humans are expected to follow. For example, in the virtual agents we are building for various health conditions, we can constrain their behavior and operations using the same clinical practice guidelines the doctors in that specific medical practice are expected to follow. And for a mental health virtual agent, we are developing guardrails that constrain natural language generation to avoid terms that could negatively impact a mental health patient, and that add or adjust empathy so the interactions are more sensitive and supportive of the patient’s current condition.
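A drastically simplified sketch of the kind of guardrail described above: a post-generation check that rejects output containing disallowed terms and optionally prepends an empathetic framing. Everything here is a hypothetical illustration; the actual approach uses structured process knowledge (e.g., clinical practice guidelines), not hard-coded word lists or a fixed prefix.

```python
# Hypothetical term list; a real guardrail would derive constraints from
# clinical practice guidelines and richer process-knowledge structures.
BLOCKED_TERMS = {"hopeless", "worthless"}

# Hypothetical empathetic framing added when the patient's state calls for it.
EMPATHY_PREFIX = "I hear you, and I'm glad you shared that. "

def apply_guardrail(generated: str, needs_empathy: bool = False) -> str:
    """Constrain a generated response before it reaches the patient:
    block text containing harmful terms, and optionally adjust empathy."""
    lowered = generated.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # Reject and substitute a safe, supportive fallback response.
        return "Let's focus on what support could help you right now."
    # Optionally make the response more sensitive to the patient's condition.
    return (EMPATHY_PREFIX + generated) if needs_empathy else generated
```

The design point is that the guardrail sits outside the generative model, so the same constraint logic can be swapped in for different medical practices by changing the knowledge it is driven by, without retraining the model.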


More About Amit Sheth

Prof. Amit Sheth (Home Page, LinkedIn) is an educator, researcher, and entrepreneur. He is the founding director of the university-wide AI Institute (AIISC) and the NCR Chair & Professor of Computer Science & Engineering at the University of South Carolina. He is a Fellow of IEEE, AAAI, AAAS, and ACM. His awards include the IEEE TCSVC Research Innovation Award, the Ohio State University Franklin College Alumni Research Excellence Award, and the Ohio Faculty Commercialization Award (runner-up). Key areas of his R&D contributions include federated databases, semantic information integration, distributed and adaptive workflows, and the Semantic Web, including ontology- or knowledge graph-enhanced computing, semantic web services, the semantic sensor web, semantic social networking, and, in recent years, knowledge-infused learning for neuro-symbolic AI. He has (co-)founded four companies, three of them by licensing his academic research: the first Semantic Search company in 1999, which pioneered technology similar to that found in Knowledge Graph-driven Google Semantic Search around 2013; ezDI, which developed knowledge-infused clinical NLP/NLU; and Cognovi Labs, at the intersection of emotion and AI. He is particularly proud of the exceptional success of his more than 45 Ph.D. advisees and postdocs in academia, industry research, and as entrepreneurs.