Nearly 20 years after the first workshop on Agent Theories, Architectures, and Languages (ATAL’94) at ECAI’94, which many regard as the starting point of intensive agent systems research, we asked some of the most prominent and active researchers in the field to share their views on a few fundamental questions about software agents.
The responses from Costin Badica, Lars Braubach, Giancarlo Fortino, and Giovanni Rimassa provide quite a composite view of the status and perspectives of research on agents and multi-agent systems, as well as on their future applications.
Costin Badica is a professor in the Computers and Information Technology Department at the University of Craiova, Romania. He has a PhD in control systems from the University of Craiova. His research interests lie at the intersection of artificial intelligence, distributed systems, and software engineering. He has authored more than 100 publications, including journal articles, book chapters, and conference papers; guest-edited more than 10 journal special issues; and edited several books. Contact him at firstname.lastname@example.org.
Lars Braubach is a senior researcher and project leader in the Distributed Systems Group, University of Hamburg. He has a PhD in computer science from the University of Hamburg. His research aims at enhancing the software technical means for developing distributed applications and targets technology transfer from research to industry. He is co-inventor of the active components approach, which brings together agents with components and services characteristics, and he has been one of the Jadex agent platform’s core architects since 2003. He has supported several technology transfer projects with industrial partners such as Daimler and Uniique AG and published more than 80 papers at international workshops, conferences, and journals. Contact him at email@example.com.
Giancarlo Fortino is an associate professor of computer engineering in the Department of Informatics, Modeling, Electronics, and Systems (DIMES), University of Calabria (Unical), Rende, Italy. He has a PhD in computer engineering from Unical. His research interests include distributed computing, wireless sensor networks, software agents, cloud computing, and multimedia networks. He has authored more than 200 publications in journals, conferences, and books, and currently serves on the editorial boards of the Journal of Network and Computer Applications, Engineering Applications of Artificial Intelligence, Information Fusion, and Multiagent and Grid Systems. He is cofounder and CEO of SenSysCal, a spin-off of Unical developing innovative sensor-based systems for e-health and domotics. He is an IEEE Senior Member. Contact him at firstname.lastname@example.org.
Giovanni Rimassa is a product manager at Whitestein Technologies. He has a PhD in information engineering from the Università degli Studi di Parma. His professional interests include intelligent business process management, agent middleware, product innovation, and technology transfer. Active on the agent scene since 1997 with his seminal work on the JADE platform, he has published more than 60 papers in journals and conferences. He is a member of IEEE and the IEEE Computer Society. Contact him at email@example.com.
- Where is research on agent systems today and where is it heading? What are the largest successes and failures to date, and what are the most important lessons learned?
- Which are the most successful applications of software agents in the real world? Which are the next application areas for them, and are humans ready to interact with software agents in their everyday lives?
- What is your perspective on agents standards? Is the current status of FIPA as an IEEE body satisfactory, or do we need something else?
- What’s still missing in the area of agent platforms, languages, and tools?
Question 1: Where is research on agent systems today and where is it heading? What are the largest successes and failures to date, and what are the most important lessons learned?
Given my professional trajectory in these 20 years, during which I moved from academic to industrial research, and then to product management and innovation, I’m not going into a detailed assessment of agent systems research at large, which can be done much more authoritatively by others. I’d rather focus on issues such as the general impact of the research in the field, or technology transfer and eventual adoption in the large arena of software construction.
From my specific observation point, research in multiagent systems has clearly become a pretty mature field. After the boom of the 90s, the discipline appears to have settled. It’s probably not as convenient today to choose a PhD topic on general agent principles, nor is it easy to get, say, EU funding for a research project on agent technology infrastructure or open systems testbeds. Given this level of maturity, I expect future development in agent systems research will see a continuation of theoretical research, applying mathematics and fundamental computer science to schematic problems related to multiagent systems, but the more applied and engineering-centric areas will have to get more specialized and domain-centric. An example, inspired by the recent world situation, is the application of agent-based simulation for economic and financial risk assessment and decision making. Moreover, with the recent scandals of the US National Security Agency (NSA) global spying, people might start remembering that the Internet was supposed to be a peer-to-peer, decentralized system rather than a huge pyramid scheme in which users submit their data — and now even computation — to a small number of enormous IT corporations of dubious accountability. It is therefore possible, and even likely, that core computing topics on networking (for true peer-to-peer) and programming languages (for easier concurrent programming on multicore processors) will get even closer to the ideas of agent systems for very pragmatic reasons.
I can’t say that software agent research has been completely successful, and in the past few years I’ve even heard multiple statements about the ultimate failure of multiagent systems — although usually without clear motivations (the funniest case was from a keynote speaker in a conference on autonomic computing). I believe the result is actually mixed and requires some elaboration. Agent technology’s biggest successes are in some of its central concepts and how they’ve anticipated and in some cases even driven mainstream computing’s evolution. If we compare the typical IT landscape today with the way it looked in 1994, we find many traits that researchers were advocating in multiagent systems back then are now commonplace:
- Asynchronous message-based communication;
- Complex, structured schemas for message payloads;
- Ubiquitous concurrency;
- Situated mobile devices with a host of context-sensitive information;
- Mobile code; and
- Increased use of social sciences ideas to design software (organization models, social networks, collaborative problem solving, gamification).
In the area in which I operate — that is, business process management software for enterprises in a variety of industries — the evolution went from traditional middleware views to human-centric business process management (in which BPM suites provide artifacts to improve cooperation among human agents) to the current trends that recognize and emphasize that BPM software must bring some intelligence and adaptivity of its own to support empowered knowledge workers. The envisaged result is nothing less than a multiagent system in which some human and some software agents cooperate to bring business processes forward.
Perhaps the most interesting failure point is that agent technology never became the “next OOP” (in other words, the dominant programming style). I find it particularly ironic that many now call functional programming to the rescue when pointing out the limits of object-oriented approaches. Agent systems — taking their cue from the actor programming model — should be the obvious step forward from OOP in concurrent and distributed environments (basic “design by contract” is invalidated when multiple execution threads are involved).
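The actor-style messaging the author refers to is easy to illustrate. The sketch below is a hypothetical, minimal Python example (the `Actor` and `Counter` classes are invented for illustration, not taken from any agent platform): each actor owns a private mailbox drained by a single thread, so `send` is fire-and-forget and the message handler never needs locks.

```python
import queue
import threading

class Actor:
    """Minimal actor: a private mailbox drained by one thread,
    so receive() never runs concurrently with itself."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        # Asynchronous: enqueue and return immediately, like an ACL message.
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # poison pill ends the actor's life
                break
            self.receive(msg)

    def receive(self, msg):
        raise NotImplementedError

    def stop(self):
        # Enqueue the poison pill and wait for all prior messages to drain.
        self.mailbox.put(None)
        self._thread.join()

class Counter(Actor):
    """Example actor: accumulates increments without any locking,
    because only the mailbox thread ever touches self.count."""
    def __init__(self):
        self.count = 0
        super().__init__()

    def receive(self, msg):
        self.count += msg

counter = Counter()
for _ in range(1000):
    counter.send(1)
counter.stop()
print(counter.count)  # 1000
```

Contrast this with a plain method call: the sender never blocks on the receiver, and the receiver processes messages strictly one at a time, which is exactly the property that makes design-by-contract reasoning tractable again under concurrency.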
In my view, two main factors have prevented agent technology from becoming the accepted mainstream evolution of OOP. First, the research community wasn’t particularly programmer-centric: mainly interested in algorithms, protocols, and formal frameworks, it concentrated its software engineering efforts on methodologies and middleware. In comparison with the OOP community, not much was done in the area of programming languages and development tools. That said, the situation has partially changed, with a dedicated subcommunity now striving to establish software agents as a programming paradigm.
Second, and perhaps more importantly, there has been a significant impairment in technology transfer toward industry practitioners and casual programmers. Even today, most people familiar with multiagent systems have been involved with the academic research or (for younger people) have taken university courses on the subject. Compare this with object orientation, in which multiple tiers of people and institutions relay, rehash, and sometimes distort the core research’s fundamental results. A programmer applying the Factory Method pattern, for example, often won’t even be aware of the original entry in the Gang of Four book (E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1994). I don’t feel the same can be said of, for example, the Contract Net Protocol.
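For readers who haven’t met it, the Contract Net Protocol allocates a task by a round of bidding: an initiator issues a call for proposals, participants reply with bids or refusals, and the initiator awards the contract to the best bid. A deliberately simplified, synchronous Python sketch follows; the participant names and bid functions are invented for illustration, and the real FIPA version adds deadlines, refuse and reject-proposal messages, and failure/inform replies.

```python
def contract_net(task, participants):
    """One round of a greatly simplified Contract Net Protocol:
    cfp -> propose/refuse -> accept-proposal for the cheapest bid."""
    # 1. Call for proposals: ask every participant to bid on the task.
    proposals = {name: bid(task) for name, bid in participants.items()}
    # 2. Drop refusals (None means the participant declined to bid).
    proposals = {n: p for n, p in proposals.items() if p is not None}
    if not proposals:
        return None  # no bidder available; the real protocol would retry
    # 3. Award the contract to the lowest-cost proposal.
    winner = min(proposals, key=proposals.get)
    return winner, proposals[winner]

# Hypothetical bidders: each maps a task to a cost, or declines.
participants = {
    "truck-a": lambda task: 12.0,                             # flat cost
    "truck-b": lambda task: 8.5 if task == "deliver" else None,
    "truck-c": lambda task: None,                             # busy, refuses
}

print(contract_net("deliver", participants))  # ('truck-b', 8.5)
```

The point of the pattern is that the initiator needs no prior knowledge of who can do the task or at what cost; that information is elicited by the protocol itself.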
Most current research is spread into the areas of formal modeling and logic for multiagent systems, game theory (with related topics, such as mechanism design), and learning. There is also interest in adjacent areas, such as robotics and simulation.
I think the most promising areas are game theory and learning. Formal logic is nice, but its real applications are limited. Robotics and simulation also attract application interest.
Agent-oriented programming is probably very far from what was initially expected, and agent-oriented software engineering is also quite far from mainstream software engineering.
It’s important to evaluate results from autonomous agents (AA) and multiagent systems (MAS) research in the context of sound computer science. We shouldn’t treat them like something “exotic” but rather should evaluate them according to computer science rules. Agents should not be treated only as a new technological paradigm but also as a method for analysis and design of complex systems. In the end, a system designed using the agent paradigm can be implemented with state-of-the-art technologies, not necessarily agent middleware or platforms. Nevertheless, agent platforms can be very useful for simulation and prototyping.
Researchers are focused mainly on the same issues they were 10 years ago: providing methods and algorithms for dealing with (natural and artificial) complex systems modeling and analysis from different perspectives. Of course, agent application domains have changed a bit as novel technology and application domains have appeared.
It’s difficult to say where the field is heading. Indeed, I think there’s no single direction. Researchers are focusing on the same important theoretical contexts (formal languages, engineering methodologies, negotiation algorithms, game theory, simulation-driven emergent-behavior analysis, and so on), and sometimes they move to new application domains — the Internet of Things is a current example — to address specific issues by exploiting “agents.”
After 20 years, we still don’t have a unified agent model — a fact that has reduced the agent-oriented approach’s appeal, specifically in non-academic contexts. On the other hand, a significant amount of research (and some development) has been carried out, and many algorithms and methods are actually available for use in a wide range of (even non-agent-oriented) application domains. Moreover, the agent paradigm’s effectiveness for modeling distributed and complex systems is perhaps its greatest success.
The agent paradigm is effective in dealing with open, dynamic, distributed complex systems. However, it isn’t the “killer paradigm,” even though researchers have successfully proposed methods, algorithms, and systems at different levels of abstraction. Some, such as the Java Agent DEvelopment Framework (JADE), have even been used in real industrial applications rather than just in academic contexts.
Research on agent systems comes from many different directions. Hence the many different definitions of the term agent that have emerged since the beginning of agent research, focusing, for example, on AI, software engineering, personal assistants, and mentalistic notions (human properties such as beliefs and goals used to describe software agents). Moreover, agent research includes many different topic areas. This broad spectrum of research is one of multiagent systems’ greatest strengths, but it’s also a fundamental problem. It is a strength because the supporting community is broad, and progress in many different subfields is achieved independently. On the other hand, that breadth naturally leads to heterogeneity in the field, including artifacts such as programming languages and tools, all of which makes the results more difficult to assess. In my perception, the research focus has shifted considerably since the beginning.
In the early years, agent-based software engineering was a very active area of research (inspired, for example, by Yoav Shoham’s seminal article about agent-oriented programming), and researchers proposed many agent programming languages, tools, and methodologies. But as SE foundations settled and implications in practice remained low, the relative importance in the research community diminished. Nowadays, I perceive an emphasis on formal foundations and multiagent planning, learning, and coordination techniques (looking, for example, at the accepted papers from the 12th International Conference on Autonomous Agents and Multiagent Systems [AAMAS 2013] as an indicator).
An important success of multiagent systems is a shift in thinking about how to look at complex systems. It encourages decentralized architectures and represents the first dedicated paradigm for distributed systems. One big failure of agent research is the inability to establish agents as an accepted SE development paradigm. Despite substantial initial efforts, the technology has largely failed to reach software people outside the agent community itself. In my opinion, that failure is largely due to the relatively large conceptual distance between agent-based software engineering and well-established paradigms with object and component orientations.
This distance manifests itself at different levels, from single-agent programming languages to agent organization and coordination techniques. But my main criticism here concerns the assumption that agents should communicate only via asynchronous speech-act-based messages. This renders agent technology unacceptable for many practitioners who require explicit system interfaces and method signatures. Here, the conceptual requirement of keeping an agent autonomous is misinterpreted in a too-technical way. Several recognized researchers have noticed the problem of message-based communication, and work on alternatives has led to several different proposals, including commitment- and goal-based interactions, artifacts, and active components.
In terms of lessons learned, my answer comes from a very personal software engineering perspective, having actively developed agent applications and a platform since 2003. For me, an important lesson is that agent technology needs to be simplified and brought nearer to object orientation and SOA to become usable in practice. This means that researchers should aim to reduce heterogeneity and complexity rather than steadily increasing them by inventing new approaches at all levels. Currently, agent software is not well-suited for rapid prototyping of complex systems because it involves too many preparatory activities, such as protocol and ontology design.
Question 2: Which are the most successful applications of software agents in the real world? Which are the next application areas for them, and are humans ready to interact with software agents in their everyday lives?
In contrast to the 2005 agent technology roadmap forecast, which predicted agent technology’s slow but continuously improving deployment (reaching the mainstream starting in 2010), actual deployments haven’t increased visibly, and only a few real-world applications have been installed. The good news is that many of the original agent companies, including Intelligent Automation (Cybele), Whitestein (Living Systems), TILAB (JADE), AOS (JACK), Cougaar Software, and SOAR Tech, are still successfully applying agent technology in specialized market segments such as telecommunications, logistics, workflows, autonomous vehicle control, satellite control, and intelligent support systems.
The Procedural Reasoning System (PRS) — the origin of BDI systems — was developed at NASA and successfully applied to several space applications, including a fault-detection system for the space shuttle’s reaction-control system. NASA’s mission-control software was also implemented in an agent-oriented manner using the Brahms framework.
Simulation is another strong application area for multiagent systems. For example, Massive Software’s creation of movie scenes with artificial actors in The Lord of the Rings was an impressive showcase for agent-based simulation technology.
The use of the SOAR cognitive architecture in (military) training applications is also noteworthy as it shows how agents can efficiently simulate human behavior in real-world scenarios. In the military domain, we find several further uses of agent technology. One exceptional example is the very complex logistics management software in the DARPA-funded UltraLog project, which led to the development of the Cougaar agent platform.
Autonomic and cloud computing, big data scenarios, and robotics are very promising areas. I believe that intelligent cloud infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) control software could greatly benefit from agent technology — for fully decentralized coordination, for example. Despite this opportunity, however, big cloud providers such as Google and Amazon are using alternatives to agents in their solutions.
Big data scenarios, which typically involve distributed data processing, offer other opportunities. In robotics, problems increasingly shift from single-robot to multi-robot perspectives. Here, multiagent technology has a good chance to be fruitfully combined with robot operating system software.
Interaction with autonomous systems will become increasingly well-established. For example, several web sites already have avatars that imitate human advisers. Of course, interaction with virtual characters doesn’t necessarily mean that agent technology is used behind the scenes.
Is there any successful agent-based (in the strict sense) application? Perhaps we can point to web crawlers, which aren’t software agents in the strict sense, or JADE, which is indeed a platform for building applications. Many agent-based applications exist in various domains, but it’s tough to identify “successful” applications. The next application areas for agents are likely to be smart cities and the Internet of Things. In terms of interacting with software agents, I think users will adjust to them if they are integrated into interactive GUIs. People already use Facebook, Twitter, and many other applications. It depends on usability and usage frequency.
Simulation and security are likely areas for agent applications. However, people are probably not yet ready to interact with agents in daily life. We should also look into what traditional human-computer interface (HCI) research calls for and try to understand which results would fit or could be applied to human-agent interaction.
Software agents have been applied in practice across many domains. From my personal experience, I can point to Whitestein’s logistics solution, which DHL has deployed in 27 countries; I believe it is the largest multiagent system ever deployed for production use in terms of number of users. We are also seeing a very good response to our agent-oriented business process management (BPM) suite in the financial services and manufacturing industries.
Although our suite is already successful, the field of BPM software as a whole has yet to explicitly take on agent ideas. That said, Gartner changed its Magic Quadrant for BPM suites last year, introducing the concept of the intelligent BPM suite (iBPMS for short), and it’s news as of September 2013 that Jim Sinur (who drove the iBPMS ideas at Gartner), Jim Odell (former FIPA chair), and others have a book on the next wave of BPM, which is, in their words, going to be agent-oriented BPM. I’d put business process management at the front of the next areas of agent technology application. Other areas will come from agent-based simulation, driven by the emphasis on big data and quick, adaptive decision making: in the past two to three years, we have received requests from energy, aerospace, and financial companies, all looking for complex decision-support systems with significant simulation capabilities.
It’s very difficult to assess how ready users are for a technology. On the one hand, if the technology brings genuine innovation, the users can’t be ready, by definition. Nonetheless, the ways people concretely employ new tools will provide feedback that, in turn, shapes those tools: misuse and abuse are hallmarks of success. I am cautiously optimistic, for I see a complex scenario with many possibilities, in which the political and social trends rather than the core technology traits will make the greatest difference. Interestingly, this year’s Gartner Emerging Technologies Hype Cycle concentrates on the evolving human-machine relationship.
Question 3: What is your perspective on agents standards? Is the current status of FIPA as an IEEE body satisfactory, or do we need something else?
Standards for agent technology, in general, are as useful as those in other parts of computer science and IT. Their effectiveness depends on much more than the quality of the technical specifications, including support consortia, network effects, and ease of blending with other predominant practices and technologies. For middleware, this last point is even more important.
Most specifications from the Foundation for Intelligent Physical Agents (FIPA) made sense in 2000, when mainstream IT wasn’t really grasping the concepts demanded by an agent infrastructure; nowadays, the world has not only moved on but progressed in exactly the direction FIPA advocated. Therefore, it would be pointless not only to keep using the old standards as they are, but even to update them merely to create our own flavor of what everybody else is already doing anyway.
Hosting FIPA within IEEE is a very good move from the organization and credibility point of view, but I’d like FIPA to recast its original mission (interoperability) in a world in which the basic services of agent platforms and environments are already a given. The first step should be a gap analysis: what isn’t there yet that would make multiagent systems more convenient to realize in concrete applications? What elements of this list would benefit from third-party standardization? On the other hand, the latest and most active FIPA working group I recall was focused on methodology and design process, which seems to be putting the cart before the horse a little bit.
We don’t necessarily have to follow standards such as FIPA; they were popular in the “interoperability era.” I think FIPA is OK, but I don’t think more standards are needed.
We’ve got, let’s say, “standards,” but agents aren’t strongly immersed in the commercial or industry world — so, do we really need standards in this case? FIPA has been frozen since the 2005 decision in Budapest to move it into IEEE. I thought — and still think — that FIPA, and agent technology itself, were not mature enough to be moved to IEEE. I think we need to restart the organization (perhaps with a different name, although FIPA is already known [only] in the agent community) as an (almost) voluntary effort if we’d like to create critical mass on agent standardization — but maybe nobody cares!
Standards are always important to foster acceptance of a technology and free customers from potential vendor lock-in. With agent technologies, the FIPA standards primarily address agent-to-agent communication and ensure that different agent platforms can communicate. In 2013, the world simply doesn’t have that problem, as only a few agent applications are deployed and no worldwide network of deployed agent platforms exists in the spirit of AgentCities. So, these standards are mostly irrelevant in practice. Furthermore, web services technology has successfully achieved interoperability in distributed systems. In this respect, allowing agent platforms to seamlessly externalize functionalities as web services and to use existing web services to integrate with other systems is of great practical importance. This includes the standard Web Services Description Language (WSDL) as well as the growing number of RESTful web services, as big players such as Google and Yahoo offer more APIs via REST. In my view, a new initiative for pure agent standards won’t contribute much to the technology’s adoption because it addresses an unimportant problem. Instead, we primarily need standards that propose integration with established technologies.
Question 4: What’s still missing in the area of agent platforms, languages, and tools?
We have a lot of stuff. I think we need to apply and evaluate them. Of course, in the industry world we need to focus on a (limited) set of reference models and related CASE tools (including a reference methodology). I think this is key for wider acceptance of agent technology.
I think several things are missing, starting with agent programming languages that tightly integrate with mainstream object-oriented languages. We should also have industry-grade distributed infrastructures in the grid and cloud areas that employ agent technologies. Especially in the area of platform-as-a-service (PaaS), new programming approaches are needed to develop distributed applications; here, multiagent approaches could help fill the gap. Related to this aspect, we still lack comprehensive debugging and testing approaches and tools for distributed systems.
We need to link to more traditional CS research and methods and to avoid unnecessarily reinventing things under the agent umbrella — as AOSE or AOP sometimes try to do.
Agent platforms should be redesigned with the additional constraint of not polluting the standard IT infrastructure: shiny in their advanced features, but very boring in terms of mechanisms. They should exhibit no persistence except through relational database management systems or NoSQL databases, and perform no message passing except via Java Message Service, SOAP, or RESTful web services. For languages and tools, the current wave of research trying to bring together actor and agent programming languages should become stronger, with the goal of producing one or more complete development environments that are really suited for mass adoption. My personal wish would be an actor/agent language with a modern type system that differentiates among agents, artifacts/services, and knowledge/data.
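The type-level distinction wished for here can be approximated today with tagged message types in an existing language. The following is a hypothetical Python sketch (all class names are invented for illustration): agents, service calls, and plain knowledge each get their own type, so a static checker such as mypy can reject, say, passing a bare fact where a service invocation is expected.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class AgentRef:
    """An agent: something you can only send messages to, never call into."""
    name: str

@dataclass(frozen=True)
class ServiceCall:
    """An artifact/service interaction: request-reply, with an explicit operation."""
    operation: str
    args: tuple

@dataclass(frozen=True)
class Fact:
    """Knowledge/data: inert and freely copyable, carries no behavior."""
    subject: str
    value: object

Message = Union[ServiceCall, Fact]

def deliver(to: AgentRef, msg: Message) -> str:
    """Dispatch on the message kind; anything untyped fails loudly."""
    if isinstance(msg, ServiceCall):
        return f"{to.name} <- call {msg.operation}{msg.args}"
    if isinstance(msg, Fact):
        return f"{to.name} <- assert {msg.subject}={msg.value}"
    raise TypeError(f"not a Message: {msg!r}")

print(deliver(AgentRef("scheduler"), ServiceCall("plan", ("route-7",))))
print(deliver(AgentRef("scheduler"), Fact("truck-b", "idle")))
```

This is, of course, only a library-level imitation of what a dedicated language could enforce natively; the point is that the agent/artifact/knowledge distinction is expressible with ordinary type machinery.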