Issue No.05 - Sept.-Oct. (2012 vol.27)
Published by the IEEE Computer Society
Fei-Yue Wang , State Key Laboratory of Management and Control for Complex Systems
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MIS.2012.91
The flood of big data in cyberspace will require immediate actions from the AI and intelligent systems community to address how we manage knowledge. Besides new methods and systems, we need a total knowledge-management approach that will require a new perspective on AI. We need "Merton's systems," in which machine intelligence and human intelligence work in tandem. This should become a normal mode of operation for the next generation of AI and intelligent systems.
It seems that everyone has been talking about "big data" recently, speculating on the future of AI and intelligent systems. Big data has been characterized in many ways, from Doug Laney's original 2001 "3Vs" model to the various recent extended "4Vs" descriptions. Laney's three Vs are volume, velocity, and variety; the fourth V could be variability, virtual, or value, depending on whom you ask.
To most, those Vs indicate "bigness"—big size, fast movement, many types, and significant impact. To me, the "bigness" of big data is derived from its "smallness," or more precisely, from its inclusion and use of data stemming from all degrees of volume, velocity, variety, value, variability, and so on, whether virtual or real. In particular, big data implies that long-tail effects on personal living and business operations will become a normal mode in the future.
But what does big data really mean in the era of cyberspace? Just two things, elegantly stated by two founding fathers and pioneers of modern management sciences long before the introduction of the big data concept. The first is from W. Edwards Deming: "In God we trust; all others must bring data." The second is from Peter F. Drucker: "The best way to predict the future is to create it."
Before the existence of big data and cyberspace, we could only treat their words as maxims. However, now, we must consider them as achievable technical criteria for our work. It is from these two maxims that we can find an entry point for AI and intelligent systems to embrace the idea of big data in the cyberspace era.
The flow—or flood—of big data in cyberspace will lead to knowledge revolutions in all sectors. In particular, we need immediate actions from the AI and intelligent systems community to face and manage potential consequences from revolutions in
• knowledge generation,
• knowledge dissemination,
• knowledge acquisition,
• knowledge utilization, and
• knowledge representation, evaluation, and implementation.
Web 2.0, the Semantic Web, and Web science are just the initial stages of those revolutions. Specific efforts have been made in the VIVO and iPlant projects in the US, the LiquidPub and Pl@ntNet projects in Europe, and the CAN, AI 3.0, cPlants, and PlantWorld projects in China. However, more work and innovation are still needed before we can experience their real and full effect.
In addition to new methods and systems, we must take a total knowledge-management approach to effectively handle the scale, speed, and impact of a transition from our current practices to cyberspace-based data-driven activities. This would require us to reevaluate AI from a new perspective.
From Newton to Merton
The field of AI was founded on the claim that a central human characteristic, intelligence, can be so precisely described that it can be simulated by a machine. John McCarthy, who coined the term in 1955, defined AI as "the science and engineering of making intelligent machines." Over the last five decades, in spite of the tremendous progress AI has made as its own scientific field, its focus is still on the intelligence of machines. The problem lies with a basic question: what machines?
By and large, they are Newton's machines, made and governed by Newton's laws. Humans are their builders and manipulators but are not integral parts. Therein lies the major reason behind the anxiety about machines taking over in the future as AI progresses.
By contrast, in the coming knowledge revolutions, we must deal with a new type of machine, one in which humans are an integral part. Webs are typical examples of such machines. However, because we must take human and social behaviors into account for these "generalized machines," Newton's laws are no longer adequate for describing, manipulating, and controlling them. We need Merton's laws, such as Merton's self-fulfilling prophecy, as well as Simon's bounded rationality and Heiner's theory of predictable behaviors.
I would like to call these generalized machines Merton's systems, in which the human must be included in the loop and we deal with the art or science of the possible in a computational reality. It is time to move from Newton's machines to Merton's systems for the further advancement of intelligence research and intelligent systems.
In Merton's systems, machine intelligence and human intelligence will work in tandem, think together, and run parallel to each other. This should become a normal mode of operation for the next generation of AI and intelligent systems.
Toward Analytics Intelligence
Driven by data and guided by Merton's laws, Merton's systems can be a new platform for the research and development of intelligence based on big data and cyberspace, making Deming's and Drucker's maxims a reality for the operation of future intelligent systems. We can already see movement in this direction in industry, as many major companies have moved from business intelligence to business analytics. In the academic world, the leading professional management organization, the Institute for Operations Research and the Management Sciences (INFORMS), is championing the transformation of operations and management practices into analytics, and a few universities are proposing (and some even implementing) new academic degrees in analytics to tap into the demand for graduates who can use data to solve business problems.
INFORMS defines analytics as "the scientific process of transforming data into insight for making better decisions." I have some reservations about this definition, because it deals with only the abstraction process in analytics, whereas I believe that the reverse process (that is, the visualization process), "transforming insight into data for making better decisions," is equally or even more important and should be central to any analytics research and application. I actually find the Wikipedia definition of analytics, "the discovery and communication of meaningful patterns in data," better and more precise.
AI can and must play a major role in this shift toward analytics. The move from intelligence to analytics in the business world has so far ignored a major aspect of intelligence. The gathering or distribution of information, especially secret information, is only one side of intelligence; the other side is the capacity for learning, reasoning, understanding, and other similar forms of mental activity. We must make sure the use of intelligence in analytics addresses both sides, which is why we should integrate AI and analytics and move toward Analytics Intelligence—a field guided by Merton's laws, supported by Merton's systems, and implementing Deming's and Drucker's maxims for operations. My own suggestion for starting Analytics Intelligence is the ACP approach I have championed in the past: artificial societies for descriptive analytics, computational experiments for predictive analytics, and parallel execution for prescriptive analytics.
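To make the descriptive/predictive/prescriptive distinction concrete, here is a minimal toy sketch in Python. It is not an implementation of the ACP approach itself; the data, function names, and the capacity-ordering rule are all hypothetical illustrations of how the three tiers build on one another: describe what happened, predict what will happen, then prescribe an action.

```python
# Toy illustration of the three analytics tiers on a tiny synthetic series.
# All data and names here are hypothetical, not from the article.

data = [10, 12, 13, 15, 18, 21]  # e.g., six weeks of observed demand

def descriptive(series):
    """Descriptive analytics: summarize what happened."""
    n = len(series)
    return {"n": n, "mean": sum(series) / n,
            "min": min(series), "max": max(series)}

def predictive(series):
    """Predictive analytics: fit a least-squares linear trend
    and forecast the next point."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return slope * n + intercept  # forecast for step n

def prescriptive(series, capacity_step=5):
    """Prescriptive analytics: turn the forecast into an action,
    here ordering capacity in fixed-size steps (a made-up policy)."""
    forecast = predictive(series)
    steps = -(-round(forecast) // capacity_step)  # ceiling division
    return {"forecast": round(forecast, 2),
            "order_units": steps * capacity_step}

print(descriptive(data))
print(prescriptive(data))
```

The point of the sketch is the dependency chain: the prescriptive tier consumes the predictive tier's output, which in turn summarizes the raw data, mirroring how artificial societies, computational experiments, and parallel execution stack in the ACP view.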
I believe that in the not-so-distant future, everyone will need a personal Analytics Intelligence portal for connection to and navigation in cyberspace. Google or Baidu will not be nearly adequate for the demands of future Web users, and without such a portal, users may find themselves submerged in the flood of big data.