July/August 2012 (Vol. 14, No. 4) pp. 6-10
1521-9615/12/$31.00 © 2012 IEEE

Published by the IEEE Computer Society
Books

Norman Chonacky reviews an introductory textbook that considers fundamental computer operations as well as the methods and practices needed to create, use, or manage programs intended for the computational sciences and engineering. George Luger reviews a book dealing with how the rational human mind might be understood and enhanced through computer-based representations and algorithms.

Programming from the Bottom Up
Norman Chonacky

Aaron R. Bradley, Programming for Engineers: A Foundational Approach to Learning C and Matlab, Springer-Verlag, 2011, ISBN: 978-3-642-23302-9, 235 pp.
This book was prepared for a one-semester introduction to computer programming for electrical and computer engineers. It was designed to provide future engineers and scientists with an understanding of fundamental computer operations as well as the methods and practices needed to create, use, or manage programs intended for the computational sciences and engineering.
I found this book interesting to review within the context of these needs because computation has become a third, essential approach—along with experiment and analytical theory—to understand the physical world and to efficiently engineer complex, useful systems and products. Given computation's importance in science and engineering practice, you might expect the text to jump to the latest computing tools as a start—and the book does move to the Matlab programming environment in its latter part. But it starts with C and, most exceptionally, devotes high priority and considerable space to fundamental computer operations—that is, "what's under the hood"—and basic programming best practices, such as program validation and verification, strategies for failure, and good programming habits.
Here are some examples of this book's topical priorities, arranged in order of appearance.
Memory First
Rather than starting with a program to write "Hello world" (included, but much later, in the I/O chapter), it starts with a stack representation of a C program snippet that does something of interest to scientists and engineers. It assigns values and adds them to one another. In the process, it introduces the concept of "garbage" as a pre-assignment condition of memory locations, and thus explains the need for initialization. It effectively uses a stack diagram to represent what's happening in memory at stages of the snippet's execution. This diagram explicitly shows both the memory addresses and their contents. It makes the important distinction between data and pointer values as memory content, which is important to lots of programs but essential to those written in C.
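A minimal C sketch in the spirit of that opening example (the names and values here are illustrative, not the author's) shows assignment, initialization, and the data-versus-pointer distinction:

```c
#include <stdio.h>

/* Values are assigned and added; a pointer holds an address, not data. */
int sum_two(void)
{
    int a;           /* before assignment, a holds "garbage" left in memory */
    int b;
    a = 3;           /* initialization replaces the garbage with a known value */
    b = 4;
    int *p = &a;     /* a pointer's value is a memory address, not data */
    return *p + b;   /* dereferencing p recovers the data at that address */
}
```

A stack diagram of this snippet would show the addresses of a, b, and p alongside their contents, with p's content being a's address.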
Learning by Exploring
Apparent from the start is the text's instructional style—to explain some facts (for example, memory content) and yet illustrate but defer explaining others (such as why memory addresses of successive integer variables differ by four). This explanatory restraint lends an exploratory air to the text, offering room to engage the imagination and opportunity to evoke curiosity. The book interleaves programming best practices with factual exposition. Exercises punctuate the material, encouraging readers to self-assess their understanding. This style fosters learning based upon investigation instead of teaching based upon authority. It's a natural way to learn and gradually construct holistic understanding, and the book follows this pattern throughout.
Learning from Mistakes
The text confronts the pervasive programming issue of bugs right up front, in Chapter 1, in a section archly titled "How to Crash Your Program." This introduction to debugging is notable in making a distinction between program verification and validation. As an example, it illustrates how a mistaken access to memory causes not a program crash but invalid results.
Basic Elements
The text presents the important program concept of function by introducing the structure of a stack frame, using the previously introduced stack-diagram notation to visualize a function's memory allocation. Faithful to the text's style, it employs explanatory restraint, hinting that the program element main is itself a function.
The book concludes its memory chapter by introducing binary representations of numerical values. In doing so it uses a detail illustrated earlier but not explained: integer variables are allocated four memory locations. The result: mystery resolved and curiosity rewarded!
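The resolved mystery can be checked directly in C (a sketch of mine, not the book's; the 4-byte int is typical of current platforms rather than guaranteed by the language):

```c
#include <stddef.h>

/* sizeof reports how many bytes an int occupies—typically 4 on modern
   desktop systems, which is why successive int variables in the book's
   stack diagrams sit at addresses that differ by 4. */
size_t int_width(void)
{
    return sizeof(int);
}

/* The binary representation can be probed a byte at a time: 258 is
   binary 1 0000 0010, so its lowest byte holds the value 2. */
unsigned low_byte(int v)
{
    return (unsigned char)v;
}
```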
The second chapter opens by introducing the most primitive control structure, the conditional if statement. In keeping with its style, the text presents this control concept in the context of a practice already offered as useful to good programming—input checking.
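A sketch of that pairing (an illustrative example of mine, not the book's): the conditional's first job is to guard against nonsensical input before any computation uses it.

```c
/* Input checking with the simplest control structure: reject an
   out-of-range percentage score before it's used. */
int valid_score(int score)
{
    if (score < 0 || score > 100)
        return 0;    /* invalid input: refuse it before computing */
    return 1;        /* safe to use */
}
```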
Moving on, via History…
The topics that I've mentioned are treated in just the first 36 pages of a book with more than 200 pages. However, by this point it has introduced a "basis set" of concepts that serves as a foundation for all that's to follow. The author marks this occasion by making a rather important and, I believe, exceptional point for a text of this level. It acknowledges that the mechanisms presented thus far are sufficient to compute anything "computable."
This assertion, related to computability theory, significantly places the specific concepts into the context of a larger body of historical knowledge pioneered by Alan Turing. This challenges the notion that computer programming is simply an artisan enterprise and supports the reality that it rests on both practical skill sets and a rigorous set of theoretical principles. I believe this is quite important for students to realize as they strive to build their own coherent understanding of the host of mechanisms and methods that follow. A few illustrations from the remaining chapters demonstrate the author's commitment to coherence and to his instructional style for enabling students to achieve it.
…to the Rest
A recursive control section occurs early in the conventional order of topics and here uses the previously introduced stack representation of stack frames to great effect. It gradually introduces new C syntax in the context of new algorithms, as their procedural demands require. Recursion is important for more than streamlining nested calculations. For example, it surfaces in the construction of linked lists and trees, which appear later.
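The stack-frame view of recursion can be sketched with the classic example (mine, not necessarily the book's): each call pushes a fresh frame holding its own copy of the argument.

```c
/* Each recursive call pushes a new stack frame with its own n, just as
   a stack diagram of nested frames would show; the deepest frame is
   the base case, and each frame above it waits on the frame below. */
unsigned long factorial(unsigned n)
{
    if (n <= 1)                   /* base case: the deepest frame returns */
        return 1UL;
    return n * factorial(n - 1);  /* recurse with a smaller problem */
}
```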
The debugging section, by its extent and priority, is a great asset to students, and its placement is consistent with the text's pattern of interleaving practice and context as a didactic method.
Dynamic allocation of memory was first described, within the context of functions, in the section on memory allocation on the Stack. Now a Heap section presents dynamic memory allocation by the program, not the system. This distinction between Stack and Heap is important, both because the Heap's programmatic allocation must be invoked explicitly and because, unlike on the Stack, de-allocation of memory is the responsibility of the programmer, not the system.
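A minimal sketch of that distinction (function and variable names are illustrative): Heap memory is requested explicitly with malloc, outlives the function that created it, and must be released explicitly by the caller.

```c
#include <stdlib.h>

/* An explicit request to the Heap. Unlike a local (Stack) array, the
   buffer survives this function's return—and the caller, not the
   system, is responsible for calling free() on it. */
double *make_buffer(size_t n)
{
    double *buf = malloc(n * sizeof *buf);
    if (buf != NULL)
        for (size_t i = 0; i < n; i++)
            buf[i] = 0.0;   /* Heap memory is also "garbage" until initialized */
    return buf;
}
```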
The section on abstract data types uses the important construct of a matrix as an example and the implementation of matrices as an exercise in memory allocation on the Heap. But it also expresses the author's commitment to introducing best practices within a context by making a case for program modularization. In particular, it introduces the C capability for specifying data declarations in a way that separates public from private interfaces to these data types.
The section on linked lists situates their construction within the context of recursion. It exploits visual rendering of the list construct in a manner analogous to the stack list used for memory in earlier sections. In my personal experience, linked lists epitomize a divide between computer scientists and computational scientists. The former understand the mechanism and have a collection of their own uses, while the latter might be less adept with the mechanism but have uses far different than those imagined by the former.
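A minimal sketch of that recursive framing (names are illustrative): the type itself is recursive—a node points to a smaller list—so the functions that traverse it naturally recurse on the tail.

```c
#include <stdlib.h>

/* A node points to a smaller list; NULL marks the empty list. */
struct node {
    int value;
    struct node *next;
};

/* Prepend a value; on allocation failure, leave the list unchanged. */
struct node *push(struct node *head, int v)
{
    struct node *n = malloc(sizeof *n);
    if (!n) return head;
    n->value = v;
    n->next = head;
    return n;
}

/* Recursive traversal mirrors the recursive structure of the type. */
int list_sum(const struct node *head)
{
    if (head == NULL)                        /* base case: empty list */
        return 0;
    return head->value + list_sum(head->next); /* recurse on the tail */
}
```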
For those of us who grew up when microprocessors were first invented, and who value how our understanding of processor and memory operations, framed in assembly language, enhances our ability to deal with programming in higher-level languages today, this book confirms the belief that fundamentals matter and can serve as a reference for our young programming protégés. As a former physics professor, I had similar reactions to the book's didactic use of explorations and "just-in-time learning," which I can attest enable students to construct their own coherent, deep understanding of science and technology.
Who will benefit from this book and how? For students who are inclined to use deep, concept-oriented thinking—and are motivated to use exploration to achieve it—using this book in a course taught in the author's manner should be productive. For those who are satisfied, or compelled, to use algorithmic—rule-oriented—thinking, the approach of this book could be problematic.
For an instructor steeped in convention, unwilling to explore new and somewhat unorthodox approaches, the benefits of using this book will be at best elusive. For a course to obtain the benefits of this text, both the instructor and the students should be amenable to its approach and interested in its goals.
Finally, this book could be a useful instructional tool for mature but "young" scientists or engineers to use in self-study, particularly if they've never had the benefits that we "old-timers" had in learning computation "under the hood."
Norman Chonacky is a research consultant to the Department of Applied Physics at Yale University. As a founder of the Partnership for Integration of Computation into Undergraduate Physics (PICUP), his current work is on assisting faculty to attain PICUP's (eponymous) goal nationally. Contact him at norman.chonacky@yale.edu.
Can Artificial Intelligence Improve Human Reasoning?
George F. Luger

Robert Kowalski, Computational Logic and Human Thinking: How to Be Artificially Intelligent, Cambridge University Press, 2011, ISBN: 9780521123365, 310 pp.
For readers desiring to know how the rational human mind might be understood and enhanced through computer-based representations and algorithms, Computational Logic and Human Thinking is a must-read. The subtitle, How to Be Artificially Intelligent, offers a further challenge that suggests that this book might also make an important contribution to understanding how humans think, justify their thoughts, and present coherent arguments.
Kowalski's goal is to "attempt to show that the practical benefits of computational logic are not limited to Mathematics and Artificial Intelligence, but can also be enjoyed by ordinary people in everyday life, without the use of mathematical notation" (p. 2). Logic, Kowalski suggests, in the tradition of Aristotle and Boole, offers a formalization for human laws of thought. Aristotle developed his theory of logic and reasoning in the Organon, or The Instrument, a term used by commentators to collect his various writings on logical forms. But Aristotle's logical forms also appeared in his Rhetoric—for example, in Books I–II, where he suggested that their use is valuable for developing clear thought processes and presenting convincing arguments. Kowalski, following Aristotle, has also made computational logic the unifying theme for developing critical human skills for thinking and reasoning: logic "focuses on the formulation of normative theories, which prescribe how people ought to think. Viewed in this way, Computational Logic can be regarded as a formalization of the language of human thought" (p. 2).
The Details
The book is written in an easy-flowing and example-driven style. Chapter 1, for example, takes the set of instructions that make up an "emergency notice" on the London underground subway system and translates these into the conditional language of logic, including the if-then, and, or, and not connectives. In Chapter 2, Kowalski addresses arguments proposed by cognitive psychologists suggesting that humans aren't logic-based reasoning systems. In the following chapters, Kowalski uses another example—Aesop's fable of the fox and the crow—to introduce the notions of backward, or goal-driven, and forward, or data-driven, reasoning. Using these different reasoning modes, the human agent produces a search process, moving across a set of possible world states in the process of trying to build an argument or accomplish a task.
Because the reasoning scheme (or controlling process) of choice for Kowalski's computational logic is a form of resolution, Chapter 5 makes an argument for the use of negation as failure in the context of human reasoning and decision making—so basically, the failure to find a fact true can lead to the justification of it being false. This is a reasonable choice, I feel, when an agent has knowledge of all the relevant facts (the closed-world assumption). However, the argument is unconvincing when, as usually happens in human reasoning, such an assumption isn't warranted.
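The mechanism can be sketched in a few lines of C (a toy of mine, not Kowalski's notation): under the closed-world assumption, a query that cannot be proved from the stored facts is simply treated as false.

```c
#include <string.h>

/* A toy closed-world fact base; the facts are illustrative. */
static const char *facts[] = { "fox(cunning)", "crow(has_cheese)" };
static const int n_facts = 2;

/* A query is provable only if it matches a stored fact. */
int provable(const char *query)
{
    for (int i = 0; i < n_facts; i++)
        if (strcmp(facts[i], query) == 0)
            return 1;
    return 0;
}

/* Negation as failure: not(q) succeeds exactly when proving q fails. */
int naf(const char *query)
{
    return !provable(query);
}
```

The weakness Luger notes is visible here: "fox(honest)" is judged false merely because it's absent, which is only warranted when the fact base really is complete.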
In the following chapters, Kowalski proposes a form of Emil Post's1 and Allen Newell and Herbert Simon's2 traditional production system as a cognitive controller, or architecture, that is able to apply goal- and data-driven reasoning schemes. This software architecture has been shown by cognitive psychologists to support many of the observable features of human problem-solving performance. As part of the decision-making component of production systems—the so-called conflict-resolution scheme—Kowalski relates his approach to the modern decision theory of Daniel Kahneman and Shane Frederick.3 Kowalski invokes the prisoner's dilemma problem (the classic conflict between individual self-interest and collective interests in various contexts), including strategies and possible solutions, to demonstrate the addition of decision theory to conflict resolution to control the production system architecture.
To complete his presentation, Kowalski describes other related aspects of human decision making, including the meaning of life, abductive inference,4 agents in an evolving world of goals and purposes, and meta-logic. In the book's final component, Kowalski—addressing related arguments from the psychological community—argues for the cognitive plausibility of his computational logic formalization of human reasoning and decision making.
Analysis
Kowalski has been careful, in his goal of making the book accessible to the general interested reader, to exclude from his writing almost all mathematical notation. To complement this less formal presentation in the book's body, he has included six appendices that offer a rigorous formalization that supports his earlier, more intuitive presentation of the chapters. These appendices serve as excellent tutorials for the motivated reader. Their content includes the syntax of logical forms; logic-based truth as Herbrand interpretations; forward and backward reasoning schemes (including claims for soundness and completeness); minimal models and negation; resolution refutation systems; and abductive logic programming.
Although Kowalski has kept equations out of the 17 chapters of his primary presentation, his writing isn't free from artificial intelligence and computer science technical jargon. He uses such science-specific terms as closed-world assumption, minimal models, meta-logic, default or defeasible reasoning, negation as failure, compile/decompile, encapsulation, and many more. The author could have done a great service to the general reader by avoiding these terms whenever possible; or when such terminology was necessary, by describing their meanings in a way that's easily understandable to a nonspecialist. In future editions, it would be helpful to provide a glossary of such terms to assist some readers.
What I liked best about this book is that, as his top-level goal, Kowalski has taken on the task of developing a full epistemological stance. This stance attempts to address, in the form of a computational model with related search algorithms, what it means for the human agent to perceive stimuli, reason about their meaning, and respond appropriately within the constraints of an ever-evolving world. Kowalski's enterprise is much broader than piecemeal research on issues such as goal-reduction algorithms or truth in minimal models.
The cognitive architecture Kowalski chooses for his epistemological stance—the production system—has a long history in cognitive science, with its earliest use and justification by Turing Award winners Newell and Simon.2 In more recent years, it has been extended, in the Soar architecture, to include agent learning of new rules and skills.5 The broad epistemological viewpoint developed throughout this book, with its goals of understanding the nature of human reasoning and improving its use in the challenges of normal life, reflects many of the insights Kowalski has garnered during more than 40 years as a highly successful researcher in the fields of artificial intelligence and computational logic.
In this book, Kowalski takes on an enormous task—which, of necessity, requires assessing the nature, representations, and processes enabling human reasoning. So it's only natural that a number of epistemological issues remain unresolved. For example, Kowalski's adoption of the production system architecture of the 1970s and 1980s as the brain's "software" that controls decision making2 is seen today as an example of Daniel Dennett's Cartesian theater,6 a remnant of Descartes' dualism. This theater is a hypothesized special place in the cortex where decision-theoretic algorithms sort through specific choices brought forward by sensory, emotional, memory-based, linguistic, and other components of the human agent. Modern science proposes a much more decentralized and distributed architecture for cognition, a society of mind7 with key constituent roles played by multiple distributed elements of the human system. For example, the finger moves from the hot stove more quickly than any nerve signal can travel from the fingertip to the brain's Cartesian theater and from there to a process for motor control and finger withdrawal.
A further concern is how the components of Kowalski's "thought processes" are to be reified (made concrete) in the form of logic expressions, as he proposes in Chapter 2. Although logic is both a convenient and suitably expressive representation, when it's then coupled with specific search algorithms and related assumptions (that is, a closed world), the entire system becomes impossible to establish as a necessary model for human reasoning in any scientific sense. This confirmation problem is called representational indeterminacy by the psychologist John Anderson8 and other philosophers of science.
In this serious and enjoyable book, Kowalski proposes a specific, utilitarian, and sufficient model, in the scientific sense, of human subject/world communications. And, as Aristotle suggested long ago, the sufficiency of this logic-based representational effort could offer insights that can lead to more coherent reasoning, writing, discussions, and arguments by human agents.

References

George F. Luger is a professor in the Computer Science, Psychology, and Linguistics Departments at the University of New Mexico in Albuquerque. His current research focuses on the development of probabilistic models of human skilled performance. Luger has a PhD in interdisciplinary research that included computer science, mathematics, linguistics, and psychology from the University of Pennsylvania. His book Artificial Intelligence: Structures and Strategies for Complex Problem Solving, first published in 1989, is now in its sixth edition; his book Cognitive Science: The Science of Intelligent Systems was published in 1995. Contact him at luger@cs.unm.edu.