I consider myself a bit of a language junkie, although I’m more properly termed a languages person trapped in a systems researcher’s body. In the early part of my career, I worked with a colleague at Argonne National Laboratory on compilers (and tools for creating compilers) for experimental object-oriented languages. It’s probably a sign of insanity, but for those of us who studied in the late 1980s and early 1990s, the obsession du jour was to come up with the next awesome programming language or operating system. Realizing that getting anyone to use our languages would be a daunting challenge, most of us then settled on more pedestrian pursuits. For example, I decided to refocus my energy on messaging middleware for C and C++, knowing that there was likely to be a bigger audience.
As I was wrapping up my dissertation, which included some 160 pages and tens of thousands of lines of code, the Java programming language took the world by storm with its C++-esque syntax running atop a layer of abstraction above the bare metal known as the Java Virtual Machine (JVM). That led me to rework most of my dissertation project in Java, ultimately leading to a publication with Sun Microsystems Press. It was still a ton of code, but removing the worry about unsafe pointers and having proper type checking, at both the source and object-code levels, was a big plus. Even though the ideas underpinning Java were hardly new (they had been realized in Smalltalk and UCSD Pascal years earlier), many in my circles thought the Holy Grail of computing had arrived: code would magically work on any new platform simply by moving the executable, and developers would make little or no further progress in programming languages from there. If code could run anywhere simply by porting the JVM, problem solved!
That was 1995. This is 2012.
Unfortunately, as the story now goes, Java does run everywhere, but it requires a lot of resources, so it sometimes runs everywhere poorly (both in terms of performance and taking advantage of the actual platform on which it’s running). Worse, as we now know, the diminishing focus on the desktop means that having the same UI on every platform is not a win, so you inevitably have to live with some platform-specific code. That said, Java is still enormously popular, particularly in enterprise settings, and works well when resources are not an issue. Even so, it has other implementations (some controversial) that are much more efficient on smartphones and embedded systems.
Since the Java days, however, several new and interesting programming languages have risen to prominence — many of them opting to compile to the JVM as the target, and many targeting other runtimes (.Net) or native compilation (as an heir apparent to C/C++). Each of the languages is seemingly designed to take advantage of various platform affordances or work within their constraints. And, as the history of computing has shown us repeatedly, the advent of innovative hardware often leads to a rethinking of language — especially when it comes to the entrenched options (C, C++, Fortran), which are probably not going away anytime soon but might face serious threats from the latest to crash the language party.
I can point to at least two fundamental architectural trends that made us rethink language: multicore and power-aware/embedded computing (a.k.a. the Internet of things). And there might be more to come with general-purpose computing on graphics processing unit (GPGPU) systems and other accelerators, which largely continue to be programmed (with much pain and suffering) in the C/C++ languages. (Does anyone remember data-parallel languages like C* for the Connection Machine?)
So, we now find ourselves surrounded by several interesting language options, especially when it comes to supporting parallel, distributed, embedded, and Internet/cloud computing — not to mention various domain-specific needs. Listing them all is virtually impossible, but one thing does unite them that has nothing to do with architecture: a growing need to be productive. In the end, language (just like human language) gives us the ability to express any idea we wish — and hopefully to express it concisely, clearly, and correctly. Therefore, programming languages must evolve, or we will cease to evolve as developers.
Form Follows Function
In virtually all of the domains I listed, it takes too much work and too much code to solve most problems well in traditional von Neumann-style languages (those in which side effects are embraced as features, thereby making it difficult to attain reproducible results). Worse, the code becomes incomprehensible in the presence of explicit threading, communication, and storage (I/O) calls. This points to a need to raise the level of abstraction so that developers can focus on the problems they’re trying to solve.
In working on this introduction, perhaps the biggest surprise was a confirmation of what Konrad Hinsen, Konstantin Läufer, and I wrote about for Computing in Science & Engineering almost three years ago in our two-part series on the promises of functional programming.1,2 Functional programming is reemerging as a serious alternative to the imperative model. It lacks side effects, and it requires the programmer to think about how to use (often recursive) types and functions to describe a given system and its behavior. More importantly, thinking this way gives us some hope of ascertaining a solution’s correctness (a “proof” of sorts).
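To make the idea concrete, here is a minimal sketch in Haskell (a hypothetical example of my own, not drawn from the articles referenced above): a recursive algebraic data type and a side-effect-free function over it, whose result depends only on its input and is therefore fully reproducible.

```haskell
-- A recursive algebraic data type: a binary tree of values.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- A pure, recursive function over the type. With no side effects,
-- the same input always yields the same output, which is what makes
-- reasoning about correctness tractable.
sumTree :: Num a => Tree a -> a
sumTree Leaf         = 0
sumTree (Node l x r) = sumTree l + x + sumTree r

main :: IO ()
main = print (sumTree (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)))
```

The pattern-match cases mirror the shape of the data, which is the sense in which the types and functions together “describe the system.”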
Although this style of programming isn’t for everyone, the amount of energy going into it — or even in adding it to more pragmatic language designs such as Java, Python, and C# — leads to the inevitable conclusion that functional constructs are becoming a desirable feature of any programming language. This should help to improve software reliability over the next decade, not to mention let us take advantage of innovations in computer architecture, from smartphones to exascale supercomputers.
Most of the articles in this month’s Computing Now theme come from Computing in Science & Engineering’s recent special issue on concurrency in modern programming languages. In that issue, we asked the authors to describe each language’s promise for supporting computational science applications and gave them a “homework assignment”: implement a solver for the Poisson equation, which is used extensively in computational science and engineering domains.
In “Concurrency and Message Passing in Erlang,” Steve Vinoski discusses the dynamically typed functional language Erlang. Although arguably not modern (it dates back to the 1980s), Erlang has been gaining ground recently, largely because of its lightweight concurrency model, in which userland threads communicate through message passing. This model has vast potential for efficient execution on multicore architectures.
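Erlang’s own syntax differs, but its model of cheap userland threads that share nothing and communicate only by message passing can be sketched in Haskell, the language used for the other examples here (an illustrative analogue assuming GHC’s lightweight threads and channels, not Erlang code):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)

main :: IO ()
main = do
  request <- newChan
  reply   <- newChan
  -- forkIO spawns a cheap userland thread, loosely analogous to an
  -- Erlang process; it blocks waiting for a message, then replies.
  _ <- forkIO $ do
    n <- readChan request
    writeChan reply (n * 2 :: Int)
  writeChan request 21        -- send a message to the worker
  answer <- readChan reply    -- block until the reply arrives
  print answer
```

The threads share no mutable state; all coordination happens through the two channels, which is the essence of the message-passing model Vinoski describes.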
In “Clojure for Number Crunching on Multicore Machines,” Martin Kalin and David Miller discuss Clojure, a Lisp-style programming language designed for the Java Virtual Machine. The ideas behind Clojure aren’t new, but combining the language with the JVM is. Given the JVM’s innate support for multithreading, developers can efficiently compile Clojure (a functional language) into threaded JVM code. Because all the tasks use a shared-nothing parallel approach, the synchronization overhead typically found in Java programs can be virtually eliminated, and tasks can be serialized when not enough processors are available. (This isn’t to say that Clojure can’t have shared variables, but when it does, the shared variables are accessed atomically.)
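Clojure code itself is beyond this introduction, but its notion of atomically accessed shared variables has a close analogue in software transactional memory, sketched below in Haskell (an illustration assuming GHC’s STM primitives from base’s GHC.Conc; it is not Clojure, and the thread count and loop sizes are arbitrary):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM, replicateM_)
import GHC.Conc (atomically, newTVarIO, readTVar, readTVarIO, writeTVar)

main :: IO ()
main = do
  counter <- newTVarIO (0 :: Int)
  -- Ten threads each perform 100 increments. Every increment is a
  -- transaction, so no update is ever lost, yet no explicit locks appear.
  dones <- forM [1 .. 10 :: Int] $ \_ -> do
    done <- newEmptyMVar
    _ <- forkIO $ do
      replicateM_ 100 $ atomically $
        readTVar counter >>= writeTVar counter . (+ 1)
      putMVar done ()
    return done
  mapM_ takeMVar dones          -- wait for all ten threads to finish
  readTVarIO counter >>= print
```

Despite the interleaving of threads, the final count is always 1,000, because each read-modify-write runs atomically, much as updates to Clojure’s shared references do.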
In “Deterministic Parallel Programming with Haskell,” Duncan Coutts and Andres Löh describe Haskell. In contrast to Erlang, Haskell is statically typed and offers a deterministic model for concurrency (parallelism). This is to be expected from a purely functional language, in which the results are supposed to be the same every time (otherwise, it’s considered mathematically and logically unsound).
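A minimal illustration of what determinism buys you, sketched with GHC’s `par` and `pseq` annotations (my own example for illustration; the article’s code may differ): the annotations merely hint that a pure value can be evaluated on another core, so the printed result is identical whether or not any parallelism actually occurs.

```haskell
import GHC.Conc (par, pseq)

-- A pure computation we might want to evaluate in parallel.
sumTo :: Int -> Int
sumTo n = sum [1 .. n]

main :: IO ()
main =
  let a = sumTo 1000000
      b = sumTo 2000000
  -- `par` sparks evaluation of `a` on another capability; `pseq`
  -- forces `b` first in this thread. Neither changes the answer.
  in a `par` (b `pseq` print (a + b))
```

Because `a` and `b` are pure, there is nothing to race on; scheduling can only affect how fast the answer arrives, never what it is.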
The field of programming languages, much like jazz and improvised musical traditions in general, can best be summarized as a unique combination of theoretical rigor and creative expression. The languages covered here represent, in most cases, ideas that go back a half-century or more and are grounded in solid theory but were designed for a different generation of computing machines. Modern approaches basically “jazz it up” (pardon the reference) by building on earlier ideas but adding elements that are designed to go beyond the traditional von Neumann model of computing as we know it. One thing is certain: As long as there is interesting computer architecture, interesting languages are certain to follow.
- K. Hinsen, “The Promises of Functional Programming,” Computing in Science & Eng., vol. 11, no. 4, 2009, pp. 86–90.
- K. Läufer and G.K. Thiruvathukal, “The Promises of Typed, Pure, and Lazy Functional Programming: Part II,” Computing in Science & Eng., vol. 11, no. 5, 2009, pp. 68–75, doi:10.1109/MCSE.2009.147.
George K. Thiruvathukal is a full professor of computer science at Loyola University Chicago and co-director of the Center for Textual Studies and Digital Humanities. His research and teaching interests include parallel and distributed systems, programming languages, operating systems, platform studies, and free/open source software development. He also has extensive multidisciplinary interests, including digital humanities, music, history, and bioinformatics. Thiruvathukal has a PhD from the Illinois Institute of Technology in Chicago. He is the incoming EIC for Computing in Science & Engineering and cochairs the STC on Social Networking. He recently coauthored Codename Revolution: The Nintendo Wii Platform (MIT Press, 2012) with Steven E. Jones.