The Future is Reconfigurable
It won't be easy for students to create a new clustering algorithm for FPGA CAD tools, but some ambitious minds are certain to try. Accomplishing that task would give the winning team bragging rights for the first design competition at the International Conference on Field-Programmable Technology in December, along with the opportunity to have their algorithm published. More than that, such an accomplishment would illustrate just how far the world of reconfigurable computing has come and show what's possible in the future.
The contest, which ends November 26, calls for students to devise an algorithm that clusters lookup tables and flip-flops into dual-output basic logic elements, evaluated against a set of CAD benchmark circuits that stress-test FPGAs and their CAD tools.
Peter Jamieson, a researcher at Imperial College London who sits on the contest's organizing committee, said that the contest provides a way for students to understand the opportunities for smaller optimizations within the CAD architecture.
"Likely, this algorithm would need to move away from existing greedy approaches and look to estimate the effects of downstream CAD stages (such as placement and routing) and then use this information for a global solution," he said. "We suspect the design contest won't get that type of submission, but you never know."
Should a breakthrough occur at the conference, it would be one more step in the continued development of reconfigurable computing, a field that boomed after Xilinx ushered in the first FPGAs in the 1980s. Currently, reconfigurable computing is used primarily in product prototyping, limited-production complex devices, embedded devices, and high-performance computing, but the possibilities for its future applications are almost limitless.
The Center for High-Performance Reconfigurable Computing (CHREC), a National Science Foundation consortium backed by more than 30 companies and organizations, is among the groups leading the way as the field advances. The organization's Web site lists several reconfigurable computing applications, including image processing, cryptology, and bioinformatics.
According to Alan George, the center's director and a professor at the University of Florida, CHREC and its partner universities are working on numerous projects to make those advances. One of the organization's goals is application productivity: because reconfigurable computing is relatively new and its use is limited to experts, there is still a long way to go in developing applications that work with the hardware.
"We're sort of where conventional computing was a couple of decades ago," George said. "To do computing meant to write your own applications, and to do that you needed to know a lot about the computer. Of course, today people write applications for the computer that don't know anything about [hardware] and don't need to know anything about [hardware]. That’s sort of where we're at with reconfigurable computing, but that's natural. Any emerging technology is going to have that."
CHREC also has projects to develop devices that are reconfigurable in different ways, including polymorphous computer architectures (PCA), which can "morph" into different modes of execution to fit certain applications.
Speeding up FPGAs
Maxeler Technologies, meanwhile, sees FPGAs and parallel computing as the solution to the scaling challenges that arose when microprocessor makers turned to multiple cores to rein in power consumption and clock frequencies.
"By mapping compute-intensive algorithms directly into parallel FPGA hardware, tightly coupled to a conventional CPU through a high-speed I/O bus, complete applications can be accelerated by orders of magnitude over conventional CPU implementations," the company says on its Web site. "By exploiting massive parallelism at the bit-level, FPGAs deliver performance far in excess of CPUs at approximately a tenth of the clock frequency, offering substantial improvements in cost/performance and power/performance ratios."
The company's chair, Stanford University professor Michael Flynn, explained that much of a modern processor die is devoted to speeding up execution through caches and buffering, which has allowed FPGAs to catch up by spending more of their silicon on actual arithmetic.
"It's not intuitive that FPGAs should give you speed up," Flynn said. "They were always designed to do emulation of logic before you committed to design, so it became surprising that you could use this technology and get significant speed-up."
Maxeler's technology has proven useful for geophysical modeling in oil and gas exploration, where it's helping construct a geological image of the Earth from terabytes of data gathered by underwater sonic sensors. Oskar Mencer, Maxeler's chief executive officer and founder, gave a presentation last year with the Stanford Center for Earth and Environmental Studies (CEES) showing how an improved shot-profile migration algorithm made FPGAs 48 times faster for the oil and gas industry's seismic processing.
Looking to the future
Mencer is also the head of the Computer Architecture Research Group at Imperial College London, where he oversees projects such as "liquid circuits," which aims to automate FPGA processes to the point where users could simply plug in an FPGA card to make everything go faster.
"In general, my aim is to exploit reconfigurable technology for computing in ways that cannot be done with any other technology," Mencer said.
He acknowledged, however, that reconfigurable computing and FPGAs have a long way to go before they gain mainstream acceptance. Maxeler's seismic work, for example, requires a complete understanding of math, physics, and geology just to use the technology. "You really need a degree in geophysics and electrical engineering to get anything off the ground," he said.
However, as regular microprocessors reach their limits, reconfigurable computing increasingly comes up as a possible solution to larger computing needs.
"The value in what we have is in creating a computational array where all the cells can be active at the same time, doing a computation. They're all synchronized," Flynn said. "For this to happen across a number of applications, we're taking the data-flow graph of the program, unrolling it, and creating an array implementation of it. In order to do multiple applications you have to reconfigure elements of the array, and that's where reconfiguration is important."
CHREC's George is confident that the opportunities presented in reconfigurable computing can be realized. "In any complex system it's a challenge to find the bottlenecks and make the most of it," he said. "As people in the conventional computing world are now coming to realize, it was easy when everything was serial computing; it isn't so easy when everything is parallel computing. But it's necessary. The future is parallel computing, be it fixed or reconfigurable, there's no way around that, so the question is making the most of it."