Notes from the Expo Floor
Supercomputing and Big Names
DEC 11, 2013 09:13 AM

This year’s Supercomputing conference brought together more than 10,500 attendees from over 50 countries to learn about the inroads being made in many facets of the HPC field. Sessions and talks sprawled through the Colorado Convention Center in Denver, leaving little time to breathe the crisp, clean mountain air. And while all of that is well and good, it was, as usual, the exhibit floor that I found utterly fascinating. There was simply too much to cover in one article, so in this first piece I want to look at some of the big names on the Expo Floor.

It was hard to ignore once I saw the first one, and soon, I could see them everywhere: neon green scarves, heralding Nvidia’s tentacle-like grasp on the floor. Normally, the green hues would awaken the cynic in me, but it was clear that Nvidia was there to do one thing: talk about science. And that is something that they are starting to do very well.

As someone who cut his tech-teeth on pixel-counting and 8-bit graphics, I am fascinated by the graphics capabilities of this generation of GPUs, but as many of you know, GPUs are no longer *just* about graphics. Since the early 2000s, scientists have been harnessing the power of GPUs for a plethora of scientific applications, and with the rise of CUDA, more and more researchers are using GPUs to accelerate their research.

All of that (possibly obvious) history aside, I wasn’t struck by Nvidia’s size per se so much as by its involvement in the HPC community at large. If you take a look at the list of Nvidia partners who were also exhibiting on the floor, you’ll see just how impressive and voluminous it is. Beyond that, they returned with their GPU Technology Theater, a mini-conference in its own right, with 30-minute talks ranging from leadership computing and heterogeneous architectures to CUDA tutorials and exascale. The speakers were just as good as the topics, too (Michael Wolfe, Jack Wells, James Hack, Jack Dongarra, Thomas Sterling, and Brent Leback, just to name a few). And the cherry on top? You can go to their site and watch any and all of these talks right now. (They were also live-streamed during the conference, but that doesn’t matter much anymore, I suppose.)

While I hesitate to shower too much praise on Nvidia for marketing their product well (that’s the point of the Expo Floor, right?), the idealist in me realizes that it doesn’t really matter on some level: if using GPUs can help in the biomedical field, climate modeling, or in analytics to improve our daily lives, then it all works out in the end, right?

Continuing on that thread, I had a chance to sit down with Douglas Miles, director of The Portland Group. I was more than a little curious about how PGI’s focus might have changed since Nvidia bought the company back in July. His answer? “We are tasked with working on compiler technologies for HPC systems. Period.” Succinct and to the point. Miles stressed the importance of programming efficiency and of exploiting the full potential of HPC hardware, and as PGI has been one of the driving forces in the compiler space for the last twenty years, it’s impressive to see so much talent driving the road toward exascale.

One of the important things to note about PGI is that they are an enabler. I’ve mentioned this before, but supercomputing needs enablers, those brave souls who seek out ways to optimize the technology, helping scientists and researchers squeeze every last bit of juice from these power-hungry HPC systems. With their support of OpenACC (among other initiatives), they seem to be asking, “Do you know what you want? Okay, here’s how to do it…” And that’s a great thing for all of the applications coming from the supercomputing sector.

Utilizing clusters for research is one thing, but there has also been a recent trend toward offloading some of the heavy lifting in research to the cloud. Whether because of financial constraints or something else, cloud computing for research is now a viable option for some. Noting the trend, Microsoft Research has started the Windows Azure for Research Award Program. With a committee spanning engineering, architecture, and computing perspectives, they award researchers use of the Windows Azure platform for data computations and calculations.

So far, 35 projects have been granted awards, ranging from drug discovery and genomics to climate science and civil engineering. The program is currently accepting proposals, and they aren’t necessarily looking just for meta-research (how to use the cloud for research). Additionally, as these awards go out, there are no strings attached beyond some mutual promotion. While we’ve talked about some of the different approaches to science data management and cloud computing for research, it’s nice to see these programs reaching out to the ones doing the field work (read: the researchers themselves).

Overall, the gap between the ability to gather data and the technology to do something with it is the current chicken-and-egg situation facing many organizations. Where do you allocate your resources when dealing with large datasets (I actually made it through the article without saying “big data”)? What do you do with all of the information? If you want to manage the data yourself, you’re going to need to find some data scientists and rely on the specialists who can help you navigate the many possibilities out there. Eventually, the systems for managing and interpreting datasets will become more comprehensible (something akin to what the Dell Kitenga Analytics Suite is capable of). Quickly visualizing data (and making it digestible for people who aren’t necessarily fluent in data-speak) will be of the utmost importance across all manner of sectors moving forward.

And as much as I’d like to end with some ponderously philosophical musings about the human condition and our need to attribute meaning to things that seemingly have none, I’ll just say that it is incredibly important that we turn all of this data into data we can use. It’s good to see so many companies making inroads in that space.
