By Michael Martinez
The 51st Annual IEEE/ACM International Symposium on Microarchitecture (Micro-51) saw blockbuster attendance this year, posting its largest turnout ever: 706 participants gathered in Fukuoka City, Japan, in October.
That figure compares with the 570 attendees who participated in last year’s milestone Micro 50th Anniversary conference, held in Boston.
In a breakdown of participants, Micro-51 attracted 619 people to the main conference: 366 regular members and 253 student members.
Meanwhile, 87 attendees came for the workshops/tutorials only.
The conference, the world’s premier technical forum for innovative microarchitecture ideas and techniques for advanced computing and communication systems, drew academics and industry professionals from 21 countries and regions to the conference site, the Grand Hyatt Fukuoka.
Not surprisingly, Japan was the No. 1 contributor to the main conference, and the United States was No. 2. Here’s a breakdown of main-conference attendees by country and region:
Japan 221
United States 177
South Korea 116
China 44
United Kingdom 13
Canada 9
Sweden 5
Spain 5
India 4
Taiwan 4
Hong Kong 3
Germany 3
Israel 3
Greece 3
Cyprus 2
Norway 2
Singapore 1
Switzerland 1
Poland 1
Belgium 1
New Zealand 1
Total 619
“This symposium brings together researchers in fields related to microarchitecture, compilers, chips, and systems for technical exchange on traditional microarchitecture topics and emerging research areas,” organizers said on their website.
During the conference, Ravi Nair, a researcher at the IBM Thomas J. Watson Research Center, formally received the 2018 B. Ramakrishna Rau Award.
Nair was recognized “for contributions to branch prediction in processors, microarchitecture techniques in heterogeneous processing, microarchitecture support for virtual machines, and near-memory processing.” The Rau Award honors significant contributions in the field of computer microarchitecture and compiler code generation.
Satoshi Matsuoka, director of Riken-CCS and a professor at the Tokyo Institute of Technology, gave the first keynote speech, titled “From Post-K onto Post-Moore is from FLOPS onto BYTES, and from Homogeneity to Heterogeneity.” Riken-CCS is Japan’s top-tier high-performance computing (HPC) center; it currently hosts the K Computer and is developing the next-generation Arm-based Post-K machine, alongside a broad range of ongoing cutting-edge HPC research.
“The Japanese Flagship ‘Post-K’ next generation exascale supercomputer we are developing with Fujitsu, slated to start functioning as a whole system in 2020, will also be a pivotal machine as we proceed towards the Post-Moore era. The heart of the machine will be the more than 100,000 A64fx processors, each one being a 48 (+4) core Arm v8 CPU with the world’s first SVE (Scalable Vector Extension) implementation,” Matsuoka said in a summary of his speech. “Although being general purpose to boot any breed of Linux as well as Windows, it will be a game-changing CPU with extreme performance characteristics in memory bandwidth of 840 GB/s, on-chip network of over 40GB/s, while being very low power, at 15GFlops/W besting the top GPUs.”
The second keynote was delivered by Ruby B. Lee, the Forrest G. Hamrick Professor in Engineering and a professor of electrical engineering at Princeton University.
Lee spoke about “Security Aware Microarchitecture Design.” At her Princeton Architecture Lab for Multimedia and Security (PALMS), Lee’s research includes designing security-aware architectures for processors, smartphones, and cloud computing. Before joining Princeton, Lee was chief architect for computer systems at Hewlett-Packard.
“Recent covert and side channel attacks, like Meltdown and Spectre, have alerted the world to the security dangers of attacks on hardware architecture features. Even though microarchitecture performance features, like speculation and branch prediction, may be implemented correctly according to current computer architecture specifications, they can be exploited to leak security-critical or privacy-sensitive information to unauthorized entities. How then, can we design microarchitecture that improves performance without degrading security? Can we design architecture that improves security and performance at the same time?” she wrote in a summary of her remarks.
“The good news is that this ‘era of security’ provides both critical challenges and exciting opportunities for new design principles and innovative strategies for joint performance-security optimization, security-functional-timing verification and new tools,” she said.
Mike Davies, director of Intel’s Neuromorphic Computing Lab, addressed “Neuromorphic Principles for Efficient Self-Learning Microarchitecture” in the keynote on the third and final day of programming. The conference was held October 20-24.
“Earlier this year, Intel published its Loihi neuromorphic research chip in IEEE Micro (magazine). This novel processor implements a microcode-programmable learning architecture supporting a wide range of neuroplasticity mechanisms under study at the forefront of computational neuroscience. By applying many of the fundamental principles of neural computation found in nature, Loihi promises to provide highly efficient and scalable learning performance for supervised, unsupervised, reinforcement-based, and one-shot paradigms. This talk describes these principles as applied to the Loihi architecture and shares our preliminary results towards the vision of low power, real-time on-chip learning,” Davies said in a summary of his remarks.
About Michael Martinez
Michael Martinez, the editor of the Computer Society’s Computer.Org website and its social media, has covered technology as well as global events while on the staff at CNN, Tribune Co. (based at the Los Angeles Times), and the Washington Post. He welcomes email feedback, and you can also follow him on LinkedIn.