How Did We Ever Live without GPUs?

By Dr. Jon Peddie
Published 12/01/2017

From Graphics to Crypto Currency and Scary Autonomous Devices, the GPU is Everywhere

Programmable graphics controllers have been with us since the Hitachi HD63484 in 1984, but that device was limited and could be programmed only in basic binary code. Earlier, in 1981, Jim Clark (founder of Silicon Graphics) and Marc Hannah (Stanford) had developed the Geometry Engine, which could transform model space to screen space for viewing but had very limited programmability. The first truly programmable graphics device was the TI TMS34010, introduced in 1986. The first company to use the term GPU was 3Dlabs, when it introduced its programmable geometry processing unit in 1999. Moore’s law was kicking in, and it had become possible, and economically viable, to use a million transistors to incorporate all the registers and cache needed for a programmable device. It was also necessary to have a high-level compiler, so that a larger population of programmers could exploit the power of these machines.

[Figure: GPUs have revolutionized our lives]

In late 1999, Nvidia introduced its GPU; this time the “G” stood for graphics. Nvidia is one of the greatest marketing companies in the world, and it was immensely successful in establishing the term GPU in our vocabulary. AMD also had a programmable device, and nowhere near the marketing power of Nvidia, but nonetheless its GPUs (which it called VPUs at the time) were used in experiments at Stanford as parallel processors. In case there’s anyone left in the universe who doesn’t know it, a GPU is a collection of 32-bit processors capable of doing integer and/or floating-point calculations simultaneously and combining the results; it’s what’s known as a single-instruction, multiple-data (SIMD) machine. The SIMD construct, or architecture, was needed because graphics is such a parallel workload. The easiest way to embrace the concept is to consider an HD display: it has more than 2 million pixels, and we want to refresh most, and sometimes all, of them at least 30 times a second. You can’t do that with a single processor.
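
To make the arithmetic concrete: a 1920 × 1080 display has about 2.07 million pixels, and refreshing them 30 times a second means more than 62 million pixel updates every second. Here is a minimal sketch, written in modern CUDA notation purely for illustration (the kernel and buffer names are made up, not taken from any real product), of how a GPU spreads that work across thousands of threads, one pixel per thread:

```
#include <cuda_runtime.h>
#include <cstdio>

// Each thread writes exactly one pixel of the framebuffer.
__global__ void clearFramebuffer(unsigned int *pixels, int width, int height,
                                 unsigned int color)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        pixels[y * width + x] = color;
}

int main()
{
    const int width = 1920, height = 1080;      // ~2.07 million pixels
    unsigned int *d_pixels;
    cudaMalloc(&d_pixels, width * height * sizeof(unsigned int));

    dim3 block(16, 16);                          // 256 threads per block
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    clearFramebuffer<<<grid, block>>>(d_pixels, width, height, 0xFF000000u);
    cudaDeviceSynchronize();

    printf("Cleared %d pixels in parallel\n", width * height);
    cudaFree(d_pixels);
    return 0;
}
```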

The researchers at Stanford developed parallel-processing algorithms using the arcane and cranky OpenGL API (developed at SGI decades ago). The group left the university and formed a start-up software company to exploit the parallel-processing power of GPUs, which at the time were being designed and targeted for PC games. The company, PeakStream, lasted almost a year; it was acquired by Google in 2007 and buried deep inside the company, but the acquisition represented an affirmation by Google of the need for parallel processing. AMD supported the fledgling PeakStream before and after the acquisition, but it was distracted with other demands and moved on. Nvidia, however, saw the opportunity and leapt on it, and by 2006 everyone thought Nvidia had invented the GPU, parallel processing, and SIMD.

Nvidia, fully committed to SIMD computing in addition to gaming and professional graphics, soon realized that SIMD, or GPU-compute (also called GPGPU, a term I personally and pedantically don’t like), would never realize its potential if the only way you could program the beast was through OpenGL. So in 2005 the company made a big bet and invested in the development of a specialized parallel-processing programming language it called CUDA. The company introduced CUDA to the world in 2006, 20 years after the TMS34010 was introduced. CUDA, which stands for Compute Unified Device Architecture, is a C-like construct, and that opened up the possibility of making it accessible to millions of programmers. But Nvidia took it a step further and enabled hundreds of universities around the world to offer CUDA classes.
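
To give a feel for what “C-like” means in practice, here is a minimal sketch of the canonical SAXPY operation (y = a*x + y) written as a CUDA kernel with its host-side setup. It is illustrative only; the names and sizes are mine, not Nvidia’s:

```
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// SAXPY: y = a*x + y, one array element per thread.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                       // a million elements
    const size_t bytes = n * sizeof(float);

    float *h_x = (float *)malloc(bytes), *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, d_x, d_y);
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expect 5.0)\n", h_y[0]);
    cudaFree(d_x); cudaFree(d_y); free(h_x); free(h_y);
    return 0;
}
```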

The platform was now in place: a SIMD language, and a SIMD processor that was being sold by AMD and Nvidia in the tens of millions every month and was therefore very affordable. But any revolution takes time to be understood and adopted, and so it was with parallel processing. Hundreds of software accelerators were being developed in industry, in CAD, finite-element analysis, and other scientific, medical, industrial, and military applications.

Seeking a non-proprietary solution, Apple developed OpenCL and then bequeathed it to the Khronos Group, which also manages, develops, and supports OpenGL. OpenCL is a framework for writing parallel-processing programs that execute across heterogeneous platforms consisting of central processing units (CPUs) and graphics processing units (GPUs).

And then, in the mid-2000s, Geoffrey Hinton developed an incredibly efficient deep-learning algorithm. His work was amplified by Andrew Ng and his team at Google Research with the cat-finder project. Who knew there was “big data” for free on the web consisting of hundreds of thousands of images of cats? Deep learning enabled a quick sorting and learning process that, when guided by a teacher (this is a cat, this is not a cat), could enable a computer to sort and select cat pictures.

The example, although cute and silly, became a sort of benchmark as universities vied for the fastest cat search, and it ignited the imaginations of thousands who quickly saw the artificial-intelligence applications.

The Big Leap
In the meantime, GPUs kept getting bigger, faster, and in general more powerful, while still remaining affordable. When researchers began applying deep-learning techniques to AI using GPUs, magic happened, honest-to-goodness magic. It suddenly became possible to sort through the ever-increasing reams of data being generated by ATMs, IoT devices, medical and social-security records, credit-card transactions, and, scariest of all, social media activities.
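
One reason the combination works so well is that the heavy lifting in deep learning is dense linear algebra, which maps naturally onto thousands of GPU threads. Below is a minimal, naive matrix-multiply sketch showing the shape of that work; real frameworks rely on tuned libraries such as cuBLAS and cuDNN rather than anything this simple, and the sizes and names here are illustrative assumptions:

```
#include <cuda_runtime.h>
#include <cstdio>

// Naive matrix multiply: one thread computes one output element of C = A * B.
__global__ void matmul(const float *A, const float *B, float *C, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];   // one dot product per thread
        C[row * n + col] = sum;
    }
}

int main()
{
    const int n = 512;                        // one small "layer" of weights
    const size_t bytes = n * n * sizeof(float);
    float *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, bytes); cudaMalloc(&d_B, bytes); cudaMalloc(&d_C, bytes);
    cudaMemset(d_A, 0, bytes); cudaMemset(d_B, 0, bytes);   // placeholder data

    dim3 block(16, 16);
    dim3 grid((n + 15) / 16, (n + 15) / 16);
    matmul<<<grid, block>>>(d_A, d_B, d_C, n);   // 262,144 dot products at once
    cudaDeviceSynchronize();

    printf("Multiplied two %dx%d matrices on the GPU\n", n, n);
    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    return 0;
}
```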

AI is going to make autonomous cars, drones, submarines, and trucks possible, and not 20 or 50 years from now, but now, today. It is also going to make autonomous military weapons possible, which scares the hell out of famous people like Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts. They have called for some type of regulatory system. Good luck with that.

Apart from the scary AI stuff, yet more earth-shaking applications have arisen for GPUs: cryptocurrency transaction verification, known as blockchain mining. Twenty-four hours a day, every day, tens of thousands of high-powered GPUs are verifying and recording transactions between buyers and sellers who use cryptocurrencies like Ethereum, Bitcoin, Litecoin, and a dozen others.

Cryptocurrency mining is a computationally intense process that contributes to the operation of the cryptocurrency network while generating new currency. However, it takes a massive amount of computing resources, and consequently electrical power, to generate meaningful income. Nonetheless, thousands of people and organizations are involved in the work, and it has accounted for what I estimate is $1.05 billion in sales of graphics add-in boards (AIBs) to date.
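
To see why proof-of-work mining suits GPUs, consider that millions of candidate nonces can be hashed independently and checked against a difficulty target in parallel. The sketch below uses a toy mixing function as a stand-in for the real cryptographic hash (actual miners implement algorithms such as SHA-256 or Ethash), and the block-header value and target are invented for illustration:

```
#include <cuda_runtime.h>
#include <cstdio>

// Toy mixing function; NOT a real cryptographic hash.
__device__ unsigned long long mix(unsigned long long header, unsigned int nonce)
{
    unsigned long long h = header ^ (0x9E3779B97F4A7C15ULL * (nonce + 1));
    h ^= h >> 33; h *= 0xFF51AFD7ED558CCDULL;
    h ^= h >> 33; h *= 0xC4CEB9FE1A85EC53ULL;
    h ^= h >> 33;
    return h;
}

// Every thread tries one nonce; a "hash below target" counts as a valid block.
__global__ void searchNonces(unsigned long long header,
                             unsigned long long target,
                             unsigned int *winner)
{
    unsigned int nonce = blockIdx.x * blockDim.x + threadIdx.x;
    if (mix(header, nonce) < target)
        atomicMin(winner, nonce);          // remember the smallest winning nonce
}

int main()
{
    unsigned int *d_winner, h_winner = 0xFFFFFFFFu;
    cudaMalloc(&d_winner, sizeof(unsigned int));
    cudaMemcpy(d_winner, &h_winner, sizeof(unsigned int), cudaMemcpyHostToDevice);

    // Roughly 16 million candidate nonces evaluated in parallel.
    searchNonces<<<65536, 256>>>(0x123456789ABCDEFULL, 1ULL << 40, d_winner);
    cudaMemcpy(&h_winner, d_winner, sizeof(unsigned int), cudaMemcpyDeviceToHost);

    if (h_winner != 0xFFFFFFFFu)
        printf("Found winning nonce: %u\n", h_winner);
    else
        printf("No nonce met the target in this batch\n");
    cudaFree(d_winner);
    return 0;
}
```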

[Figure: The big jump in GPU shipments was driven by cryptocurrency]

In addition, there is great promise for blockchain computing as a disruptive technology in the Internet economy. The idea of a shared, distributed ledger that records (and preserves) the history of transactions could streamline legal disputes, financial exchanges, even barter networks or credit exchanges. The work going on around this capability is going to further drive demand for distributed processing even if cryptocurrency eventually blows up, or at least settles down from these go-go years.
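
The core ledger idea can be shown in a few lines of host-side code: each block stores the hash of the block before it, so tampering with any past transaction breaks every link that follows. This is a toy sketch with made-up transactions, using std::hash as a stand-in for a real cryptographic hash; it is not a description of any actual blockchain implementation:

```
#include <cstdio>
#include <string>
#include <vector>
#include <functional>
#include <initializer_list>

struct Block {
    std::string transactions;   // the recorded history for this block
    size_t      prevHash;       // link to the previous block
    size_t      hash;           // hash of (transactions + prevHash)
};

static size_t hashBlock(const std::string &tx, size_t prevHash)
{
    return std::hash<std::string>{}(tx + std::to_string(prevHash));
}

int main()
{
    std::vector<Block> ledger;
    size_t prev = 0;
    for (const char *tx : {"Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dan 1"}) {
        Block b{tx, prev, hashBlock(tx, prev)};
        ledger.push_back(b);
        prev = b.hash;
    }

    // Verify the chain: altering an earlier block would break the links that follow.
    bool valid = true;
    for (size_t i = 1; i < ledger.size(); ++i)
        valid = valid && (ledger[i].prevHash == ledger[i - 1].hash);

    printf("Ledger of %zu blocks is %s\n", ledger.size(),
           valid ? "consistent" : "tampered");
    return 0;
}
```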

Although today the largest demand for and use of GPUs is still gaming, all these other activities and interests are taking a bigger share of GPU use every day. From what were seemingly esoteric applications in professional graphics (such as special effects in the movies) and in PC and console gaming, the GPU has gone on to become the heart of supercomputers, AI machines, autonomous vehicles, cryptocurrency processors, and big-data, deep-learning applications in almost every industry and activity imaginable. The GPU has become ubiquitous. How did we ever live without it?