The Many Roles and Names of the GPU - Its versatility has led it to many and varied platforms

By Dr. Jon Peddie on September 18, 2018

The original use and development of the GPU was to accelerate 3D games and rendering. The acceleration of a game’s 3D models involved geometry processing, matrix math, and sorting. Rendering involved polishing pixels and hiding some of them. Two distinctive, non-complementary tasks, but both served admirably by a high-speed parallel processor configured as a SIMD (same instruction, multiple data) architecture. The processors were used in shading applications and became known as shaders. Those GPUs were applied to graphics add-in boards (AIBs) and served their users very well. SIMD is the architectural design; GPU is the branding name, just as an x86 CPU (brand) is a CISC architecture and an Arm CPU (brand) is a RISC architecture.
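To make the SIMD idea concrete, here is a minimal sketch in CUDA (one common way to program a GPU; the kernel and variable names are illustrative, not drawn from any particular product). Every thread executes the same instruction stream, but each one operates on its own vertex, so a million vertices are scaled in lockstep.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Same instruction, multiple data: every thread runs this identical
    // kernel, but each thread scales a different (x, y, z) vertex.
    __global__ void scaleVertices(float *verts, int numVerts, float scale) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < numVerts) {
            verts[3 * i + 0] *= scale;
            verts[3 * i + 1] *= scale;
            verts[3 * i + 2] *= scale;
        }
    }

    int main() {
        const int numVerts = 1 << 20;                 // ~1 million vertices
        const size_t bytes = 3 * numVerts * sizeof(float);

        float *hostVerts = (float *)malloc(bytes);
        for (int i = 0; i < 3 * numVerts; ++i) hostVerts[i] = 1.0f;

        float *devVerts;
        cudaMalloc((void **)&devVerts, bytes);
        cudaMemcpy(devVerts, hostVerts, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover every vertex.
        int threads = 256;
        int blocks = (numVerts + threads - 1) / threads;
        scaleVertices<<<blocks, threads>>>(devVerts, numVerts, 2.0f);

        cudaMemcpy(hostVerts, devVerts, bytes, cudaMemcpyDeviceToHost);
        printf("first vertex after scaling: (%.1f, %.1f, %.1f)\n",
               hostVerts[0], hostVerts[1], hostVerts[2]);

        cudaFree(devVerts);
        free(hostVerts);
        return 0;
    }

The same pattern, one instruction stream fanned out across thousands of data elements, is what lets the architecture shade pixels and crunch matrices equally well.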

It didn’t take long for the mass-produced GPU, which enjoyed the same economy of scale as the ubiquitous x86 processor, to be recognized as a highly cost-effective processor with massive compute density. As such, it was applied as a compute accelerator, and aside from an awkward programming interface that only a coder could love, it exceeded the expectations of both users and suppliers. GPUs ultimately found their way into the top 10 of the TOP500 supercomputer list, year after year.

GPUs were also applied to image-processing workloads in high-end, ultra-high-resolution cameras, robotic cameras, and cameras in smartphones. That then led to the application of GPUs in machine learning and AI, both for training and inference.

And it didn’t stop there. GPUs were placed in servers in the datacenter, first used for bursty projects such as film rendering offered as a service by the merchant cloud providers. That led to the idea of making a remote GPU a virtual GPU, bringing the power of a big (and usually expensive) GPU to an occasional user, or to a user who just didn’t have the budget or space for a powerful local GPU.

GPUs then found their way into the x86 CPU, as well as Arm-based SoCs, in the form of shared-memory integrated GPUs.

As laptops became notebooks, thin and light, the space, power, and heat dissipation needed for a powerful GPU became problematic. Experiments were made with bringing the GPU’s high-speed interconnect, PCIe, out of the chassis over a cable, but the complexities of cabling, connectors, and line drivers proved too expensive and too cumbersome to be effective.

And then USB-C/Thunderbolt was introduced and changed the equation. Now PCIe signals could be transported across a low-cost, high-bandwidth cable and connector, making the external AIB/GPU a practical docking option for thin-and-light notebooks.

The GPU was used in so many configurations and applications that it became necessary to use a prefix to designate which type of GPU and application one was referring to, and so we got the following:

dGPU—the basic, discrete (stand-alone) processor that always had its own private high-speed (GDDR) memory. dGPUs are applied to AIBs and system boards in notebooks.

iGPU—a scaled-down version, with fewer shaders (processors) than a discrete GPU, that shares local RAM (DDR) with the CPU.

vGPU—an AIB with a powerful dGPU located remotely in the cloud or on a campus server.

eGPU—an AIB with a dGPU located in a stand-alone cabinet (typically called a breadbox) and used as an external booster and docking station for a notebook.


Schematically, the various GPUs look like the following diagram.

GPUs are in PCs in the form of dGPUs and iGPUs, and often both are present in a PC at the same time.
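As a rough illustration of that mix, and assuming a system with NVIDIA hardware and the CUDA runtime installed (the article itself is vendor-neutral), a few lines of code can enumerate the GPUs a PC exposes and report whether each is integrated, sharing system memory like an iGPU, or discrete with its own dedicated memory:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            printf("No CUDA-capable GPUs found.\n");
            return 0;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // prop.integrated is nonzero for a GPU that shares system memory
            // with the CPU (an iGPU); zero indicates a discrete GPU (dGPU)
            // with its own dedicated memory.
            printf("GPU %d: %s, %s, %.1f GB memory\n",
                   i, prop.name,
                   prop.integrated ? "integrated (iGPU)" : "discrete (dGPU)",
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }

Other GPU APIs (Vulkan, DirectX, Metal) expose similar device-type queries, so the same census can be taken on AMD, Intel, Apple, and Arm-based systems.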

GPUs are in smartphones and tablets as part of an SoC.

GPUs are in today’s game consoles, and are being integrated into automobiles for entertainment systems, customizable dashboards, and the exciting world of autonomous driving.

GPUs power supercomputers, servers, cameras, scientific instruments, airplane and ship cockpits, robots, TVs, digital cinema projectors, visualization, simulation, VR and AR systems, and various toys and home security devices.

And it all started because there was a need and demand for faster, more realistic games. But the GPU market is far from a game: it is a mission-critical market with high demands, high stakes, and extraordinary development and advancement exceeding Moore’s law by orders of magnitude.

Look around: how many GPUs do you think are in your life? Probably more than you’d imagine.
