Famous Graphics Chips: ATI’s Radeon 8500

Jon Peddie
Published 06/09/2021

The Radeon 8500 AIB launched by ATI in August 2001 used a 150 nm manufacturing process for its R200 (codenamed Chaplin) GPU. The AIB worked with the DirectX 8.1 and OpenGL 1.3 APIs.

The R200 introduced several new and enhanced features, but the most noteworthy was the ATI TruForm feature. TruForm was a Semiconductor Intellectual Property (SIP) block developed by ATI (now AMD) for hardware acceleration of tessellation. The following diagram is a simple example of a tessellation pipeline rendering a sphere from a crude cubic vertex set.


Figure 1: Tessellation can reduce or expand the number of triangles (polygons) in a 3D model (Image by Romainbehar for Wikipedia)

Tessellation can be adjusted according to an object’s distance from the viewer to control level of detail. This allows objects close to the viewer (the camera) to have fine detail, while objects further away can have coarse meshes yet appear comparable in quality. It also reduces the bandwidth required for a mesh by allowing it to be refined once inside the shader units.
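Distance-based level of detail can be sketched as a simple mapping from viewer distance to a tessellation level. The function and constants below are illustrative, not ATI’s actual scheme: nearby objects get the maximum level, distant ones fall off logarithmically toward the coarsest mesh.

```python
import math

def tessellation_level(distance, near=1.0, far=100.0, max_level=6):
    """Pick a tessellation level from viewer distance: fine meshes up
    close, coarse meshes far away (hypothetical parameters)."""
    # Clamp the distance into the [near, far] range.
    d = min(max(distance, near), far)
    # Map distance logarithmically onto [0, 1]; closer objects map
    # near 0 and therefore receive higher tessellation levels.
    t = math.log(d / near) / math.log(far / near)
    return max(1, round(max_level * (1.0 - t)))
```

With these constants, an object at the near plane tessellates at level 6, one at the far plane at level 1, and one at distance 10 lands in between at level 3.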

ATI’s TruForm used N-Patches (also known as PN triangles), a new higher-order surface composed of curved rather than flat triangles. It permitted surfaces to be generated entirely within the graphics processor, without requiring significant changes to existing 3D artwork composed of flat triangles. That, ATI postulated, would make the technology easy for developers to implement and avoid breaking compatibility with older graphics processors, while providing an excellent visual experience. It was supported by DirectX 8’s N-patches, which calculate how to use triangles to create a curved surface.

ATI’s R200 GPU was an average-sized chip for the time at 120 mm². It had 60 million transistors and featured four pixel shaders, two vertex shaders, eight texture mapping units, and four ROPs. ATI commented at the time that the R200 was more complex than a Pentium III processor.

The Radeon 8500 ran at 275 MHz and had 64 MB of DDR memory on a 128-bit memory bus. It was a single-slot AIB and didn’t need an additional power connector, since it drew only 23 W. The AIB had an AGP 4x interface and offered three display outputs: DVI, VGA, and S-Video.


Figure 2: ATI Radeon 8500 (Source: TechPowerUp)

The R200 was ATI’s second-generation GPU to carry the Radeon brand. Like most AIBs of the time, the 8500 also included 2D GUI acceleration for Windows and offered video acceleration with a built-in MPEG codec.

Whereas the R100 had two rendering pipelines, the R200 featured four, which ATI branded as Pixel Tapestry II. That increased the AIB’s fill rate to 1 gigapixel/s.

ATI built the original Radeon with three texture units per pipeline so it could apply three textures to a pixel in a single clock cycle. However, game developers chose not to support that feature. So, instead of wasting transistors, ATI reduced the R200 to two texture units per pipeline. That matched the Nvidia GeForce3 and made the developers happy.

But ATI was clever and enabled Pixel Tapestry II to apply six textures in a single pass. Legendary game developer John Carmack, then working on the upcoming Doom 3, commented at the time, “The standard lighting model in DOOM, with all features enabled, but no custom shaders, takes five passes on a GF1/2 or Radeon.” He said that same lighting model would take “either two or three passes on a GF3, and should be possible in a clear + single pass on ATI’s new part.” [1]

With the original Radeon, ATI introduced the Charisma hardware transform and lighting engine. The R200’s Charisma Engine II was the company’s second-generation hardware-accelerated, fixed-function transform and lighting engine, and it benefited from the R200’s increased clock speed.

ATI redid the vertex shader in the R200 and branded it the Smartshader engine. Smartshader was a programmable vertex shader, functionally equivalent to Nvidia’s GeForce3 vertex shader, as both companies conformed to the DirectX 8.1 specification.

In late 2000, before the rollout of the Radeon 8500/R200, ATI introduced its HyperZ technology: basically, a Z-compression scheme. ATI claimed HyperZ could offer 1.5 gigatexels per second of fill-rate performance, even though the theoretical rate was 1.2 gigatexels. In testing, HyperZ did indeed provide a performance improvement.

ATI’s HyperZ technology consisted of three features working in conjunction with one another to provide an effective increase in memory bandwidth.



Figure 3: ATI’s HyperZ (Image by Shmuel Csaba Otto Traian for Wikipedia)

ATI’s HyperZ borrowed some concepts from the deferred rendering process developed by Imagination Technologies for their PowerVR tiling engine.

Quite a bit of memory bandwidth can be consumed by repeatedly accessing the Z-buffer to determine which, if any, pixels lie in front of the one being rendered. The first step in the HyperZ process was to check the Z-buffer before a pixel was sent to the rendering pipeline. That allowed unneeded pixels to be culled before the R200 rendered them.
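The culling step above amounts to a depth comparison before any shading work is done. A minimal sketch of the idea, using the common convention that a smaller depth value is closer (the names here are illustrative, not ATI’s hardware interface):

```python
def early_z_test(zbuffer, x, y, depth):
    """Reject a fragment before shading if something nearer already
    occupies its pixel (smaller depth = closer)."""
    if depth >= zbuffer[y][x]:
        return False          # occluded: skip rendering entirely
    zbuffer[y][x] = depth     # visible: record the new nearest depth
    return True

# A 2x2 Z-buffer initialized to "infinitely far away".
zb = [[1.0, 1.0], [1.0, 1.0]]
early_z_test(zb, 0, 0, 0.5)   # visible, writes 0.5
early_z_test(zb, 0, 0, 0.8)   # rejected: 0.8 is behind 0.5
```

The payoff is that a rejected fragment costs one Z-buffer read instead of a full texture-and-shade pass plus a write.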

Then the Z data was passed through a lossless compression process. That reduced the memory space needed for the Z data and conserved data-transfer bandwidth when accessing the Z-buffer.
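ATI’s actual compression algorithm was proprietary, but the lossless principle can be illustrated with a toy run-length scheme: Z-buffers tend to contain long runs of identical values (cleared regions, flat surfaces), so storing (value, count) pairs often shrinks them with no loss of precision.

```python
def rle_compress(zvalues):
    """Toy lossless run-length encoding of a row of Z values:
    collapse consecutive identical depths into [value, count] pairs
    (an illustration of the principle, not ATI's algorithm)."""
    runs = []
    for z in zvalues:
        if runs and runs[-1][0] == z:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([z, 1])   # start a new run
    return runs

def rle_decompress(runs):
    """Exact reconstruction: expand each run back to its values."""
    return [z for z, count in runs for _ in range(count)]
```

Because decompression reproduces the input exactly, depth testing still behaves identically; only the storage and transfer cost changes.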

Once the Z data had been used, a Fast Z-Clear process emptied the Z-buffer after the image had been rendered. ATI had a particularly efficient Z-buffer-clearing process at the time.

The first Radeon employed 8×8 blocks. To decrease the bandwidth needed, ATI reduced the block size to 4×4. The R200 could discard 64 pixels per clock, compared with 8 for the original Radeon. (The GeForce3 could discard 16 pixels per clock.)
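Block-based rejection works by keeping a coarse summary of the Z-buffer: one entry per tile holding the farthest depth in that tile. If even the nearest point of an incoming primitive is behind that value, every pixel in the tile can be discarded with a single test. The sketch below is illustrative, not ATI’s hardware logic:

```python
def block_max_depths(zbuffer, block=4):
    """Build a coarse hierarchical-Z buffer: one entry per 4x4 tile
    holding the farthest (max) depth in that tile (smaller = closer)."""
    h, w = len(zbuffer), len(zbuffer[0])
    return [[max(zbuffer[by + i][bx + j]
                 for i in range(block) for j in range(block))
             for bx in range(0, w, block)]
            for by in range(0, h, block)]

def reject_block(hiz, tile_y, tile_x, primitive_min_z):
    """Reject a whole tile in one comparison: if the primitive's
    nearest depth is behind the farthest depth stored for the tile,
    none of its pixels can be visible."""
    return primitive_min_z >= hiz[tile_y][tile_x]
```

One comparison standing in for 16 per-pixel tests is exactly where the bandwidth saving comes from; a tile that fails the coarse test still falls through to normal per-pixel depth testing.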

ATI also implemented an improved Z-Compression algorithm that, according to their spec sheets, gave them a 20% increase in Z-Compression performance.

Jim Blinn introduced the concept of bump mapping in 1978.[2] It creates the illusion of surface depth by perturbing the normals used in a surface’s illumination, without adding geometry. However, game developers didn’t start using bump mapping until the early 2000s; Halo was one of the first games to use it.
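In its simplest textbook form, Blinn’s technique perturbs the normal of a flat surface by the gradient of a height field, so lighting suggests bumps the geometry doesn’t have. The sketch below assumes a surface facing +Z and uses a finite-difference gradient; it is a simplification, not the R200’s implementation.

```python
import math

def bump_normal(height, x, y, eps=1e-3):
    """Blinn-style bump mapping on a flat surface facing +Z:
    tilt the normal against the height field's slope so lighting
    reacts as if the surface were actually bumpy."""
    # Finite-difference gradient of the height function at (x, y).
    dhdx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
    dhdy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
    # Unnormalized perturbed normal: (-dh/dx, -dh/dy, 1).
    nx, ny, nz = -dhdx, -dhdy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A constant height field leaves the normal pointing straight up,
# while any slope tilts it against the rising direction.
flat = bump_normal(lambda x, y: 0.0, 0.5, 0.5)
ramp = bump_normal(lambda x, y: x, 0.5, 0.5)
```

Feeding these perturbed normals into an ordinary lighting equation is what makes a flat polygon appear dented or embossed.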


Putting it all together

Prior to the adoption of bump, normal, and parallax mapping to simulate higher mesh detail, 3D shapes required large quantities of triangles. The more triangles used, the more realistic surfaces appeared.

To alleviate the burden of huge numbers of triangles, tessellation was employed. TruForm tessellated 3D surfaces using the existing triangles, adding new triangles to them to increase the detail of a polygonal model. The result was increased image quality without significantly impacting frame rates.

However, TruForm wasn’t used much by game developers because it required models to work outside of DirectX 8.1. Because of the lack of industry support for the technology, most developers ignored it. Also, by 2000 Nvidia had eclipsed ATI in AIB market share, and developers weren’t willing to invest in supporting a unique feature from the number-two supplier. By the time ATI, by then part of AMD, got to the Radeon X1000 series in 2007, TruForm was no longer a hardware feature.

With the Radeon 9500 and hardware supporting Microsoft’s Shader Model 3.0, the render-to-vertex-buffer feature could be used for tessellation applications. Tessellation in dedicated hardware returned in ATI’s Xenos GPU for the Xbox 360 and in the Radeon R600 GPUs.

Support for hardware tessellation became mandatory only in Direct3D 11 and OpenGL 4. Tessellation as defined in those APIs is supported only by the newer TeraScale 2 (VLIW5) products AMD introduced in September 2009 and by GCN-based products (available from January 2012 on). In AMD’s GCN (Graphics Core Next) architecture, the tessellation operation is part of the geometry processor.

When the Radeon 8500 came out, ATI was going through a difficult management shakeup in the software group, and the drivers the company issued were buggy. To make matters worse, the company cheated on some benchmarks and reported higher scores than reviewers could attain. ATI also had problems with its Smoothvision antialiasing.[3]

TruForm, the feature that should have propelled ATI to a leadership position, was lost to mismanagement and poor marketing. The technology leadership ATI had demonstrated through its development was wasted.

[1] Goldstein, Maarten, Carmack On NVidia & ATI, ShackNews (August 1, 2001), https://www.shacknews.com/article/15230/carmack-on-nvidia-ati

[2] Blinn, James F. Simulation of Wrinkled Surfaces, Computer Graphics, Vol. 12 (3), pp. 286-292 SIGGRAPH-ACM (August 1978)

[3] Shimpi, Anand Lal, ATI’s Radeon 8500 – New Drivers Expose Potential, AnandTech (November 14, 2001), https://www.anandtech.com/show/850

Jon Peddie is a recognized pioneer in the graphics industry, president of Jon Peddie Research, and named one of the most influential analysts in the world. He lectures at numerous conferences and universities on topics pertaining to graphics technology and emerging trends in digital media technology. Former president of the Siggraph Pioneers, he serves on advisory boards of several conferences, organizations, and companies, and contributes articles to numerous publications. In 2015, he was given the Lifetime Achievement award from the CAAD society. Peddie has published hundreds of papers to date and has authored and contributed to 11 books, his most recent being Ray Tracing: A Tool for All.