The graphics cards that helped define PC gaming

(Image credit: Wikimedia - User Hyins)

(Image credit: Future)

This article first appeared in PC Gamer magazine issue 354, in February 2021. Every month we run exclusive features exploring the world of PC gaming—from behind-the-scenes previews, to incredible community stories, to fascinating interviews, and more.

It's easy to forget where we came from in PC gaming, especially when we're arguing over gigabytes of memory and teraflops of performance. But there's actually a lot that we can glean from the annals of GPU history—the colossal leaps in power that GPUs have taken in under 25 years go some way to explaining why today's top graphics card costs $1,499.

You have to walk before you can run, and there were many attempts to nail an image resolution of just 800x600 before anyone could dream up the pixel count required for the latest games at 4K. Yet you'd also be surprised by just how many features now prevalent in modern GPUs were first introduced back at the dawn of the industry. But let's start right at the beginning—when active cooling was optional and there were chips aplenty.

(Image credit: Fritzchens Fritz)

1. 3dfx Voodoo

It's March, 1996—England is knocked out of the Cricket World Cup by Sri Lanka, a young boy celebrates his fourth birthday (that's me), and 3dfx releases the first of what would be a couple of game-changing graphics cards: the Voodoo. It's a graphics card looked back on fondly by many in the PC Gamer office. Clocked at just 50MHz and fitted with a whopping 4/6MB of total RAM, the Voodoo was clearly the superior card for 3D acceleration at the time. The top-spec card could handle an 800x600 resolution, but the lower-spec model was capable of only 640x480. Despite its 2D limitations (it relied on a separate 2D card for the desktop), the Voodoo would prove a highly successful venture, and set 3dfx on a trajectory into PC gaming fame.

Note: the 3dfx Voodoo is often referred to as the Voodoo1, although that name only caught on after the release of the Voodoo2.

(Image credit: Fritzchens Fritz)

2. Nvidia Riva 128

A chipset company by the name of Nvidia would soon offer competition to 3dfx in the form of the Nvidia Riva 128, or NV3. The name stood for 'Real-time Interactive Video and Animation', and it integrated both 2D and 3D acceleration into a single chip. It was a surprisingly decent card following the Nvidia NV1, which had tried (and failed) to introduce quadratic texture mapping.

This 3D accelerator doubled the initial spec of the Voodoo1 with a 100MHz core/memory clock, and came with a half-decent 4MB of SGRAM. It was the first Nvidia card to really gain traction in the market, and if you take a look at its various layouts—memory surrounding a single central chip—you can almost make out the beginnings of a long line of GeForce cards, all of which follow suit.

But while it offered competition to 3dfx's Voodoo1, along with higher resolutions, it wasn't free of its own bugbears—and it wouldn't be alone in the market for long before 3dfx issued a response in the Voodoo2.

(Image credit: Fritzchens Fritz)

3. 3dfx Voodoo2

Now this is a 3D accelerator that requires no introduction. Known far and wide for its superb performance at the time, the Voodoo2 is famed for its lasting impact on the GPU market, great frame rates, and continued use of a multi-chip design. A smorgasbord of chips, the Voodoo2 featured a 90MHz core/memory clock and 8/12MB of RAM, and—once two cards were connected via a port on each—it could even support resolutions up to 1024x768.

Dual-wielding cards played a big role in the past decade of GPU performance. It was possible for a PC user to connect two cards together for better performance back in 1998—and it was worth doing, too. 3dfx managed to stay on top with the Voodoo2 for some time, but it wasn't long before a few poor decisions put it out of the graphics game entirely.

(Image credit: Wikimedia - User Hyins)

4. Nvidia GeForce 256

The first card bearing the GeForce name still in use today, the GeForce 256 was also billed as the 'world's first GPU'. "But what about the Voodoos and the Rivas?" I hear you ask. Clever marketing on Nvidia's part has the GeForce 256 stuck firmly in everyone's minds as the progenitor of modern graphics cards, but it was really just the name Nvidia gave its single-chip solution: a graphics processing unit, or GPU.

As you can probably tell, this sort of grandiose name, a near-parallel to the central processing unit (CPU) that had been raking in cash since the '70s, was welcomed across the industry.

That's not to say the GeForce 256 wasn't a worthy namesake, either. It integrated acceleration for transform and lighting into the newly minted GPU, alongside a 120MHz clock speed and 32MB of DDR memory (for the high-end variant). It also fully supported Direct3D 7, which would allow it to enjoy a long lifetime powering some of the best classic PC games released at that time.

(Image credit: Future)

5. Nvidia GeForce 8800 GTX

Once Nvidia rolled out the GeForce 8800 GTX, there was no looking back. A precursor to ultra-high-end, enthusiast graphics cards such as the RTX 3090, the GeForce 8800 GTX is the one to point to if you want to talk about a card that really got people's attention. Launched back in 2006 to much fanfare, the 8800 GTX was the largest GPU ever built at the time. With 128 Tesla cores inside the G80 GPU, and 768MB of GDDR3 memory, the 8800 isn't an unfamiliar sight for a modern GPU shopper. It bears the marks of many a modern GPU—even if it might be a little underpowered by today's standards. Despite a pre-launch recall threatening to scupper its launch plans, this graphics card ruled over the GPU market at launch and even stuck around for some time afterwards thanks to a unified shader model, which was introduced with the architecture alongside Direct3D 10.

(Image credit: Future)

6. ATI Radeon HD 5970

And what's AMD been doing all this time? Semiconductor company ATI was busy building heaps of console chips right the way through the '90s and early '00s, and made some excellent GPUs in its own right, such as the X1900 XTX, before being purchased by AMD in 2006. After the abortive HD 2000 and 3000 series, the HD 4870 and 4850 were quality cards, but the one that made the biggest splash after the move was the Radeon HD 5970. The HD 5970 was essentially a large Cypress GPU, a 1,024MB pool of memory, and a sizeable 256-bit memory bus... multiplied by two.

This twin-GPU tradition continued right the way up to the AMD Radeon R9 295X2 and the Nvidia Titan Z. But once multi-GPU support started dwindling, solo cards became the predominant form factor. And with multi-GPU support in the developers' court due to the introduction of DirectX 12, dual-GPU cards may never return.

Jacob Ridley

There's no 'Silicon Valley' where Jacob grew up, but part of his home country is known as 'The Valleys' and can therefore be easily confused for a happening place in the tech world. From there he graduated to professionally break things and then write about it for cash in the city of Bath, UK.