Nvidia GeForce 256: The Invention of the GPU
It’s late 1999, and the 3D graphics market is crowded. We’ve seen accelerators rise and fall, from 3dfx’s Voodoo line to Nvidia’s own RIVA TNT2. But Nvidia has just released something it’s calling a GPU (Graphics Processing Unit).
At first, it sounds like marketing fluff. But when you look at the architecture, you realize it’s a major breakthrough.
Hardware T&L
Until now, the "Transform and Lighting" (T&L) stage of the 3D pipeline was handled by the system CPU: transforming each vertex from model space into screen space, and computing per-vertex lighting from the scene’s light sources. The more complex your 3D models, the more you bogged down your main processor. The GeForce 256 moves T&L directly onto the graphics chip.
This is a massive deal. It frees up the CPU to handle game logic, AI, and physics, while the GPU handles the heavy lifting of calculating where vertices should be and how they should be lit.
// In the pre-T&L days, we did this on the CPU
for (int i = 0; i < vertexCount; i++) {
    TransformVertex(&vertices[i], &viewProjectionMatrix);
    CalculateLighting(&vertices[i], &lightSource);
}
// Then we sent the result to the graphics card
With the GeForce 256, we just send the raw geometry and the lighting parameters, and the chip does the rest at speeds no CPU can match.
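What "send the raw geometry and the lighting parameters" looks like in practice, using OpenGL’s fixed-function path as one example (Direct3D 7 exposes the same idea). This is only a sketch and assumes a GL context is already current; the point is that the application describes the data and the light, and the driver hands the per-vertex math to the chip:

```c
#include <GL/gl.h>

/* Hand the driver untransformed geometry plus light parameters;
   on a GeForce 256, hardware T&L does the per-vertex work. */
void DrawMesh(const float *positions, const float *normals, int vertexCount)
{
    /* Describe the light once -- the chip evaluates it per vertex. */
    GLfloat lightPos[4] = { 0.0f, 10.0f, 0.0f, 1.0f };
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

    /* Point GL at the raw, object-space vertex data... */
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);
    glNormalPointer(GL_FLOAT, 0, normals);

    /* ...and let the hardware transform and light every vertex. */
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

Notice there is no per-vertex loop anywhere in application code anymore; the same calls run unchanged on older cards, where the driver quietly falls back to doing T&L on the CPU.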
The Specs
With its 256-bit rendering engine, four pixel pipelines, and support for DirectX 7 (the first DirectX release to expose hardware T&L), the performance jump is staggering. In games like Quake III Arena or Unreal Tournament, the smoothness is on another level.
Looking Ahead
The "GPU" moniker is more than just a name. It signals that the graphics card is becoming a co-processor in its own right. As we move into the next millennium, I expect to see these chips become even more programmable. We’re moving from fixed-function pipelines to something much more flexible.
If you’re a developer, it’s time to start thinking about how to push more polygons, because the geometry bottleneck just moved from the CPU to the graphics card... for now.