A few months back I did a two-part series called “Upheaval in the GPU Wars,” which chronicled the ups and downs of Nvidia and ATI over the past decade in their race to offer consumers the best graphics solutions. As I mentioned in the last post, I had intended to extend that series into a three-part affair addressing Intel, its history of sub-par graphics solutions, and the effect that history has had on the industry. However, that format was long-winded in getting to the point, and chronicling Intel’s history in this area would be dreadfully boring, charting failure after failure. Worse, it would never get to the heart of Intel’s problem. As such, I’m going to frame the discussion of Intel’s graphics issues in a different light, addressing their horrid performance directly.
Intel released the very first GMA (Graphics Media Accelerator) in 2004 as part of a Pentium 4 chipset. GMAs are Integrated Graphics Processors (IGPs): graphics processors located on the northbridge that use main memory instead of a private pool of their own. That shared-memory arrangement severely caps performance by itself, but Intel is not a graphics house, and the GMAs have all performed despicably even within that limit. Quite plainly, the early GMAs were meant for basic display output and DVD playback, at a time when even the most basic discrete GPUs could manage some form of 3D acceleration. Intel’s later revisions, like the GMA 3000, did very little to improve the situation. For those of you not in the know, most GPUs have their generation defined by the version of DirectX they are fully compliant with. While these IGPs were marketed as having full DX9 support, in truth they supported only a small subset of the API and lacked hardware support for the features games actually used; vertex processing, for instance, was pushed back onto the CPU. In short, Intel GMAs were a joke.
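To put that shared-memory handicap in numbers, here is a back-of-the-envelope sketch. The parts and figures (dual-channel DDR2-667 system memory for the IGP, a GeForce 8800 GTX’s GDDR3 for comparison) are my own illustrative assumptions, typical for the era rather than anything from a spec comparison in this series:

```
// A rough sketch of why sharing main memory starves an IGP.
// Figures below are illustrative assumptions for ~2006 hardware.
#include <cstdio>

// Theoretical peak bandwidth in GB/s: effective transfer rate
// (in MT/s) times bus width in bytes, divided by 1000.
double peak_gbps(double mega_transfers, int bus_bits) {
    return mega_transfers * (bus_bits / 8) / 1000.0;
}

int main() {
    // Dual-channel DDR2-667: 667 MT/s on a combined 128-bit bus,
    // and the IGP has to share every byte of it with the CPU.
    double igp = peak_gbps(667, 128);
    // GeForce 8800 GTX's dedicated GDDR3: 1800 MT/s on a 384-bit
    // bus, all of it reserved for the GPU.
    double discrete = peak_gbps(1800, 384);
    printf("IGP (shared): %.1f GB/s\n", igp);      // ~10.7 GB/s
    printf("8800 GTX:     %.1f GB/s\n", discrete); // ~86.4 GB/s
    return 0;
}
```

Even before counting Intel’s weak shader hardware, the integrated part is working with roughly an eighth of the memory bandwidth, and only whatever slice of it the CPU isn’t using.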
In the last part of the series, I talked about ATI’s efforts to develop their DX10 GPUs, through both successes and rough spots. This next part of what will now be a three-part series on the development of GPUs focuses on Nvidia, the other major designer of graphics cards for consumer computers. Nvidia’s history has arguably been more successful, though their faults have been more significant as well.
While ATI first tested their Unified Shader design on a gaming console with the Xbox 360’s Xenos, Nvidia did not use their game console contract to test the waters. Nvidia was contracted by Sony to build the GPU in the PS3 after Sony’s internal team failed to meet deadlines and performance road maps. Nvidia built the Reality Synthesizer, or “RSX,” around the G70 core, their last DX9 design. In many ways, the choice to build the RSX around a fixed-function core like G70 made sense: the RSX was more powerful than other GPUs of comparable technology, and the PS3’s 2006 release date made it far too late to begin experimenting with Unified Shaders, with Vista expected only months later. However, the PS3 was released only days after Nvidia launched their first DX10 graphics card, the GeForce 8 series (G80).

G80 was a remarkably successful graphics series. Equipped with 128 powerful stream processors, G80 was, and even three years later remains, a very potent graphics solution. It debuted with the flagship 8800 GTX, which was up to 40% more powerful than ATI’s flagship, though also more expensive. As ATI began to clean up their act with the HD 3000 series, Nvidia kept pace with the G92 core, a shrunken version of G80 with better performance and lower power usage. It debuted as the 8800 GT, known as one of the best-value graphics cards of the generation: more powerful than an HD 3870 for around the same price.

Nvidia had such a heavy lead over ATI in overall performance that they rested on their laurels for the next generation. The GeForce 9 series was simply a set of revised versions of the old G80 core, using G92 and other variants. The improved power efficiency allowed Nvidia to release their own dual-GPU solution to compete against ATI’s. The 9800 GX2 was the most powerful graphics card in existence at release, but its exorbitant price kept its market penetration rather limited. Also under the 9 series, Nvidia released the MCP79, better known as the 9400, which serves as both GPU and entire chipset on one chip and has been used extensively in laptops in place of Intel’s integrated graphics. Nvidia used to have a very vibrant chipset business, but legal trouble with Intel has forced them to stall any plans for future motherboards.
This is a two-part series on the post-DirectX 10 history of ATI and Nvidia, the two largest commercial GPU vendors in the world. A word of caution: though I have tried to make this accessible to as many people as possible, I still have to be detailed and informative, so you may find the series rather dense and technical. For that I apologize. It is meant to provide perspective on the constant clash between these two GPU vendors.
For the uninitiated, graphics cards are becoming one of the most vital parts of computers today, though they are relatively young components compared to others like the CPU, RAM, or hard drive. Beginning in the late 90s, companies started selling accelerator cards meant to drive 2D graphics through APIs like Microsoft’s DirectX. Before this, graphics tasks were bound to the CPU, even for games. To my generation, the notion of running high-end games and applications on a CPU alone seems impossible, because GPUs have been so important for all of our lifetime. In the late 90s, a dedicated card for 2D graphics, let alone one of the later 3D accelerators, was a luxury. It wasn’t until games like Quake II and Half-Life that the benefits of 3D accelerators became clear. Over the past decade, graphics cards have become the fastest-growing segment in PC hardware, with new cards arriving much more frequently than new processors. In addition, the graphics card has become the most powerful hardware in a computer, though that power often sits untapped unless a demanding game calls on it. The most powerful GPUs can exceed 2 TeraFLOPS (trillions of floating-point operations per second, a standard indicator of processing power), while the most powerful Intel processor doesn’t go beyond 100 GigaFLOPS. Suffice it to say, these are incredible pieces of silicon and PCB.
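To see where numbers like those come from, here is a rough sketch using the standard peak-throughput formula: execution units × operations per unit per clock × clock speed. The specific parts and figures (a Radeon HD 4870 X2 and a Core i7-965) are my own illustrative picks for this era, not claims from either vendor:

```
// A back-of-the-envelope peak-FLOPS sketch; parts and figures
// are illustrative assumptions, not official benchmarks.
#include <cstdio>

// Peak GFLOPS = units × single-precision FLOPs per unit per clock
// × clock speed in GHz.
double peak_gflops(int units, int flops_per_clock, double ghz) {
    return units * flops_per_clock * ghz;
}

int main() {
    // Radeon HD 4870 X2: 2 GPUs × 800 stream processors, each
    // doing a multiply-add (2 FLOPs) per clock at 750 MHz.
    printf("HD 4870 X2:  %.0f GFLOPS\n", peak_gflops(1600, 2, 0.75));
    // Core i7-965: 4 cores, each doing 8 single-precision FLOPs
    // per clock via SSE (4-wide add + 4-wide multiply) at 3.2 GHz.
    printf("Core i7-965: %.0f GFLOPS\n", peak_gflops(4, 8, 3.2));
    return 0;
}
```

That works out to roughly 2,400 GFLOPS against roughly 102 GFLOPS: a flagship GPU of this generation has over twenty times the theoretical arithmetic throughput of a flagship CPU.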
GPU cards were designed specifically to be compatible with Microsoft’s DirectX graphics API. As such, their features were dependent on the abilities and features provided by DirectX (look back to my previous post on graphics APIs to fill in the blanks on what DirectX actually is). New generations of graphics cards are therefore often described by the version of DirectX they support. (They support OpenGL as well, but that API was aimed at professional applications before being adopted by games.) This article focuses specifically on the post-DirectX 10 era, so it is important to understand what differentiates DX10 from previous versions. DX10 was initially released alongside Windows Vista in 2006. Along with upgrading many existing features to a more modern design, DX10 introduced the Unified Shader Model to graphics rendering. Starting with DX8, graphics chips were broken down into multiple hardware units called shaders, each processing a specific type of data: vertex, pixel, and, as of DX10, geometry. The Unified Shader Model threw out this old design of specialized units and re-imagined the hardware as a cluster of very small, very rudimentary processor cores, linked together in sets of hundreds, that could process any type of data passed through them. This was the Unified Shader Architecture, and it allowed for remarkable improvements in performance and efficiency. It also allowed developers to rethink GPUs as not just graphics hardware but as massive vector processors, able to chew through large, uniform sets of data at once.
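That “massive vector processor” view is exactly what GPGPU toolkits like Nvidia’s CUDA expose. As a minimal sketch (the names here are illustrative, not taken from any vendor sample), the kernel below applies one identical operation to a million array elements, with the unified shader cores each handling a slice of the work in parallel:

```
// Minimal CUDA sketch of the GPU-as-vector-processor idea.
#include <cstdio>
#include <cuda_runtime.h>

// Every thread scales one array element; the hardware runs
// thousands of these threads at once across its shader cores.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;  // one million floats
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}
```

The point is not the code itself but the shape of it: no vertices, no pixels, just a sea of identical little cores all running the same tiny program over a huge data set, which is precisely what the Unified Shader Architecture made possible.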