I’ve talked quite a bit about my annoyance with the meager performance of Intel’s graphics solutions, and I’ve also mentioned Apple’s dilemma in wanting to maintain superior graphics technology in their notebooks. Long story short, Apple wants strong graphics that Intel can’t deliver. This issue has stagnated the lower end of Apple’s notebook line, as they are unable to adopt Westmere-based Core i chips without sacrificing significant graphics performance. However, yesterday’s news that Apple may stick with Intel’s integrated graphics on the upcoming Sandy Bridge parts, together with news days earlier that Intel and Nvidia may have reached a settlement in their legal battle, opens the door to interesting speculation on the future of Apple’s notebook line (and indeed the rest of the PC industry’s). First: history lesson!
Way back in 2006, Apple used Intel’s GMA integrated graphics on the MacBook line, a decision driven by a combination of cost and market-power issues. Building MacBooks straight from the reference design of Intel’s Centrino platform probably allowed Apple to buy the components – CPU and chipset – at a discount. Moreover, before the Intel transition, Apple’s unique hardware gave the company little influence on the PC market. Apple was not in a position to negotiate with Nvidia or ATI for Mac versions of high-end parts – the sales numbers couldn’t justify the effort. Case in point: while the original MacBook used the piss-poor GMA 950, that was still an improvement over the 2005 iBook’s Mobility Radeon 9550. With the adoption of Intel’s platform, standard PC hardware became much easier to bring to the Mac. The MacBook Pros, with their larger 15- and 17-inch frames, paired Intel’s Centrino platform with modern discrete GPUs, while the smaller MacBooks kept the budget Intel integrated solution. Underwhelming as it was, this kept Apple’s notebooks relatively in line with the rest of the PC industry, which also used Intel’s graphics for similar cost advantages.
In the last part of the series, I talked about ATI’s efforts to develop their DX10 GPUs, through both successes and rough spots. This next part of what will now be a three-part series on the development of GPUs focuses on Nvidia, the other major designer of graphics hardware in consumer computers. Nvidia’s history has arguably been more successful, though their stumbles have been more significant as well.
While ATI first tested their Unified Shader design on a gaming console with the Xbox 360’s Xenos, Nvidia did not use their game console solution to test the waters. Nvidia was contracted by Sony to build the GPU in the PS3 after Sony’s internal team was unable to meet deadlines and performance roadmaps. Nvidia built the Reality Synthesizer, or “RSX,” around the G70 core, their last DX9 solution. In many ways, the choice to build the RSX around a core with dedicated vertex and pixel shader units like G70 made sense – the RSX was more powerful than other GPUs of comparable technology, and the PS3’s 2006 release date made it far too late to begin experimenting with Unified Shaders, with Vista expected only months later.

However, the PS3 was released only days after Nvidia released their first DX10 graphics card – the GeForce 8 series (G80). G80 was a remarkably successful graphics series. Equipped with 128 powerful stream processors, G80 was, and even three years later remains, a very potent graphics solution. G80 launched with the flagship 8800 GTX, which was up to 40% more powerful than ATI’s flagship, though also more expensive. As ATI began to clean up their act with the HD 3000 series, Nvidia was able to keep pace with the G92 core – a die-shrunk version of the G80 with better performance and lower power usage. This debuted as the 8800 GT, known as one of the best-value graphics cards of the generation – more powerful than an HD 3870 for around the same price.

Nvidia had such a heavy lead over ATI in overall performance that they rested on their laurels for the next generation. The GeForce 9 series was simply a set of revised versions of the old G80 design, using G92 and other variants. The improved power efficiency allowed Nvidia to release their own dual-GPU solution to compete against ATI’s. The 9800 GX2 was the most powerful graphics card in existence at the time of its release, but carried such an exorbitant price that its market penetration was rather limited. Also under the 9 series, Nvidia released the MCP79, better known as the 9400, which serves as both a GPU and an entire chipset on one chip and has been used extensively in laptops in place of Intel’s integrated graphics. Nvidia used to have a very vibrant chipset business, but legal trouble with Intel has forced them to stall any plans for future chipsets.
This is a two-part series on the post-DirectX 10 history of ATI and Nvidia, the two largest commercial GPU vendors in the world. A word of caution: though I have tried to make this as accessible as possible, I must still go into detail to be informative. As such, you may find this series rather dense and technical; for that I apologize. It is meant to provide perspective on the constant clash between these two GPU vendors for anyone who wants to become more intimate with it.
For the uninitiated, graphics cards are becoming one of the most vital parts of computers today, though they are relatively young components compared to others like the CPU, RAM, or hard drive. Beginning in the late 90s, companies started selling accelerator cards meant to drive 2D graphics through APIs like Microsoft’s DirectX. Before this, graphics tasks were bound to the CPU, even for games. To my generation, the notion of running high-end games and applications with only a CPU seems impossible, because GPUs have been so important throughout our lifetime. In the late 90s, having a card to accelerate 2D graphics, let alone one of the later 3D accelerators, was a luxury. It wasn’t until games like Quake II and Half-Life that the benefits of 3D accelerators became clear. Over the past decade, graphics cards have become the fastest-growing segment in PC hardware, with new cards released much more frequently than new processors. In addition, the graphics card has become the most powerful piece of hardware in a computer, though that power often goes untapped unless a demanding game calls on it. The most powerful GPUs can exceed 2 TeraFLOPS of processing power (floating-point operations per second – a standard indicator of processing power), while the most powerful Intel processor doesn’t go beyond 100 GigaFLOPS. Suffice it to say, these are incredible pieces of silicon and PCB.
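To put those FLOPS figures in context, here is a minimal sketch of how theoretical peak throughput is usually estimated – the core counts, clock speeds, and operations-per-clock values below are illustrative assumptions, not the specs of any particular chip:

```python
# Back-of-the-envelope estimate of theoretical peak compute throughput.
# All figures below are illustrative assumptions, not real product specs.

def peak_gflops(cores: int, clock_ghz: float, flops_per_core_per_clock: int) -> float:
    """Peak GigaFLOPS = cores * clock (GHz) * floating-point ops issued per core per cycle."""
    return cores * clock_ghz * flops_per_core_per_clock

# A hypothetical high-end GPU: hundreds of simple shader cores at a modest clock.
gpu = peak_gflops(cores=800, clock_ghz=0.85, flops_per_core_per_clock=2)  # ~1360 GFLOPS

# A hypothetical quad-core CPU: a few complex cores at a high clock.
cpu = peak_gflops(cores=4, clock_ghz=3.0, flops_per_core_per_clock=8)     # ~96 GFLOPS

print(f"GPU ~{gpu:.0f} GFLOPS vs CPU ~{cpu:.0f} GFLOPS")
```

The GPU wins not by running each core faster, but by throwing far more (much simpler) cores at the problem.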
Graphics cards were designed specifically to be compatible with Microsoft’s DirectX graphics API. As such, their features were dependent on the capabilities exposed by DirectX (look back to my previous post on graphics APIs to fill in the blanks on what DirectX actually is). New generations of graphics cards are therefore often described by the version of DirectX they support; they also support OpenGL, though that API was aimed at professional applications before being adopted by games. This article focuses specifically on the post-DirectX 10 era, so it is important to understand what differentiates DX10 from previous versions. DX10 was initially released alongside Windows Vista in 2006. Along with upgrading many existing features to a more modern design, DX10 introduced the Unified Shader Model to graphics rendering. Starting with DX8, graphics chips were broken down into multiple hardware elements called shaders, each of which processed a specific type of data – vertex, pixel and, as of DX10, geometry. The Unified Shader Model threw out this old design of specialized units and re-imagined the hardware as a cluster of very small, very rudimentary processor cores, linked together in sets of hundreds, that could process any type of data passed through them. This was the Unified Shader Architecture, and it allowed for remarkable improvements in performance and efficiency. It also allowed developers to rethink GPUs not just as graphics hardware but as massive vector processors, able to process large sets of uniform data at once.
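To make that “massive vector processor” idea concrete, here is a minimal sketch in Python – using NumPy on the CPU purely as a stand-in, since the “one operation applied across a huge array” pattern is exactly what a unified-shader GPU spreads across its hundreds of stream processors:

```python
import numpy as np

# One million data elements -- think pixel brightness values or vertex coordinates.
data = np.random.rand(1_000_000).astype(np.float32)

# Scalar, CPU-style thinking: visit each element one at a time.
scaled_loop = np.empty_like(data)
for i in range(data.size):
    scaled_loop[i] = data[i] * 0.5 + 0.1

# Vector, GPU-style thinking: express the same operation over the whole set at once.
# A unified-shader GPU would farm this single expression out across hundreds of
# identical stream processors, each handling a slice of the array in parallel.
scaled_vec = data * 0.5 + 0.1

assert np.allclose(scaled_loop, scaled_vec)
```

The point is the shift in thinking: instead of describing the work element by element, you describe one operation over the whole data set and let the hardware run it in parallel.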