NVIDIA GeForce GTX 690 Launch Review

Author: Chris Ledenican
Editor: Howard Ha
Publish Date: Thursday, May 3rd, 2012
Originally Published on Neoseeker (http://www.neoseeker.com)
Article Link: http://www.neoseeker.com/Articles/Hardware/s/Nvidia_GTX_690/
Copyright Neo Era Media, Inc. - please do not redistribute or use for commercial purposes.

The GeForce GTX 680 was the fastest and most power-efficient single-GPU graphics card on the market, but even while it rips through games at 1080p, some gamers are still looking for an even smoother experience at 5760x1080 or in 3D Surround. These are exactly the gamers NVIDIA has designed the new GTX 690 for, as it is the biggest, baddest graphics card ever released in the consumer market.

The GTX 690 is built on a foundation of the GK-104 GPU, which is able to perform roughly 10% better than AMD’s current flagship Tahiti XT GPU while being up to 30% more efficient. It is this power efficiency that allowed NVIDIA to build the GTX 690 without scaling down the architecture, or dramatically lowering the GPU Boost clock. By default the GTX 690 has a Boost clock speed of 1019MHz, which is just a hair under that of the GTX 680. Additionally, NVIDIA uses binned GK-104 graphics processors that each leverage the full potential of the core. However, since there are two cores on the GTX 690, everything is doubled, arming this card with a total of 3072 CUDA cores.

As the GTX 690 has the full GK-104 specifications and only slightly scaled-back clock speeds, it could very well run at nearly twice the performance of the GTX 680. On top of this, the GTX 690 has some advantages over its single-GPU counterpart: it consumes less power than two GTX 680s in SLI, runs quieter, and requires only a single PCI Express slot.


Beyond just the performance, NVIDIA has also gone all-out with the design of the graphics card. The exterior of the card features a large, trivalent chromium-plated aluminum frame that provides best-in-class strength and durability. The card also features laser-etched LED lighting on the side, fan housing made from injection molded magnesium alloy and scratch resistant polycarbonate windows above the heatsinks. It is also the first graphics card from NVIDIA to have lettering laser-etched into the surface, ensuring precise design and the finest craftsmanship possible.

With everything the GTX 690 has going for it, the card isn’t going to be cheap and it effectively costs as much as two GTX 680s. Still, since it is built on a single PCB, not to mention more power efficient and quieter than dual GTX 680s, it should be worth the cost for anyone with a whopping grand to spend on a graphics card.

Like Fermi, Kepler GPUs are composed of different configurations of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. The GeForce GTX 690 has dual GPUs that combined give the card 8 GPCs, 16 next-generation Streaming Multiprocessors (SMX), and 8 memory controllers.

Starting at the top of the GK-104 block diagram, Kepler has a single GigaThread Engine which fetches the specified data from system memory and copies it to the frame buffer. The engine then creates and dispatches threads to the GPCs, where they are delivered to the execution units. Following the GigaThread Engine are four Graphics Processing Clusters (GPCs), which is where the majority of operations are performed, as each GPC has a dedicated raster engine along with resources for shading, texturing and computation.

The memory sub-system of the Kepler architecture has also been redesigned to support higher clock speeds. This overhaul of the memory interface allowed NVIDIA to push the memory's effective operating frequency up to 6008MHz (1502MHz actual). The memory operates on a dual 256-bit wide GDDR5 interface, which equates to a total bandwidth rating of 384GB/s (192.2GB/s per core). Additionally, the dual GK-104 GPUs have eight memory controllers (four per core), along with 1024KB of L2 cache (512KB per core), and since each GPC has its own raster unit there are a total of 64 raster operation units.
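The quoted bandwidth figures follow directly from the bus width and the GDDR5 data rate; a quick sanity check using the review's own numbers:

```python
# Sanity check of the memory bandwidth figures quoted above.
effective_clock_mhz = 6008        # GDDR5 effective data rate (1502MHz actual)
bus_width_bits = 256              # memory interface per GK-104 GPU

# bytes transferred per second on one GPU's interface
per_core_gbs = effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9
total_gbs = per_core_gbs * 2      # two GPUs on the card

print(round(per_core_gbs, 1), round(total_gbs, 1))  # 192.3 384.5
```

That works out to roughly 192GB/s per core and 384GB/s total, matching the rated figures.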

Inside each GPC are two SMX units, which have been optimized to offer the best performance-per-watt by running the shaders at the same frequency as the GPU clock rather than double it. This approach gives Kepler twice the performance-per-watt of the Fermi architecture while allowing more CUDA cores to be packed into a single SMX unit. Inside each SMX are 192 CUDA cores, which equates to a total of 3072 CUDA cores, triple the amount in the GTX 590. Of course, since the CUDA core clock is equal to the GPU clock, the performance per CUDA core is reduced from the previous generation, but the 1:1 clock design allows Kepler to achieve the same throughput while staying within a lower power envelope.
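The shader totals above are easy to verify from the per-unit figures:

```python
# Kepler shader-count arithmetic from the figures in the review:
# 2 GPUs x 4 GPCs per GPU x 2 SMX per GPC x 192 CUDA cores per SMX.
gpus = 2
gpcs_per_gpu = 4
smx_per_gpc = 2
cores_per_smx = 192

smx_total = gpus * gpcs_per_gpu * smx_per_gpc   # 16 SMX units card-wide
cuda_cores = smx_total * cores_per_smx          # 3072 CUDA cores

print(smx_total, cuda_cores)  # 16 3072
```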

Looking at the functions of the execution units, the CUDA cores are designed to perform the pixel, vertex and geometry shading, as well as the physics compute calculations. The texture units perform texture fetching and filtering, while the load/store units fetch and save data to memory. Meanwhile, Special Function Units (SFUs) handle transcendental and graphics interpolation instructions. Finally, the PolyMorph Engine handles vertex fetch, tessellation, viewport transform, attribute setup, and stream output.

The new Boost Clock feature is one of the biggest changes to the Kepler family. In essence, the Boost Clock works along the same lines as Intel's Turbo Boost, which dynamically adjusts the clock speeds in real-time, thus increasing the performance. However, Boost Clock is different in the sense that the maximum Boost Clock frequencies are not necessarily where the GPU clock will cap during gaming. Instead, Boost Clock works at both a hardware and software level to dynamically boost the GPU clock speed and under most circumstances, will increase the GPU clock speed well above the actual Boost Rating. Of course not all silicon is the same, so each Kepler board will have its own unique Boost Clock speed.

The typical board power defined for the GTX 690 is 300W, which means GPU Boost will raise the clock speeds to fit into this power envelope under load. Additionally, GPU Boost operates completely autonomously, so there are no game profiles and no intervention required by the end user, providing an instant performance boost to gamers. The technology also works on a microsecond level, constantly checking GPU voltage and operating conditions to see if the clocks can go higher or need to be throttled down to the base 3D clock. Since the GTX 690 has two GPUs, each utilizes Boost independently, meaning each core can run at a different frequency during gaming.
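NVIDIA does not document the exact algorithm, but the behavior described above can be sketched as a simple power-capped control loop. The step size and power samples below are hypothetical, purely for illustration:

```python
# Illustrative sketch of a power-capped boost loop (NOT NVIDIA's actual
# algorithm): step the clock up while measured board power is under the
# 300W budget, and step back toward the base 3D clock when it is not.
BASE_CLOCK_MHZ = 915       # GTX 690 base 3D clock
POWER_BUDGET_W = 300       # typical board power quoted for the card
STEP_MHZ = 13              # hypothetical clock bin size

def boost_step(clock_mhz, measured_power_w):
    """Return the next clock bin given the latest power reading."""
    if measured_power_w < POWER_BUDGET_W:
        return clock_mhz + STEP_MHZ                   # headroom: clock up
    return max(BASE_CLOCK_MHZ, clock_mhz - STEP_MHZ)  # over budget: throttle

clock = BASE_CLOCK_MHZ
for power_w in (250, 260, 270, 280, 310):  # hypothetical power samples
    clock = boost_step(clock, power_w)
print(clock)  # 954
```

Because each GK-104 would run such a loop independently against its own power and voltage readings, the two cores can settle at different frequencies, exactly as described above.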

The Kepler series comes with a host of new technologies, some of which are exclusive to the new architecture, while others come to current-generation NVIDIA hardware via a quick driver update.

The first of the new technologies added to NVIDIA hardware is Adaptive Vsync. Before going into how the new technology works, let’s first examine the issue it addresses. To create a smooth gaming experience, many gamers rely on Vsync to cap their frame rate at 60FPS. This prevents issues such as screen tearing, which happens at higher frame rates, and should also keep the frame rate from dipping below what would be considered smooth. The issue, however, is that Vsync locks to the screen's refresh rate, so when the frame rate dips below 60FPS, Vsync drops to the next lowest synchronized rate of 30FPS. This dip causes what is known as frame stutter.

To address this, NVIDIA has added a new feature to their latest R300 drivers dubbed Adaptive Vsync. Essentially what this feature does is dynamically turn Vsync off if the frame rate dips below 60FPS. With Vsync disabled, the frame rate more smoothly transitions to lower frames-per-second instead of dropping to a lower refresh rate altogether. This helps prevent the in-game stutter and tearing mentioned earlier, thus creating a smoother gaming experience.
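The decision logic is simple enough to sketch. The divisor behavior of traditional Vsync below is the standard double-buffered case, shown only for comparison:

```python
# Sketch of the Adaptive Vsync behavior described above: cap at the refresh
# rate when the GPU can keep up, otherwise let the frame rate fall smoothly
# instead of snapping to an integer divisor of the refresh rate.
REFRESH_HZ = 60

def adaptive_vsync(raw_fps):
    if raw_fps >= REFRESH_HZ:
        return REFRESH_HZ      # Vsync on: no tearing above the refresh rate
    return raw_fps             # Vsync off: gradual degradation, no stutter

def traditional_vsync(raw_fps):
    # Classic double-buffered Vsync quantizes to 60, 30, 20, 15...
    for divisor in (1, 2, 3, 4):
        if raw_fps >= REFRESH_HZ / divisor:
            return REFRESH_HZ // divisor
    return raw_fps

print(adaptive_vsync(55), traditional_vsync(55))  # 55 30
```

A GPU rendering at 55FPS thus delivers 55FPS under Adaptive Vsync rather than being forced down to 30FPS.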

As you can see from the graph below, Adaptive Vsync allows the frame rate to drop and rise more smoothly than traditional Vsync. The frame rate in the graph did at times come close to 30FPS, but the difference is that the decline is gradual, which prevents the stutter caused when traditional Vsync snaps the frame rate straight down to 30FPS.

The R300 drivers also add FXAA to the NVIDIA Control Panel. This opens up the technology to hundreds of games, because it is no longer up to developers to implement it in order for it to be utilized.

FXAA is a technology developed by NVIDIA to reduce visible aliasing in games. This is done by applying FXAA along with other post-processing steps such as motion blur and bloom. Additionally, since FXAA is a post-processing shader technique rather than a hardware multi-sample method like MSAA, it improves performance while reducing the strain on memory.

On top of this, NVIDIA has also added an entirely new anti-aliasing technology called TXAA. TXAA is a film-style anti-aliasing technique designed to utilize the high texture performance of the Kepler architecture, combining hardware multi-sampling with a custom CG film-style resolve filter. In the case of 2x TXAA there is an optional temporal component for even better image quality. In total, TXAA can be used in 1x and 2x configurations; TXAA 1 offers image quality similar to 8x MSAA with a much lower performance impact, while TXAA 2 offers even higher image quality at a performance cost equivalent to running 4x MSAA.

Lastly we have another Kepler exclusive, an update to the NVIDIA Surround technology. Like Fermi-based hardware, Kepler supports both 3D Vision and Surround, but Kepler can run both technologies on a single graphics card. In the case of Fermi, two graphics cards were required to run more than two displays, but this is no longer the case with Kepler, which can simultaneously drive up to four displays out-of-the-box, without the need for adapters.

NVIDIA has also optimized the Surround technology to present the best available interface. This was achieved by using the middle screen as the main display, putting the task bar in an easy-to-access location. Currently the main display is fixed to the center screen, but a future update could add manual support to allow the user to adjust the setup to best fit their needs.

NVIDIA sent out a few teases before the launch of the GTX 690, but it was the crowbar that received the most attention. As soon as news regarding the crowbar went up on the net, rumors of its possible significance started to fly. By far the wildest, but most widely spread, rumor was that the crowbar had something to do with the Half-Life series, leaving many gamers on the edge of their seats hoping Valve had an announcement regarding the game. The fact of the matter, though, is that it had nothing to do with Half-Life, so we suspected the crowbar was to be used in a more traditional fashion: opening a crate!

The GTX 690 is NVIDIA's biggest, baddest graphics card built to date. This means it is going to be their new flagship product, positioned above all other single and dual GPU graphics cards. Since it is the new flagship in their stack, it goes without saying that the GTX 690 is aimed at the enthusiast market, in terms of both pricing and performance.

NVIDIA has gone all out with the design of the GTX 690. Unlike other graphics cards, the GTX 690 is completely built from high-quality materials. The frame of the cover is made of cast aluminum, and is protected with trivalent chromium plating. Trivalent chromium gives the GTX 690 a sleek, yet powerful look and is highly durable. The fan housing of the GeForce GTX 690 is made from injection molded magnesium alloy. Magnesium alloys are used throughout the automotive and aerospace industry (including the engines of the Bugatti Veyron and F-22 Raptor) for their light weight, heat dissipation, and acoustic dampening properties - which are the same reasons we see it used in the GTX 690. To create the intricate geometries required for the fan housing, NVIDIA used a form of injection molding called thixomolding, in which liquid magnesium alloy is injected into a mold. This allows for fine geometries and a tight, perfectly coupled fit.

The GTX 690 uses two GK-104 graphics processors, each of which is built on a 28nm fabrication process, has a die size of 295mm² and packs in 3.54 billion transistors, for a total of 7.08 billion. Since AMD has not yet released its HD 7990, we can't make a direct dual-GPU comparison, though the Tahiti GPU is 365mm² and has roughly 4.31 billion transistors. Even with two GPUs on one board, NVIDIA was able to keep the clock speeds high: the GTX 690 has a base clock of 915MHz and a Boost Clock of 1019MHz, and as stated before, GPU Boost can push the frequency above that target. Additionally, each GPU includes 4 GPCs, giving it 8 streaming multiprocessors, 1536 CUDA cores, 32 ROPs and 128 texture units per core, all of which are doubled across the card.

As for the memory, the GTX 690 utilizes a dual 256-bit wide GDDR5 memory bus. Like the memory of the GTX 680, the GDDR5 chips have been optimized to run at extremely high frequencies, and as we stated earlier, the total bandwidth for this card is 384GB/s. This is achievable because the memory is clocked at 1502MHz (6008MHz effective), which is equal to the memory speed of the GTX 680.

The back of the PCB really shows NVIDIA has utilized all the available space on the board. Just looking around the PCB we can see a 10-phase power supply, dual 8-pin power connectors, a single SLI connector for Quad-SLI and, of course, dual GPUs. Beyond this, the GTX 690 is also built on a 10-layer 2oz copper PCB that provides high-efficiency power delivery with less resistance, lower power loss and less heat generation. The GTX 690 also runs on a Gen 3.0 PCIe interface, which has double the maximum data rate of Gen 2.0, giving the card up to 32GB/s of bi-directional bandwidth on a x16 connector.
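The 32GB/s figure can be checked against the Gen 3.0 link parameters: 8 GT/s per lane with 128b/130b encoding, across 16 lanes in each direction (the quoted number rounds the result up slightly):

```python
# PCI Express 3.0 x16 bandwidth check.
gt_per_s = 8.0             # Gen 3.0 transfer rate per lane
encoding = 128 / 130       # 128b/130b line encoding overhead
lanes = 16

per_direction_gbs = gt_per_s * encoding * lanes / 8   # GB/s each way
bidirectional_gbs = per_direction_gbs * 2

print(round(per_direction_gbs, 2), round(bidirectional_gbs, 1))  # 15.75 31.5
```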

The GTX 690 was designed to offer best-in-class performance-per-watt, and its 300 watt board power is impressive for a dual-GPU graphics card. The GTX 690's closest competitor is the HD 6990, which under a worst-case load will draw around 375 watts and uses a dual 8-pin power configuration to ensure the board can run stable continuously. The GTX 690 has the same dual 8-pin setup, but unlike the HD 6990 it leaves a full 75 watts of that capacity unused. According to our math this makes the GTX 690 20% more power efficient than the HD 6990, while being substantially faster.
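The 20% claim is straightforward arithmetic on the two board power ratings:

```python
# Power-efficiency comparison from the board power figures above.
gtx690_watts = 300
hd6990_watts = 375

savings = (hd6990_watts - gtx690_watts) / hd6990_watts
print(f"{savings:.0%}")  # 20%
```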

Since the GTX 690 has 75 watts of additional power to play with, the overclocking headroom should be massive. NVIDIA could have gone with a 6+8 pin configuration to give the card exactly 300 watts. Since they didn't, all that extra power can be utilized when increasing the clock frequencies, which will improve the overclocking headroom.

Additionally, the GTX 690 has a large GeForce logo along the side of the heatsink. The logo is LED lit and by default it will glow a solid green. The LED can be manually adjusted though via software, allowing the user to either turn the LED off, or set custom profiles.

Once again NVIDIA has retooled the video outputs. Unlike other cards on the market that support multiple monitors, the GTX 690 doesn't require any special adapters, because it has three dual-link DVI ports on the rear panel along with a single mini-DisplayPort. This means three monitors are supported out-of-the-box, and an adapter will only be needed for the fourth display in a 3+1 3D Vision Surround setup. Both the DisplayPort and DVI connections can support resolutions of up to 2560x1600.

The sheer number of video outputs does reduce the space dedicated to the airflow exhaust. However, since each core has its own heatsink and the airflow exhausts equally from both ends of the graphics card, the reduced ventilation out of the back of the card should not hamper overall thermal performance. Also, to ensure no one is confused when connecting multiple monitors, NVIDIA has labeled the DVI connectors 1 through 3. This should make setting up Surround easier, as it gives the user a visual guide to follow.

The GTX 690 has been designed from the ground up to have the best of everything, and the thermal solution is no exception. Under the hood, each GK-104 core has its own dedicated cooling unit that consists of a self-contained vapor chamber and a dual slot heatsink. Overall, the thermal solution is similar to what NVIDIA used on the GTX 590, but this time around the size of the fin stack has been increased. The fan has been optimized for both thermal and acoustic performance and NVIDIA has used quality components for improved efficiency.

First off, there are two individual custom designed fin stacks that each incorporate a large copper vapor chamber at the bottom and a nickel-plated fin stack at the top. In addition, the GTX 690 features a center-mounted axial fan. The fan has been designed to optimize the fin pitch and angle at which air from the fan hits the fin stack.  This allows for smoother airflow, which in turn lowers the noise output. To improve the airflow even more, NVIDIA has added ducted airflow chambers that optimize the airflow through the fin stack and also reduce turbulence. The channels also promote smooth airflow, and all components under the fan are low-profile enough that they won’t cause turbulence or obstruct airflow.

The fan also utilizes a custom design, allowing it to maximize the airflow yet still run at low acoustic levels. This was achieved through a large centrally mounted axial fan that measures 90mm. The fan is specifically designed to push ample air though both fin stacks while being exceptionally quiet. According to NVIDIA the maximum acoustic level for the fan is only 47dBa, which makes it quieter than most case fans. This means the GTX 690 will be quieter than a system using dual GTX 680 graphics cards, but offer nearly the same level of performance. The fan’s control software is also fine-tuned so the changes in fan speed occur gradually, rather than in discrete steps.

The PCB consists of a massive 10-phase power supply, dual 295mm² GK104 GPUs, 16 memory chips and the dual 8-pin power connectors. Of course, as a dual-GPU graphics card, the GTX 690 also has to have a bridge chip; NVIDIA uses an on-board PLX bridge that provides independent PCI Express x16 access to both GPUs for maximum multi-GPU throughput. The PCB itself is not that large considering the number of components NVIDIA has packed onto this beast.

GPU Boost & Overclocking:

The rated GPU Boost clock of the GTX 690 is 1019MHz, but since this is just a target, the actual Boost clock typically runs higher. The GTX 690 we were sent was no exception: our sample boosted to 1071MHz, which is 5% faster than NVIDIA's GPU Boost target. These clocks were hit with no alterations to the power target or GPU offset, meaning the Boost feature dynamically raised the clock speed 5% with no user interaction.

When it came to overclocking, we again used the EVGA Precision X software utility. In our labs the GTX 690 performed similarly to the GTX 680, as both were able to increase the GPU clock speed to within 40MHz of each other. In order to achieve the highest possible overclock, we increased the power target to the maximum 132%, which raised the voltage to around 1175mV. At this level, both cores were able to Boost up to an impressive 1240MHz and remain completely stable. Compared to the default Boost rating, that is an increase of 221MHz, or 21.7%. However, since the Boost Clock on our GTX 690 usually runs at 1071MHz at default settings, it is more like a 15.8% increase.
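The overclocking percentages work out as follows from the three clock figures above:

```python
# Overclock gains relative to the rated and observed Boost clocks.
boost_rating_mhz = 1019    # NVIDIA's advertised Boost Clock
observed_boost_mhz = 1071  # what our sample ran at stock settings
overclock_mhz = 1240       # maximum stable Boost under a 132% power target

gain_vs_rating = (overclock_mhz - boost_rating_mhz) / boost_rating_mhz
gain_vs_observed = (overclock_mhz - observed_boost_mhz) / observed_boost_mhz

print(f"{gain_vs_rating:.1%} {gain_vs_observed:.1%}")  # 21.7% 15.8%
```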

The memory also overclocked substantially, which was surprising considering it already runs at 6008MHz effective. Our final stable overclock was 1630MHz (6520MHz effective), up from 1502MHz. At this speed, the per-core memory bandwidth increased to 208.6GB/s, an 8.5% overclock.
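The overclocked bandwidth figure checks out the same way as the stock numbers:

```python
# Overclocked memory bandwidth and gain, per GK-104 core.
stock_mhz = 1502
overclocked_mhz = 1630

effective_mhz = overclocked_mhz * 4                   # GDDR5 quad data rate
per_core_gbs = effective_mhz * 1e6 * (256 / 8) / 1e9  # 256-bit bus per core
gain = (overclocked_mhz - stock_mhz) / stock_mhz

print(round(per_core_gbs, 1), f"{gain:.1%}")  # 208.6 8.5%
```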

Hardware Configuration:


Benchmarks DX11:

Test Settings:



(Note: All models might not be included in this review. The table below is to be used for comparison purposes)
AMD Specifications

                      HD 7970    HD 5830    HD 5870    HD 6950    HD 6970
Processing Cores      2048       1120       1600       1408       1536
Core Clock            925MHz     800MHz     850MHz     800MHz     880MHz
Memory Clock          1375MHz    1150MHz    1200MHz    1250MHz    1375MHz
Memory Interface      384-bit    256-bit    256-bit    256-bit    256-bit
Memory Type           GDDR5      GDDR5      GDDR5      GDDR5      GDDR5
Fabrication Process   28nm       40nm       40nm       40nm       40nm

NVIDIA Specifications

                         GTX 690            GTX 680            GTX 470    GTX 570    GTX 580
Processing Cores         3072               1536               448        480        512
Core Clock / Boost Clock 915MHz / 1019MHz   1006MHz / 1058MHz  607MHz     742MHz     782MHz
Memory Clock             1500MHz            1500MHz            837MHz     1250MHz    1002MHz
Memory Interface         256-bit x2         256-bit            320-bit    320-bit    384-bit
Memory Type              GDDR5              GDDR5              GDDR5      GDDR5      GDDR5
Fabrication Process      28nm               28nm               40nm       40nm       40nm

Futuremark's latest 3DMark 2011 is designed for testing DirectX 11 hardware running on Windows 7 and Windows Vista. The benchmark includes six all new benchmark tests that make extensive use of all the new DirectX 11 features including tessellation, compute shaders and multi-threading.

In 3DMark 11, the GTX 690 simply destroyed all the other single and dual GPU graphics cards we had previously tested. The most impressive thing for us was that even prior to overclocking, the GTX 690 was able to score nearly 6000 marks in the most demanding suite.

Unigine Heaven became very popular very fast because it was one of the first major DirectX 11 benchmarks. It makes great use of tessellation to create a visually stunning scene.

Like the prior benchmark, Unigine hands the GTX 690 a clear victory in the GPU wars. Here the GTX 690 ran nearly twice as fast as the GTX 680, and in comparison to the previous-generation dual-GPU GTX 590 and HD 6990, it was 57% and 46% faster, respectively.

Batman: Arkham City is the sequel to the smash hit Batman: Arkham Asylum. The game was created with Unreal Engine 3 and includes areas with extreme tessellation, high-resolution textures and dynamic lighting. It also includes native support for PhysX and is optimized for NVIDIA 3D Vision technology.

The top graph reflects our results at 1920x1080, while the lower graph reflects our Eyefinity and Surround results at 5760x1080.

The GTX 690 is going to cut through games like butter at 1080p. The results in Batman Arkham City were well above 100FPS even with the in-game settings at high. Breaking the results down by percentages, the GTX 690 came out 46% faster than the GTX 680, 54% faster than the HD 6990 and 40% faster than the GTX 590.

The GTX 690 was designed to run in Surround, as even at 5760x1080 the card achieved an impressive frame rate of 72FPS at the default settings, and 85FPS when overclocked.

Battlefield 3 is designed to deliver unmatched visual quality by including large-scale environments, massive destruction and dynamic shadows. Additionally, BF3 includes character animation via ANT technology, which is also being utilized in the EA Sports franchise. All of this is definitely going to push any system to its threshold, and is the reason so many gamers around the world are currently asking if their current system is up to the task.

The top graph reflects our results at 1920x1080, while the lower graph reflects our Eyefinity and Surround results at 5760x1080.

Wow! Battlefield 3 is one of the most visually demanding games on the market, but the GTX 690 was still able to run it at well over 100FPS. The GTX 690 was nearly 70% faster than the GTX 680 in this benchmark, 72% faster than the HD 6990 and 63% faster than the GTX 590. We would compare it further to other graphics cards, but really, what would be the point?

At 5760x1080, the GTX 690 dominated all the other graphics cards, and it was the only card in the chart that came close to playing Battlefield at nearly 60FPS at such a high resolution.

Crysis 2 is a first-person shooter developed by Crytek and built on the CryEngine 3 engine. While the game was lacking in graphical fidelity upon its release, Crytek has since added features such as DX11 support and high-quality textures. This improved the in-game visuals substantially, which in turn pushes even high-end hardware to the max.

The top graph reflects our results at 1920x1080, while the lower graph reflects our Eyefinity and Surround results at 5760x1080.

In Crysis 2 the GTX 690 again came close to doubling the performance of the GTX 680, ultimately falling about 20% short. That is still impressive, as it really closes the gap between single and dual GPU solutions with the same per-core performance. Additionally, once we overclocked both GPUs, the GTX 690 was actually 93% faster than the GTX 680.

Once again the GTX 690 gave us a smooth gaming experience at 5760x1080. This time the frame rate wasn't as close to 60FPS as in the previous benchmarks, but 44FPS is still well above the 30FPS mark needed to run without stutter.

DiRT 3 is the third installment in the DiRT series and, like its predecessor, incorporates DX11 features such as tessellation, accelerated high-definition ambient occlusion and full floating-point high dynamic range lighting. This makes it a perfect game to test the latest DX11 hardware.

The top graph reflects our results at 1920x1080, while the lower graph reflects our Eyefinity and Surround results at 5760x1080.

For anyone gaming at 1080p most of the time, the GTX 690 might be overkill. Our results show the GTX 690 was able to hit 158FPS in DiRT 3, making it 45% faster than the HD 6990, 52% faster than the GTX 590 and 68% faster than the GTX 680.

The GTX 690 continues to impress at 5760x1080, as it was able to keep the frame rate well above 60FPS.

Metro 2033 puts you right in the middle of post-apocalyptic Moscow, battling mutants, rivals and radioactive fallout. The game is very graphics-intensive and utilizes DX11 technology, making it a good measure of how the latest generation of graphics cards performs under the latest standard.

The top graph reflects our results at 1920x1080, while the lower graph reflects our Eyefinity and Surround results at 5760x1080.

Even though Metro 2033 has been on the market for quite some time, it is still one of the toughest games on hardware in terms of graphical horsepower required. So the fact that the GTX 690 was able to run it above 110FPS, even at 1080p, is impressive. If we look at the comparisons to its closest competitors, the GTX 690 is 53%, 44% and 17% faster than the GTX 680, GTX 590 and HD 6990, respectively.

The GTX 690 did an impressive job in this benchmark at 5760x1080. There aren't many cards on the market that can run Metro 2033 above 40FPS at this resolution, so the GTX 690 is definitely in a league of its own when playing games at such a high resolution, with all the in-game settings at high.

Total War: Shogun 2 is a game that creates a unique gameplay experience by combining both real-time and turn-based strategy. The game is set in 16th-century feudal Japan and gives the player control of a warlord battling various rival factions. Total War: Shogun 2 is the first in the series to feature DX11 technologies to enhance the look of the game, but with massive on-screen battles it can stress even the highest-end graphics cards.

The GTX 690 easily tore through this DX11 game at 1080p. Other than the HD 6990, none of the other graphics cards in our charts reached 100FPS, so the fact that the GTX 690 went well past that mark is impressive.

The performance during our Surround testing was equally impressive. At both the stock and overclocked settings, the GTX 690 was able to run Total War above 50FPS, meaning there is going to be no stuttering while playing this game across three displays.


To measure core GPU temperatures, we run three in-game benchmarks and record the idle and load temperature according to the min and max temperature readings recorded by MSI Afterburner. The games we test are Crysis 2, Lost Planet 2 and Metro 2033. We run these benchmarks for 15 minutes each. This way we can give the included thermal solution and GPU time to reach equilibrium.

Well this isn't something you see every day when it comes to a dual core graphics card. Our testing indicates that even during the most demanding games, the dual GK-104 graphics cores remain at a lower temperature than most single GPU solutions. In fact, the GTX 690 was able to keep the cores cooler than both the GTX 680 and HD 7970, while being quieter than both cards to boot.

Power Consumption:

To measure power usage, a Kill A Watt P4400 power meter was used. Note that the numbers represent the power drain for the entire benchmarking system, not just the video cards themselves. For the 'idle' readings we measured the power drain from the desktop, with no applications running; for the 'load' situation, we took the sustained peak power drain readings after running the system through the same in-game benchmarks we used for the temperature testing. This way we are recording real-world power usage, as opposed to pushing a product to its thermal threshold.

The power consumption of the GTX 690 is obviously still high since it uses dual GPUs, but it consumes 10% less power than the GTX 590 and HD 6990, while being nearly 50% faster.

When it comes right down to it, the NVIDIA GeForce GTX 690 is simply the fastest graphics card on the market. This fact is indisputable, as in each benchmark the GTX 690 was able to leave the previous-generation GTX 590 and HD 6990 cards in its wake. Of course the difference varied by game, but overall the GTX 690 was around 50% faster than both the GTX 590 and HD 6990. This makes the GTX 690 the first graphics card available that really has enough horsepower to play DX11 games at 5760x1080 and still achieve a frame rate of around 60FPS. In fact, there were only a few games (such as Crysis 2 and Battlefield 3) where the GTX 690 was not consistently at 60FPS across three displays. Even in these demanding games, however, the GTX 690 was still able to run at around 50FPS, well above the stutter-free point of 30FPS.

In comparison to the single-GPU GTX 680, the GTX 690 didn't quite double the performance, but at times it was close. Again, if we look at the difference between the two cards in the most demanding of our benchmarks, Crysis 2 and Battlefield 3, the GTX 690 was 78% and 70% faster, respectively. That is the closest we have seen a dual-GPU model come to doubling single-GPU performance, and beyond that the GTX 690 has some additional advantages: better thermal performance, acoustics and power consumption than dual GTX 680s. It is also the highest quality graphics card on the market, and the first to incorporate so many premium materials to enhance visual appeal and thermal performance.

The GTX 690 also comes with all the latest features, including Boost Clocks, 3D Vision Surround on a single graphics card, Adaptive Vsync and improved anti-aliasing technologies. All of these features add to the already impressive feature-set of the GTX series and increase the appeal of the graphics card to a broader market. Of course, since the GTX 690 has dual Kepler cores, it is better suited to handling these technologies, and when running games with FXAA enabled we were reaching frame rates well beyond the scores shown in the review with MSAA enabled. In addition, the GTX 690 also makes it easier to set up Surround, as its three DVI ports eliminate the need for cable adapters when running three monitors.

Overall, the GTX 690 is hands down the best-of-the-best. It offers best-in-class performance, noise and thermal levels, and is extremely efficient considering the raw gaming power. It is also designed for Surround and 3D Surround technologies, being the only single PCB solution available that can play any game at 5760x1080 without stutter. However, at $1000 it is an exclusive and expensive product, but if you have the means to enter the club, you won’t be disappointed.


Copyright Neo Era Media, Inc., 1999-2014.
All Rights Reserved.
