Nvidia GeForce GTX 680 Launch Review
Chris Ledenican - Thursday, March 22nd, 2012

Today marks an important step in the evolution of the graphics card market, as Nvidia is releasing its first new architecture since 2010. The new architecture, dubbed “Kepler”, utilizes a 28nm process node, sports a redesigned SMX architecture and is built on three core principles: a faster, smoother and richer gaming experience.
Starting with the core, Nvidia has built on the foundation of the Fermi architecture by utilizing a similar CUDA core design, but Kepler uses an entirely new geometry pipeline to improve tessellation, along with improved displacement mapping and compute operations to lift performance across the board. Additionally, the Kepler architecture eliminates the “hot-clock” design used by Fermi, so the CUDA cores no longer run at double the GPU clock speed. This allowed Nvidia to instead triple the number of CUDA cores without power consumption spiking through the roof. The tradeoff, however, is that individual Kepler CUDA cores are not as fast as Fermi's, so we can't make a direct core-for-core comparison between the two. Some Nvidia reps have suggested that 512 Fermi cores equate to roughly 768 Kepler cores, give or take a few.

The GPU also features an entirely new clock design that utilizes both a stock and boost clock speed. The stock speed is considered a guarantee of the lowest GPU frequency used while gaming, while GPU Boost dynamically increases the clock speed above the stock frequency to ensure the best results. GPU Boost works at both a hardware and software level, and monitors the GPU's power consumption and heat level before dynamically boosting the clock speed. While the GPU boost frequency of the GTX 680 is set at 1058MHz, it is possible to see clocks beyond 1100MHz as long as the conditions are acceptable. GPU Boost is also considered the average speed the GPU should run at under most load conditions.
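The boost behavior described above can be sketched as a simple control loop. Note that all thresholds, step sizes and power figures below are made-up illustrative values, not Nvidia's actual firmware logic, which runs at the hardware level; only the 1006MHz base and 1058MHz boost clocks come from the GTX 680's spec.

```python
# Illustrative sketch of a GPU Boost-style control loop.
# The power limit, temperature limit and step size are hypothetical
# values chosen for illustration; the real algorithm is proprietary.

BASE_CLOCK = 1006    # MHz, guaranteed minimum frequency under load
BOOST_CLOCK = 1058   # MHz, typical/average boost target
POWER_LIMIT = 195.0  # watts, hypothetical board power target
TEMP_LIMIT = 80.0    # degrees C, hypothetical thermal ceiling
STEP = 13            # MHz per adjustment step (hypothetical)

def next_clock(current_mhz, power_w, temp_c):
    """Step the clock up while under both limits; otherwise step it
    back down, but never below the guaranteed base clock."""
    if power_w < POWER_LIMIT and temp_c < TEMP_LIMIT:
        return current_mhz + STEP        # headroom available: boost higher
    return max(BASE_CLOCK, current_mhz - STEP)  # back off toward base

# Example: with plenty of power/thermal headroom the clock climbs
# well past the rated boost clock, as seen on retail cards.
clock = BASE_CLOCK
for _ in range(10):
    clock = next_clock(clock, power_w=150.0, temp_c=65.0)
print(clock)  # 1006 + 10*13 = 1136
```

The key design point is that the base clock is a floor, not a target: the card opportunistically converts unused power and thermal budget into extra frequency.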
Nvidia has also entirely reworked the memory subsystem of Kepler based boards. This allowed them to push the memory frequency up to 1500MHz (6000MHz effective). At this speed the memory is able to achieve a bandwidth rating of 192GB/s, even while running on a 256-bit interface. In our opinion this is an important piece of the Kepler design, as it allows Nvidia to build the board at a lower cost than if they were using a larger frame buffer and wider memory interface. In turn, the design allows them to price the GTX 680 lower than the HD 7970. If Nvidia's claim of the GTX 680 being 10% faster than the HD 7970 holds true, AMD is going to have to drop their prices if they want to stay competitive.
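The 192GB/s figure follows directly from the effective data rate and the bus width, which is worth sanity-checking:

```python
# Memory bandwidth = effective data rate * bus width in bytes.
effective_rate_mts = 6000          # 1500 MHz GDDR5, quad-pumped to 6000 MT/s
bus_width_bits = 256               # GTX 680 memory interface
bus_width_bytes = bus_width_bits / 8

bandwidth_gbs = effective_rate_mts * 1e6 * bus_width_bytes / 1e9
print(bandwidth_gbs)  # 192.0
```

This is why the narrower 256-bit bus (versus the HD 7970's 384-bit bus) is not the handicap it might appear: the much higher memory clock makes up the difference.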
On top of all the aforementioned specifications, Nvidia builds upon this already impressive feature set with support for 3D Surround via single graphics card, improved anti-aliasing features and improvements to PhysX. Obviously we have a lot to cover, so let’s move on and see what makes this beast tick, and more importantly, how it performs against AMD's Radeon HD 7970.

~51fps in BF3 @ 2560x1600 w/ 4xAA & 16xAF? that's a big meatball!
and there's enough performance left to force all those other fancy features, like transparency AA.
when you got it, Nvidia, you got it.
the next generations of GPUs will be very interesting. i don't think the GTX 680 will be anywhere near powerful enough to max out next-gen console games the way it handles BF3 (2560x1600 & beyond, bunches of AA, big ol' AF, etc), assuming DirectX doesn't get any major efficiency reworks. it's a fun ride.
either i wait and see what the 690 is like (and its cost!), or i'll probably have to settle for Eyefinity on the 7970 and sacrifice the CUDA that Folding@Home greatly benefits from.
Any news on how much bandwidth the 7970 and 680 need? As far as I'm aware, PCIe 2.1 x16 hasn't been saturated yet.
AVP matched the 7970 with both at stock levels.
Only 8.6% faster at the highest settings in Arkham City with both at stock clocks.
Smokes the 7970 by 18.6% at best in Battlefield.
Loses to the 7970 in Crysis by 21.6% at the highest settings.
Up to 24% faster than the 7970 in Dirt 3 at the middle-tier settings. Around 18% in the others.
Loses by 15-25% in Metro compared to the 7970. And Metro is not an AMD-biased or AMD-optimized game; until the 6xxx cards came out, AMD always lost in it by a wide margin. The 7970 either has an architecture better suited to the game or currently has better drivers. I have no doubt the 680's performance could improve, as it's a new architecture and could probably use some driver work.
The 680 also loses in Total War, by between 10 and 17%.
On average that makes the 680 about 5% slower at the highest settings.
Like I said, hardly a 7970 killer. It's a very efficient architecture and it certainly trades blows with the AMD card, but it's not nearly 10-15% faster across the board as stated in the article. I really have to question whether you did the math or just eyeballed it.
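One way to sanity-check an "average" claim like this is to average the per-game deltas quoted in the comment above. The answer swings noticeably depending on which figure you pick for the games quoted as ranges; the sketch below uses midpoints, which is a judgment call, not the commenter's exact method.

```python
# Per-game performance deltas for the GTX 680 vs. the HD 7970 at the
# highest settings, taken from the figures quoted in the comment above
# (positive = 680 faster). Range-quoted games use the midpoint, which
# is an assumption -- shift those picks and the average shifts too.
deltas = {
    "AVP": 0.0,            # matched
    "Arkham City": 8.6,
    "Battlefield 3": 18.6, # "at best"
    "Crysis": -21.6,
    "Dirt 3": 18.0,        # "around 18% in the others"
    "Metro": -20.0,        # midpoint of the 15-25% range
    "Total War": -13.5,    # midpoint of the 10-17% range
}
average = sum(deltas.values()) / len(deltas)
print(round(average, 1))  # -1.4
```

With these picks the simple mean lands closer to even than to 5% behind, which mostly reinforces the "trades blows" reading: the headline number is very sensitive to game selection and settings.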
The overclocking of the 680 was 15.5% core and about 19% vram.
The 7970 on the other hand had a gpu OC of 21.6% (925mhz to 1125 mhz) and about 14.5% vram OC. So the overclocking is kind of 'meh' compared to it. It's a nice increase no doubt but it's nothing to write home about. The 7970 is able to go past the limits set in CCC in the 7970 review but the gtx 680 actually capped out before its software limits, whether that's due to power or physical limitations of the architecture at that voltage I don't know though.
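The 7970's quoted core overclock percentage checks out from the clocks given (925MHz stock to 1125MHz overclocked):

```python
def oc_percent(stock_mhz, oc_mhz):
    """Overclock headroom as a percentage of the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(round(oc_percent(925, 1125), 1))  # 21.6
```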
The pricing and efficiency is what makes this a great card, not the raw performance.
Perspective is important.
to play next-gen console ports, your compy will need to be considerably more powerful to run 'em. you want to notch up the resolution, add mods and force spiffy gfx options? be ready to throw a ton more power at it.
and emulation is a whole 'nother story.
just sayin', by the time next-gen console games are ported to PCs, a GTX 680 probably won't be too relevant.
Weird how it can sometimes have negative scaling (if that is the correct usage of this somewhat technical term).
Thanks for the response on our GTX 680 review. I noticed this is the second time you have mentioned not being sure of the reliability of our scores. Can you tell us what you would like to see done differently in the way we approach and analyze our benchmarking results?
Thanks