Nvidia FX5950 Ultra Review

Author: Howard Ha
Editor: Howard Ha
Publish Date: Thursday, October 23rd, 2003
Originally Published on Neoseeker (http://www.neoseeker.com)
Article Link: http://www.neoseeker.com/Articles/Hardware/Reviews/fx5950ultra/
Copyright Neo Era Media, Inc. - please do not redistribute or use for commercial purposes.

The 3D card market has been rocked by a lot of surprises and controversy this year - it's become a sort of soap opera for the technology masses. What's become apparent from our reporting standpoint is that Nvidia is beginning to suffer from its position as the top video card supplier in the world, while ATI is winning a lot of hardcore and early adopter confidence with its products: it's the same David versus Goliath story being told on the Intel/AMD front. Is Nvidia really beginning to lose its glitter? Is it becoming the behemoth that no longer respects the hardcore users it won over during its early battles with 3dfx? Nvidia doesn't think so, but the enthusiast crowd has definitely been wooed by ATI's price to performance ratio, and many are now wary after all the negative press Nvidia has been receiving.

It's ironic because on one hand the public ridicules Nvidia for its video card practices, and on the other hand there's no end of praise from the same hardcore public for the Nforce2 chipset that pretty much dominates the AMD enthusiast market. And of course Nvidia's reputation amongst the regular crowd has remained relatively untarnished - the hardcore enthusiast crowd need not represent the opinion of the general masses. Still, Nvidia is definitely not just resting on its laurels. As long as it doesn't find itself in any more scandals, we think the release of the FX5950 Ultra and the FX5700 Ultra will give the company a nice pick-me-up this holiday season and early in 2004: these two cards are evidence that Nvidia is fighting as hard as ever to deliver the products that hardcore users are asking for.

We have both the FX5950 Ultra and FX5700 Ultra boards in our test labs, but we received our FX5700 only last Friday and have been pressed for time to test it fully. For that reason today's article will focus mostly on the FX5950. Our initial tests of the FX5700 Ultra, however, are very encouraging... we think the 5700 will give ATI's 9600XT a real run for its money! With 3 times the processing power of the FX5600 Ultra it's no wonder... I'll show you some initial results later that are quite astonishing.

First let's take a look at the FX5950 in all its glory...

Top view of the FX5950

Memory heat spreader on back of the FX5950

As you can see from those pictures, Nvidia's FX5950 is yet another dual slot design - not surprising since many of you have already seen leaked pictures of the card in our news. The picture of the heatspreader on the back is actually a picture supplied by Nvidia - our own reference sample does NOT use the same fancy aluminum spreader, and instead uses a much less refined looking black cooler, like the one found in our FX5900 Review.

Here's a more detailed look at the card:

Not a typical impeller design

Skived fins

That's one HUGE looking card. In reality though, the 5950 reference card is around 1" shorter than the Nvidia FX5900 Ultra reference card that we reviewed earlier this year. That's a nice improvement considering the FX5950 is clocked much faster. Nobody's going to be too impressed though, since the reference board is still 1" longer than the 9800PRO and just shy of that much longer than the 9800XT. When we talked to ATI a few weeks ago, they told us the engineers working on the 9800XT had a rough time with the card design because they were so concerned about keeping it close to the same size as the 9800PRO - they ended up lengthening the card a little once they realized that even with a slight bump in length, it still wouldn't be nearly as long as Nvidia's top end cards. We took a picture of the 5950, 5900, 5700, and 9800PRO to show you the relative sizes:

From Left To Right: FX5900 Ultra, FX5950 Ultra, FX5700 Ultra, RADEON 9800PRO

While the FX5950 sports yet another really huge fan system, it's actually the best fan design to come with one of Nvidia's high end cards. The size isn't a surprise either - ever since the FX5800 was released, readers have been prepared for these honkers. The real surprise, though, is in the LACK of noise this thing generates... on boot up the FX5900 would spin up fairly loud, but this baby boots up with a quiet purr that can barely be heard over the other components in the system. Beautiful.

With respect to size and fan size, don't forget that the current trend is for some of the more creative vendors to have shorter board designs and more interesting fans, so the FX5950 production boards from vendors may even turn out to be nearly the same length as 9800XT boards.

Fan noise

When we were briefed about the FX5950 in September, one thing that caught our eye was that Nvidia intended to include "silent" cooling solutions on their reference boards. Now sure enough, everyone's had their fun with the "dustbuster" moniker for the FX5800's super loud cooler. When pictures of the NV38 leaked out on the internet and we showed people the expected design of the FX5950, the first thing we'd be asked was "Is that going to be another dustbuster?!?"

Thankfully, the FX5950's fan design only LOOKS loud - it's actually whisper quiet, even under load. In fact, the cooler is rated at 34dB, and we found it to be quieter than both the 9800PRO and 9600PRO fans. One reason for the reduced noise appears to be a more modest airflow through the cooler. The blower style impeller and the enclosed fan design have allowed Nvidia to both reduce the RPMs and increase cooling efficiency despite the lower airflow.

Specs Comparison

A basic comparison of some of the specs is listed below. While the FX5950's memory is specced to run at 475MHz (950MHz effective DDR), the reference sample is equipped with memory rated at 500MHz.

Comparison of Next Generation Video Cards

                          | 9800PRO 256MB (R350) | 9800XT (R360)       | FX5900 Ultra (NV35) | FX5950 Ultra (NV38)
GPU process               | 0.15 micron          | 0.15 micron         | 0.13 micron         | 0.13 micron
GPU clock                 | 380MHz               | 412MHz              | 450MHz              | 475MHz
Memory clock              | 340MHz (680MHz DDR)  | 365MHz (730MHz DDR) | 425MHz (850MHz DDR) | 475MHz (950MHz DDR)
Memory bus                | 256-bit DDR          | 256-bit DDR/DDR2    | 256-bit DDR         | 256-bit DDR
Memory                    | 256MB                | 256MB               | 256MB               | 256MB
Memory bandwidth          | 21.8 GB/s            | 23.4 GB/s           | 27.2 GB/s           | 30.4 GB/s
Max FSAA                  | 6x                   | 6x                  | 4x                  | 4x
MSRP at launch            | $499 (Q1 2003)       | $499 (Sept 2003)    | $499 (May 2003)     | $499 (October 2003)
Street price (Oct 2003)   | $360-460             | $489-539            | $379-450            | Not listed yet
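Those peak bandwidth figures fall straight out of the bus width and the effective memory clock. A quick back-of-the-envelope sketch of the arithmetic, using the numbers from the table above (illustrative only; `peak_bandwidth_gb_s` is just a helper name we made up):

```python
# Peak theoretical memory bandwidth = effective memory clock x bus width in bytes.
# Illustrative arithmetic only, using the figures from the spec table above.

def peak_bandwidth_gb_s(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s (1 GB = 10**9 bytes)."""
    bytes_per_transfer = bus_width_bits / 8          # 256-bit bus -> 32 bytes
    return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9

print(f"FX5950 Ultra: {peak_bandwidth_gb_s(950, 256):.1f} GB/s")  # 30.4 GB/s
print(f"FX5900 Ultra: {peak_bandwidth_gb_s(850, 256):.1f} GB/s")  # 27.2 GB/s
```

The same formula gives the 9800PRO and 9800XT their 21.8 GB/s and 23.4 GB/s figures from their 680MHz and 730MHz effective clocks.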

DirectX 9: The Tables Are Turned

The introduction of DirectX 9 into the gaming community has turned the tables on Nvidia quite severely. Most of you by now have read that the FX cards simply cannot compete against the R3XX cards from ATI, and with good reason: where before the FX cards were performing on sheer fillrate and vertex performance, DX9.0 introduces pixel shader 2.0, which means that a card's shader performance factors heavily into its performance.

The FX cards lack 24-bit floating-point precision and do not seem to work well with Microsoft's HLSL (High Level Shader Language) compiler for DX9. This means that Nvidia has to either use the slower 32-bit mode or fall back on the lower quality 16-bit mode, neither of which is ideal for a company in an industry where performance and image quality are two key components of success.

Driver Optimisations Are the Devil!

I remember a few years ago, when Nvidia was dominating the market, users would be thrilled with every driver release that optimised game performance. How the landscape of driver optimisation has changed! Today, hardcore gamers, and reviewers especially, face the reality that apples to apples comparisons between video cards are made all the harder by driver optimisations that influence not only performance, but also image quality.

Now with every release of drivers that tout performance enhancements, users are left to wonder in what creative way programmers were able to increase performance in games - and how those tweaks affect image quality.

Nvidia has suffered the worst publicly, from being called cheaters to being accused of misleading their customers by enhancing performance through selective rendering of different graphics. I think it's clear that both ATI and Nvidia try to optimise by balancing between various methods of filtering (like aggressive Trilinear filtering), but whether Nvidia was purposely misleading gamers is something only Nvidia can know. I'm a little wary of thinking the worst of the company when it's very possible that some of the bad press is generated by driver BUGS that were left in there unintentionally.

Regardless of whether these types of "optimisations" are intentional, users are now forced to wonder under what conditions their cards are rendering lower quality images than competing cards. The problem is difficult for even reviewers to pinpoint because image quality differences often only occur in specific scenes, and sometimes only at specific angles within scenes.

For this reason the only thing we can do as reviewers and consumers is to not only stay informed of driver revisions, but also rely on a larger number of NEW game benchmarks.

Of course just saying that we want to benchmark using new games is no easy task: most games do not have built in benchmarking tools, and the small subset of new games that DO have these tools are known to driver developers. Other sites have been trying to circumvent the problem by recording their own timedemos in games which allow demo recording, and some sites are simply using FRAPS with less scientific methods (ie running through a level?! with FRAPS on?!). At Neoseeker we're still looking at different options for benchmarking, and in the meantime we're using both some of the older benchmarks that we've had for a while, and a new mix of benchmarks.
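To see why a hand-played FRAPS run-through is less scientific than a scripted timedemo, consider the run-to-run spread: if your own runs vary more than the cards differ, the comparison is meaningless. A toy sketch (the FPS samples below are entirely made up for illustration):

```python
# Hand-played runs vary from attempt to attempt, so their spread can swamp the
# actual difference between two cards; scripted timedemos replay identically.
# The FPS samples below are made up purely for illustration.
from statistics import mean, stdev

fraps_runs    = [52.1, 47.8, 55.3, 49.0, 58.2]  # hypothetical hand-played runs
timedemo_runs = [50.4, 50.6, 50.3, 50.5, 50.4]  # hypothetical scripted replays

for label, runs in (("FRAPS run-through", fraps_runs), ("timedemo", timedemo_runs)):
    print(f"{label}: mean {mean(runs):.1f} fps, run-to-run spread {stdev(runs):.2f}")
```

In this made-up case the hand-played spread is several fps while the scripted spread is a fraction of one, which is exactly why sites are moving toward privately recorded timedemos.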

Benchmarking & Performance Preface

For performance tests we changed our suite of benchmarks this time around. We're still in a transition stage and the limited time between the public release of Catalyst 3.8 and Nvidia's private release of 52.16 drivers to 5950 reviewers meant that we could only introduce 4 new benchmarks into the mix:

AquaMark 3
Halo PC Timedemo (DirectX9 title)
Jedi Knight: Jedi Academy timedemo
Final Fantasy Benchmark

We have kept most of our old benchmark tests as well, but we will slowly move away from using those as references as we are able to evaluate and include new benchmarks into our suite.

Note also that we do not yet have a 9800XT to compare against; ATI is still waiting on more review samples so that we can do a proper and fair comparison. What we've done instead is overclock our 9800PRO to 412MHz core and 702MHz memory to get an idea of how the 9800XT _might_ perform. The 9800XT's actual memory spec is 730MHz, so bear that in mind until we either have a real 9800XT for testing, or until we rebench after successfully clocking our 9800PRO's memory to that speed.
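As a rough sanity check on how close that overclock gets us, here's a back-of-the-envelope sketch (`pct_short` is just an illustrative helper name; the 9800XT clocks are taken from the spec table earlier):

```python
# Back-of-the-envelope check: how far our overclocked 9800PRO (412/702) sits
# below real 9800XT clocks (412MHz core / 730MHz effective memory, per the
# spec table). Helper name is ours, purely for illustration.

def pct_short(actual: float, target: float) -> float:
    """Percent by which `actual` falls short of `target`."""
    return (target - actual) / target * 100

print(f"core:   {pct_short(412, 412):.1f}% short")  # 0.0% short
print(f"memory: {pct_short(702, 730):.1f}% short")  # 3.8% short
```

So the overclocked card matches the 9800XT's core clock exactly and sits about 4% shy on memory, which is why its numbers should be read as a slight underestimate of the real 9800XT.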

Our test platform has not changed yet, as we feel the setup is still a valid mid-high end system that will represent the typical gamer. We intend to keep this benchmark setup until 2004, when we'll start looking at possibly moving to a new high end system.

Test Setup:

The full specs of our benchmark system are below:

Intel P4 2.8Ghz chip
AVC Sunflower P4 Cooler
MSI 648Max motherboard
512MB OCZ PC2700 DDR Memory
Seagate 120GB ATA133 Barracuda ST3100
WinXP with SP1

While some benchmarks will be CPU limited, the current test setup will for the most part be powerful enough as a test platform.

AquaMark3

AquaMark 3 is powered by Massive Development's krass engine, which uses DirectX9 features such as Vertex and Pixel Shader 2.0. The same engine powers both AquaNox 2: Revelation and Spellforce, an RTS title. Because it's built on an actual game engine and employs several advanced features, Massive feels AquaMark 3 represents a "realistic" performance test of your system for current and upcoming games. Considering that even high end cards slow to a CRAWL in parts of this test at the higher resolutions, we tend to agree with Massive ;)

For the sake of brevity I've only included two charts at the moment for our AM3 results. We DID test the game at all resolutions ranging from 800x600 through to 1600x1200, but we chose the two upper resolutions to report on. Note also that we are benchmarking with the commercial version of the benchmark, which allows us to use custom settings; the standard free download will only allow you to benchmark with either 8X AF enabled or no AF whatsoever. You'll find that most gamers with sub $200 cards will prefer some AF and minimal AA to preserve quality while maintaining decent framerates.

AquaMark 3 is a benchmark recommended by ATI for comparison testing against Nvidia, and no wonder - the FX5900 and FX5950 are performing at the level of a 1.5 year old Radeon 9700PRO when AA and AF are enabled! On the flip side, when neither feature is enabled, the FX cards absolutely dominate against the ATI cards. Based on these results we think it's worth breaking down how the FX and RADEON cards compare to one another when either of the 8XAF or 4X AA are enabled on their own, because again, most gamers will opt to turn on AF and possibly use 2X AA or no AA if they can run at higher game resolutions.

Halo PC Timedemo

Halo for PC was only released last month. Because it was completely rewritten with DirectX 9 support, it took a long time to port the game to the PC platform. Halo has been receiving some flak due to its apparently poor performance on even high end hardware - especially since its game engine ignores Anisotropic Filtering and Anti-Aliasing. We received beta test versions of the game from Microsoft several months back and we were shocked at the speeds we obtained at 1024x768 resolution.

Now, with the game in public release, most of you will find the performance to be still pretty demanding. It turns out that the game's use of pixel shader 2.0 causes a lot of grief for Nvidia FX owners. Thankfully, Nvidia scrambled to optimize their drivers, and version 52.16, which we use in our benchmarks, has some significant performance enhancements over the 45.XX series - the series used in most published benchmarks where the ATI cards totally demolished the NVIDIA cards on this test.

For our benchmarks we used the built in Timedemo function of Halo. The built in Timedemo functionality takes 4 in-game cutscenes and times how long it takes to render all the frames in the scenes. Tweakfactor has an article on tweaking and benchmarking Halo and their guide will allow you to compare your results against ours.

We're also using the patched 1.02 game. If you are comparing your benchmarks with ours you MUST use version 1.02 or above of the game. Any prior version has significantly different framerate reporting because the timedemo would examine memory consumption for every frame rendered... which adds a HUGE overhead. Note also that because the Halo Timedemo has to load different cut scenes in sequence, it is sensitive to system variations: hard drive, memory, and system performance all contribute in part to the results obtained in this benchmark.
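Conceptually, a timedemo-style benchmark just replays a fixed sequence of frames and divides frame count by wall-clock time, which is also why the pre-1.02 per-frame memory checks inflated the numbers so badly. A hypothetical sketch of the idea (names are ours for illustration, not Halo's actual internals):

```python
# Hypothetical sketch of a timedemo-style benchmark: render a fixed sequence
# of frames back to back, time the whole run, and report frames / seconds.
# This is illustrative only - not Halo's actual timedemo code.
import time

def run_timedemo(render_frame, frame_count: int) -> float:
    """Render frame_count frames and return the average framerate (FPS)."""
    start = time.perf_counter()
    for i in range(frame_count):
        render_frame(i)       # real code would draw cutscene frame i here
    elapsed = time.perf_counter() - start
    return frame_count / elapsed

# Stand-in workload so the sketch runs: burn a little CPU per "frame".
fps = run_timedemo(lambda i: sum(range(10_000)), 100)
print(f"average: {fps:.0f} fps")
```

Any extra per-frame work inside that loop (like the old memory check) lands inside the timed region, which is why it dragged the reported framerate down so much.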

For our testing we decided to examine Halo using both pixel shader 2.0 and pixel shader 1.4. The game itself can be configured to use all three pixel shader versions (the third being 1.1).

By default Halo uses MS's HLSL with pixel shader 2.0. From the above you can see that the 5950 is around 5% slower than the RADEON 9800PRO and about 10% slower than the overclocked 9800PRO at 1024x768. However, when you push up to 1600x1200, the 5950 pushes ahead of even the overclocked 9800PRO.

With shader 1.4 the 5950 has really close results in both resolutions, but interestingly enough here the overclocked 9800PRO seems to gain more from moving down to pixel shader 1.4 at the 1600x1200 resolution. I just find it interesting since it's widely assumed that the FX cards are the ones to gain the most out of moving away from the default shader 2.0 codepath.

Jedi Knight: Jedi Academy

While a new release, Jedi Academy is arguably one of the "older" architecture games in our list of 4 new benchmarks, since it is based on a heavily modified version of the Quake 3 Arena engine. I can assure you though that the graphics of Jedi Academy are very much in line with a late 2003 game title, especially with FSAA and AF turned on @ 1600x1200. In fact, at those maxed out settings only the higher end cards will give you decent framerates. The lighting effects for weapons (such as the super sweet dual lightsaber set ;) and the Anisotropic filtering of textures are very nice in Jedi Academy (you can see some of our screenshots if you want to check out the eye candy).

Our testing with Jedi Academy involves 2 custom timedemos recorded from actual gameplay. Like many other sites, we're beginning to record our own timedemos and not release them, for fear of the dreaded benchmark specific optimizations that companies have been accused of - so, these timedemos were recorded at our labs and are NOT available for download. The demos are free-for-all matches in 2 multiplayer maps: Taspir and Rift Sanctuary. We hope that using actual gameplay timedemos will reflect a more realistic performance comparison than using cutscenes, as we were going to do before we figured out how to benchmark this title properly.

The FX5950 absolutely demolishes the ATI cards in this benchmark. At first when we saw these results we were certain there was something suspicious going on, but after going through the timedemos it didn't look like there was any skimping on image quality on the FX cards. We'll continue to examine this in more detail to see if we uncover anything. What we DO know is that Nvidia has spent time heavily optimising Quake 3 engine rendering in their drivers, and that may have paid off here in Jedi Academy (keep in mind that while the JK: JA game engine is heavily modified, it IS still based on the Q3A engine).

What we DID notice was that Catalyst 3.8 was not 100% stable in our timedemo tests, and we wonder if there are bugs in 3.8 that might hinder the performance of the RADEON cards in Jedi Knight.

Final Fantasy XI Benchmark Version 2.0

The Final Fantasy Benchmark is a simple flyby sequence to determine how your system will perform under default settings of the game. The sequence is a flyby through the Yuhtunga Jungle and features large landscapes, mountains, valleys, and jungles. While this is a flyby there are actual in game characters and maps. We’re not sure how the scores are tabulated but there IS a correlation between system performance and the scores obtained. Note also that the benchmark ignores AA, so we run the test with application preferences set in drivers, rather than force Anti-Aliasing or Anisotropic filtering.

The differences between each card are so slight in this benchmark that it's hard to value the results too highly. It does however agree with the trend that we've seen in our other benchmarks.

For those still brave enough to trust the older benchmarks, I've included some of them below. I'll be uploading UT2003 benchmarks as well (our charts didn't turn out properly tonight), though we've likely left 3DMark 2001SE and 2003 behind altogether amidst worries over their current relevance.

The 5950 overtakes the ATI cards, but again we're not certain you can trust these benchmarks at face value anymore.

What About IQ?

Due to our limited time we didn't do an in-depth Image Quality comparison. I think everyone knows the story well by now, and since the NV38 and R360 really only feature higher core and memory speeds, it's no huge surprise that the image quality picture remains unchanged: ATI's FSAA is still superior to Nvidia's, to the point where not only is ATI's 4X FSAA better than Nvidia's 4X FSAA, but ATI's 2X FSAA is getting close to looking nearly as good as Nvidia's 4X AA. Not to mention that ATI's 6X AA mode is simply superior to anything Nvidia has, for those willing to take the huge performance hit.

We again stand by our opinion that the image quality of the two cards is so similar in so many scenarios that you have to look long and hard to find most differences. Most reviewers spend a lot of time finding just the right scenes where they can zoom in and show definitively that there's a quality difference. This doesn't mean that IQ comparisons are not useful... the most important part of IQ analysis is that reviewers can keep vendors honest by identifying cases where drivers are unintentionally or intentionally leaving out details or features.

For the most part the most jarring differences are surfaces and edges where anisotropic filtering and anti-aliasing (respectively) are not properly applied - on one card then, you'd see a nice anti-aliased edge or a very nice filtered texture, whilst on the other card you will see jaggies and indistinct textures.

Conclusions

Lately it's harder to draw wide ranging conclusions from a narrow set of benchmarks like ours. We hope to add more benchmarks to our suite, including MS Flight Simulator 2004 and Max Payne 2: The Fall of Max Payne. Microsoft sent us Flight Sim 2004, but our system can't seem to recognize CD4 properly, so we're holding off on that title until later. Max Payne 2 is shaping up with some nice graphics and we'll be looking closely at that game. I think most of you have noticed by now that Anandtech uses a suite of 10 to 15 games or more for their reviews. I think this is the way most reviews will go as reviewers become more and more aware of the dangers of limiting themselves to too small a sample set of games.

One area where we'd like to explore further is Tomb Raider: Angel of Darkness. You're going to see a LOT of sites benchmarking with this game because ATI highly recommends it to their reviewers. Tomb Raider: AOD is an interesting title because previous benchmarks show the FX5900 to have nearly HALF the framerates as the 9800XT. Yikes. So in a follow up article we're going to include AOD in our suite just to see just how things have changed with the 52.16 Detonator drivers and the FX5950. Don't forget too that ATI is still looking to send us a 9800XT for a real head to head comparison between the two cards.

As far as conclusions go, it's very obvious that the FX5950 is a step in the right direction for Nvidia. The 0.13 micron process allows them to push higher speeds, and their research into cooling technologies has at least rendered their cooling solution a very quiet, if still large, device. The FX5950 is definitely a performance demon that can deliver as much as a 15% performance increase over the FX5900, though for the most part it only beats its older counterpart by around 5%. Compared to the 9800 series of cards we can only draw conclusions against the 9800PRO and the 9800PRO overclocked to 412MHz/702MHz; in that comparison the FX5950 can't be said to be doing too badly. Even AquaMark 3, an ATI endorsed benchmark, shows the FX5950 to be MUCH stronger than our 9800PRO with AA and AF disabled, though the story changes dramatically once those features are enabled.

Perhaps Nvidia lovers will take heart in the fact that our Jedi Academy results show the FX5950 unquestionably leaving its competition in the dust in ALL tested scenarios, with or without AA/AF, and at any resolution. The lead the FX5950 had in those benchmarks is nearly as sweet a victory for Nvidia as ATI's 9800PRO lead over the Nvidia FX5900 in previously published Tomb Raider: Angel of Darkness benchmarks.

With all this in consideration, we'll withhold final judgement until we've had a chance to put the FX5950 AND the 9800XT through more benchmarks. Until then, we can say that the FX5950 does not disappoint, and that it will make a good stocking stuffer for many gamers this season.


Copyright Neo Era Media, Inc., 1999-2014.
All Rights Reserved.
