GeForce FX 5900NU vs ATI 9800 Pro 128MB
<div class="IPBDescription">which to purchase?</div> Hey guys, I've recently been planning to buy a new video card to replace my current one, a horrific GeForce4 MX420. Right now I'm facing a dilemma: with a $200-$250 budget, I'm debating whether to buy a GeForce FX 5900 non-ultra 128MB or an ATI 9800 Pro 128MB (both of which are in about the same price range). I've been reading very mixed reviews on which dominates the other, and I'm getting so confused <!--emo&???--><img src='http://www.unknownworlds.com/forums/html//emoticons/confused.gif' border='0' style='vertical-align:middle' alt='confused.gif' /><!--endemo--> . Anyone have an idea which one I should go for, and why?
Comments
NU = non-ultra... I think.
If you don't know what it means, then don't buy a card based on someone saying it overclocks well, as I doubt you'd want to do that anyway. I would personally go with the 9800 Pro; I've heard lots of horror stories about the 5900s.
Well, regardless of whether I plan on overclocking my video card or not, the GeForce FX 5900 still has higher clock speeds, which would imply that it performs better and that I should buy it for that reason. Sigh <!--emo&???--><img src='http://www.unknownworlds.com/forums/html//emoticons/confused.gif' border='0' style='vertical-align:middle' alt='confused.gif' /><!--endemo--> , I'm so confuzzled.
<span style='font-size:8pt;line-height:100%'>Minor edit by Marik_Steele to make URL work</span>
There are two types of 9800SE cards: one based on the 9500 chipset and one based on the 9700 chipset. Obviously, the 9700-based 9800SE is better. Why? It has 8 pixel pipelines (on a 256-bit memory bus) with 4 of them locked, while the 9500-based card has only 4 pipelines (128-bit), period. With some softmod drivers (just Google "softmod") you can unlock those 4 extra pipelines and have a 9800 Pro for $50-$100 less than you would otherwise have paid.
You have to make sure you've got the right card, one based on a 9700, though. <a href='http://www.newegg.com/app/ViewProductDesc.asp?description=14-131-236&depa=1' target='_blank'>Here</a> is a 9700-based 9800SE with 8 pixel pipelines. You can tell it has all 8 because the 4 memory chips sit on either side of the GPU.
If you've got the money for a 9800 Pro, by all means get it. If you're looking for the best card at the best price, you might want to check this one out. It's not without risk: you might end up with a card where the other 4 pipelines aren't just disabled but actually broken. Even then, you could always send it back and grab another card.
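To put some rough numbers on why unlocking those pipelines matters, here's a quick back-of-the-envelope sketch. The clock figure is just an assumption for illustration, not any card's quoted spec; check your own card's specs before trusting it.

```python
# Rough theoretical pixel fillrate: pipelines x core clock.
# The core clock below is an assumed figure for illustration only.

def pixel_fillrate_mpix(pipelines: int, core_mhz: float) -> float:
    """Theoretical fillrate in megapixels per second."""
    return pipelines * core_mhz

core_mhz = 325.0  # assumption; substitute your card's actual core clock

locked = pixel_fillrate_mpix(4, core_mhz)    # 9800SE as shipped, 4 pipes
unlocked = pixel_fillrate_mpix(8, core_mhz)  # after a successful softmod

print(f"4 pipes: {locked:.0f} Mpix/s, 8 pipes: {unlocked:.0f} Mpix/s")
```

Whatever the actual clock is, unlocking 4 extra pipelines doubles the theoretical fillrate, which is why a working softmod is such a bargain.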
I've got another video-card-related question too. If you have a card with external power, does it plug directly into the PSU, or do you plug it into one of the connectors on the wires coming out of the PSU? I've got a few open connectors on the wires coming out of my PSU, and I'm just wondering if I have to rip open the box surrounding the PSU in order to install my new video card.
Sadly?
Go with the ATI. They are much better than Nvidia. Also, the Nvidia card will take up one of your PCI slots because the card is so huge.
Games out today are pushing the GF4 past its limits if you want the absolute highest quality graphics. Deus Ex 2, Far Cry, Battlefield: Vietnam, Painkiller, and soon Stalker, DOOM 3, and HL2 are all very hardware-taxing if you want them to look drop-dead gorgeous. Which I do. Which is why I'm looking to replace my obsolete, aging Ti4400... :P
<!--QuoteBegin--></div><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b></td></tr><tr><td id='QUOTE'><!--QuoteEBegin-->Games out today are pushing the GF4 past its limits if you want the absolute highest quality graphics. Deus Ex 2, Far Cry, Battlefield: Vietnam, Painkiller, and soon Stalker, DOOM 3, and HL2 are all very hardware-taxing if you want them to look drop-dead gorgeous. Which I do. Which is why I'm looking to replace my obsolete, aging Ti4400... <!--emo&:p--><img src='http://www.unknownworlds.com/forums/html//emoticons/tounge.gif' border='0' style='vertical-align:middle' alt='tounge.gif' /><!--endemo--><!--QuoteEnd--></td></tr></table><div class='postcolor'><!--QuoteEEnd-->
I'll give that useless POS a home if you want... <!--emo&;)--><img src='http://www.unknownworlds.com/forums/html//emoticons/wink.gif' border='0' style='vertical-align:middle' alt='wink.gif' /><!--endemo-->
But if not, get the 9800 Pro; it's a lot better.
<!--QuoteBegin--></div><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b></td></tr><tr><td id='QUOTE'><!--QuoteEBegin-->Well, regardless of whether I plan on overclocking my video card or not, the GeForce FX 5900 still has higher clock speeds, which would imply that it performs better and that I should buy it for that reason. Sigh <!--emo&???--><img src='http://www.unknownworlds.com/forums/html//emoticons/confused.gif' border='0' style='vertical-align:middle' alt='confused.gif' /><!--endemo--> , I'm so confuzzled.<!--QuoteEnd--></td></tr></table><div class='postcolor'><!--QuoteEEnd-->
If anything, higher clock speeds could mean worse: higher speed = higher temperature = bad. Just because it isn't clocked as high doesn't mean the 9800 Pro doesn't score as well; go to <a href='http://www.tomshardware.com' target='_blank'>Tom's Hardware</a> and search around for benchmarks. And all these people telling you to buy cards based on softmodding: ignore them. The chance that you actually get a card that CAN be softmodded, combined with the chance that you don't kill it (and even attempting a softmod will probably void the warranty), is extremely low. Even if the cards DO unlock, they definitely weren't designed to run with 8 pixel pipelines at higher clock speeds, which fries them that much quicker.
The GeForce4 MX420 is a GeForce4 in name only. It's basically a GeForce2.
Some people like anti-aliasing and anisotropic filtering. A GF3 will only run very simple pixel shaders (PS 1.1 only, I think), and it is quite slow, which will hurt in most coming games, like HL2, Doom 3, Far Cry and Stalker.
Clock speed means very little when you don't know how much gets done in one clock. A simple card like a GeForce4 MX440 can run at quite high frequencies, like 275 MHz, and still get much, much less done per clock than a normal GeForce4 at the same frequency.
The memory clock is not all that relevant either; compression and bandwidth-saving techniques are usually in play, and it is important to know how good the card is at using its bandwidth before you can make any judgement at all on which has more usable bandwidth.
nVidia cards (this current generation; things may well change with the NV40) are quite bad at running pixel shaders, the tiny programs run for each pixel on the graphics card. These can do some very beautiful things, such as water that refracts light realistically, and they will become more and more important in future games. Nvidia uses lower-precision shaders in many games because they cannot run them fast enough otherwise (this may have an impact on image quality, though not always).
Right now ATi has nicer anti-aliasing and worse anisotropic filtering (it is angle-dependent: surfaces at odd angles get less filtering, to save transistors).
Nvidia's strong point right now is raw fillrate in older games and OpenGL games (they have always had a lovely OpenGL driver).
Please define best and worst. Seriously, you don't back up a single claim in there. ATI and nVidia both have good drivers; it's a matter of which card outperforms the other.
If you use the developer version of 3DMark you can "go off the rails" and look at the surroundings, and what do you know: HOM (hall-of-mirrors artifacts, meaning something isn't being rendered). Adding static clip planes is not an optimization you can do in a game; its only effect is to artificially inflate nV's score by throwing out geometry before you could possibly know whether it needs to be rendered. A valid optimization in a benchmark is one that you can do in a game; no precomputing is allowed. 3DMark has a few driver versions that they accept for nV, and even those are STILL cheating by Futuremark's own admission (lower shader precision), but at least such optimizations can be used in a game. Degrading image quality for the sake of performance in a benchmark is quite a low thing to do.
ATi did NOT whine; <a href='http://www.extremetech.com/' target='_blank'>ExtremeTech</a> did.
What's the use of a benchmark if one IHV does every cheat and optimization they think they can get away with and the other does not? (ATi did some PS instruction reordering which gained them a percent or two; they admitted it and removed it, even though it is a valid optimization that is mathematically equivalent to the original shader and the kind you might find in a game.) Nvidia lied and said their cheats were some weird driver bugs (you don't accidentally add static clip planes just outside the view at all times). Nvidia gained roughly 30% from their "optimizations".
Also, look at Nvidia in ShaderMark or the HL2 preview and you'll see that full-precision pixel shaders significantly hurt performance on nV hardware.
Nvidia doesn't even appear to use full trilinear filtering in ANY DX game. Instead it forces a pseudo-trilinear mode (affectionately referred to as "brilinear" filtering) which cannot be overridden by the user. I certainly wouldn't mind having it as an option, as it does not significantly reduce image quality, but forcing it is a bit iffy.
John Carmack has said that he thinks developers will treat NV3x cards as DX8 cards, and he appears to be correct (he himself is treating them as DX8 cards, and so are Halo, HL2, Tomb Raider: AOD (urgh, horrid game), etc.). ATi performance in Doom 3 was twice as good as nV's when full precision was used; nV had better performance than ATi when mainly FX12 (12-bit integer) shaders were used. The lack of precision won't matter much to Doom 3, according to Carmack, but it may well matter to other games. It certainly will if you use high-dynamic-range rendering, where integer shaders are right out as well.
Gabe Newell has also said a fair few things about nV performance in HL2; they allegedly spent 6 times as much time on nV hardware-specific optimizations as they did for ATi, because mid-range NV3x hardware had such horrid performance.
Personally, I wouldn't count on nV putting "optimizations" in their drivers, or on developers creating a lower-precision path, for every future game that uses full-precision PS 2.0 and that I might want to play.
Checking reviews recently, it would also appear that nV's drivers cause more trouble than ATi's.
<!--QuoteBegin-Soylent green+--></div><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> (Soylent green)</td></tr><tr><td id='QUOTE'><!--QuoteEBegin-->Facts<!--QuoteEnd--></td></tr></table><div class='postcolor'><!--QuoteEEnd-->
Owned
Their code paths are aging, as they haven't redesigned their actual core in a very long while (as in a complete overhaul, rather than just slapping new modules onto the old design, which is more or less what they've been doing).
As an example, the GFFX may be clocked at 800 MHz... but if it takes four cycles to finish an operation, whereas the ATI card can do four of those operations in ONE cycle, an ATI card at 200 MHz would be four times FASTER than the 800 MHz 'beast'.
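That arithmetic, spelled out using the hypothetical numbers above (these clock figures are purely illustrative, not real specs for either card):

```python
# Effective throughput = clock rate x operations completed per clock.
# Clock figures are the hypothetical numbers from the post, not real specs.

def ops_per_second(clock_mhz: float, ops_per_clock: float) -> float:
    """Operations per second given clock (MHz) and work done per clock."""
    return clock_mhz * 1e6 * ops_per_clock

card_a = ops_per_second(800.0, 1 / 4)  # 800 MHz, one op every 4 cycles
card_b = ops_per_second(200.0, 4)      # 200 MHz, 4 ops per cycle

print(f"800 MHz, 1 op per 4 cycles: {card_a / 1e6:.0f} Mops/s")
print(f"200 MHz, 4 ops per cycle:   {card_b / 1e6:.0f} Mops/s")
print(f"ratio: {card_b / card_a:.0f}x")
```

So the 'slower-clocked' card comes out four times faster, which is exactly why megahertz alone tells you nothing.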
It's spec-sheet BSing to get non-savvy customers to buy their inferior product.
Another funny thing is that nVidia cards struggle to ACTUALLY run Pixel Shader 2.0 effects. The driver frequently translates PS2.0 instructions it's given back down to PS1.4-level, and THEN renders them. That translation only slows things down further.
nVidia cards are slower. They also have WORSE image quality, especially in FSAA and Anisotropic tests. They're following the path of 3Dfx with their 'Cg' language. In short, they need to shape up and stop relying on rabid fanboys to buy their crappy cards, or they'll be washed up pretty quickly. Deal with it.
Now, on to which card to buy. The 9800 Pro is an excellent card, and much faster than the 9600XT. If you can get your hands on one for only $200, DO SO. Make sure it's a genuine Built By ATI card and not a third-party one; I've never had a problem with an official card, so that's all I'll buy. I've had friends who went for the off-brand, and they've had... problems.
And as to what someone said before... the 9800 non-pro is NOT a four-pixel pipeline card. The 9600 (non-pro, Pro, and XT) is a 4-pipeline card. The 9500 non-pro was a four-pipe card, but the 9500 Pro (which I own) is an 8-pipe card, just like the 9700... just clocked a little lower.