OpenGL 2.0 Officially Launched

SkulkBait Join Date: 2003-02-11 Member: 13423 · Members
http://www.theregister.co.uk/2004/08/10/opengl_2/

It's about damn time.

Comments

  • Jim_has_Skillz Join Date: 2003-01-19 Member: 12475 · Members, Constellation
    Ah, this is nice to see.
  • Swiftspear Custim tital Join Date: 2003-10-29 Member: 22097 · Members
    So that means that existing OpenGL games will look prettier? Or that new OpenGL games are capable of doing more?
  • SkulkBait Join Date: 2003-02-11 Member: 13423 · Members
    edited August 2004
    The latter... Sorta. Actually IIRC it's a big overhaul to parts of the API, vendor-specific extensions are gone for the most part, n' stuff like that. So I guess it just really makes it easier for programmers to take advantage of OpenGL features that were already there.
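    (For the curious, here's roughly what that looks like in practice. This is a minimal sketch, assuming an OpenGL 2.0 context is already current and the entry points have been loaded through your platform's extension mechanism or a loader like GLEW; build_program and the vs_src/fs_src strings are just placeholders. The point is that the GLSL shader calls are plain core functions in 2.0, no vendor or ARB suffixes needed.)

        /* Sketch: compiling and linking a GLSL shader pair with the OpenGL 2.0
         * core entry points (formerly the ARB_shader_objects extension). */
        #include <GL/gl.h>

        GLuint build_program(const char *vs_src, const char *fs_src)  /* placeholder helper */
        {
            GLuint vs = glCreateShader(GL_VERTEX_SHADER);     /* was glCreateShaderObjectARB */
            GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(vs, 1, &vs_src, NULL);
            glShaderSource(fs, 1, &fs_src, NULL);
            glCompileShader(vs);
            glCompileShader(fs);

            GLuint prog = glCreateProgram();                  /* was glCreateProgramObjectARB */
            glAttachShader(prog, vs);
            glAttachShader(prog, fs);
            glLinkProgram(prog);
            glUseProgram(prog);                               /* make it current for drawing */
            return prog;
        }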
  • Har_Har_the_Pirate Join Date: 2003-08-10 Member: 19388 · Members, Constellation
    edited August 2004
    nvidia should enjoy this
  • Quaunaut The longest seven days in history... Join Date: 2003-03-21 Member: 14759 · Members, Constellation, Reinforced - Shadow
    They won't. They just barely got OpenGL 1.5 support 2 weeks ago.
  • Geminosity :3 Join Date: 2003-09-08 Member: 20667 · Members
    Wonder if it's nicer to program than the old OGL1. I hated it ><
    Now if only I hadn't lost my DirectX projects when my harddrive went 'kablooie' =/
  • DOOManiac Worst. Critic. Ever. Join Date: 2002-04-17 Member: 462 · Members, NS1 Playtester
    Better a few years late than never! woot! :D

    I hope more devs go back to coding in OpenGL instead of Direct3D. Not only is it platform independent (and you aren't just being Microsoft's ho), it just plain looks better.
  • Windelkron Join Date: 2002-04-11 Member: 419 · Members
    k, I didn't understand a word of that article :D
  • Talesin Our own little well of hate Join Date: 2002-11-08 Member: 7710 · NS1 Playtester, Forum Moderators
    Windelkron, short version, 'blah blah blah, Doomeh is a nub.' :D

    It generally means that OpenGL is putting up a fight to stay current, rather than fading away. Adding in new features, making existing features easier to code with, and so on.

    Though nVidia won't really benefit too much... even in OpenGL, their performance is slow unless they have a custom path. Especially given that their new cards don't 'do' any particular open standard on-board, favoring their closed-source 'Cg' language instead.
    This is why iD had to write a custom path for the nVidias (and is why they run faster... they're doing less than a quarter the color-calculations!)... if they are forced to use the standard OpenGL path, the only card that breaks 30fps is the 6800 Ultra. The FXes hover between 7 and 15fps, when using *actual* OpenGL with full colour calcs.
    This is also why you get horrible image corruption on nVidia cards under D3... too many corners cut, numbers sacrificed, in the name of 'we have a higher framerate!'
  • NumbersNotFound Join Date: 2002-11-07 Member: 7556 · Members
    edited August 2004
    I was reading about why Nvidias were so much faster in Doom3... If I understand right, it went something like this:

    An ATI card processes lighting in two steps. First, it does an 8x8 pixel matrix and tests it for lighting. If any part of that 8x8 matrix needs lighting, it breaks it down into 4x4 pieces and sends those through its rendering pipes.

    An Nvidia card does only a 4x4 matrix. If that 4x4 matrix needs lighting, it sends it down the pipe.


    The ATI card CAN be a lot faster, but in a game like Doom3, which has tons of lighting effects, the chance of getting an 8x8 matrix that needs no lighting drops a lot, so it ends up having to process much more data.

    Hopefully I read the article I got this out of right... It got a bit technical and I could be totally off... Original article here: http://www.3dcenter.org/artikel/2004/07-30_english.php
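    (If it helps, the rejection scheme described above can be sketched in C. This is purely a toy illustration of the idea from the article, not driver or hardware code; needs_lighting() and shade_tile() are made-up stand-ins for the hardware's own tests.)

        #include <stdbool.h>

        bool needs_lighting(int x, int y, int w, int h);   /* stand-in: any lit pixel in this block? */
        void shade_tile(int x, int y, int w, int h);       /* stand-in: push the block down the pipes */

        /* "ATI-style": test a whole 8x8 block first; only refine to 4x4 tiles if it is lit. */
        void process_block_coarse_then_fine(int bx, int by)
        {
            if (!needs_lighting(bx, by, 8, 8))
                return;                                    /* whole 8x8 block rejected in one test */
            for (int y = 0; y < 8; y += 4)
                for (int x = 0; x < 8; x += 4)
                    if (needs_lighting(bx + x, by + y, 4, 4))
                        shade_tile(bx + x, by + y, 4, 4);
        }

        /* "nVidia-style": no coarse early-out, just test each 4x4 tile directly. */
        void process_block_fine_only(int bx, int by)
        {
            for (int y = 0; y < 8; y += 4)
                for (int x = 0; x < 8; x += 4)
                    if (needs_lighting(bx + x, by + y, 4, 4))
                        shade_tile(bx + x, by + y, 4, 4);
        }

    When almost everything on screen is lit, as in Doom3, the coarse 8x8 test almost never rejects anything, so it becomes pure overhead; that matches the "chance of an 8x8 block needing no lighting drops a lot" point above.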


    Also, could I see an example of these "horrible image corruptions?" You're the first person I've heard say that such a thing exists.
  • Soylent_green Join Date: 2002-12-20 Member: 11220 · Members, Reinforced - Shadow
    edited August 2004
    QUOTE: This is why iD had to write a custom path for the nVidias (and is why they run faster... they're doing less than a quarter the color-calculations!)... if they are forced to use the standard OpenGL path, the only card that breaks 30fps is the 6800 Ultra.

    The nv3x path has been scrapped, doesn't exist, gone, demolished, finito. Apparently nVidia's drivers fixed a lot of their problems (if one was feeling a bit malicious you could say that the nv3x path isn't gone, it just "moved into their driver" and out of the Doom 3 engine :P). JC decided to scrap it and just use ARB2 because it's less bothersome to maintain only one code path.
  • Talesin Our own little well of hate Join Date: 2002-11-08 Member: 7710 · NS1 Playtester, Forum Moderators
    edited August 2004
    404, not true. The nVidia cards are faster because they cut the full, to-spec 24-bit color calculations down to 12- or even 8-bit color. They're doing between half and a quarter of the work at any given time.

    Also, the nVidias can't use alpha-substitution normal map compression. They have to use their own, which does NOT result in a pretty final product... normal maps don't like being compressed on their own. In the standard OpenGL track though, the alpha channel of a texture is not used on a normalmapped surface. So they figured out that they could strip the alpha channel and wedge the normal map information into the newly-emptied space. Voila, beautiful (or at least more so than standalone) normal compression.
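    (A rough sketch of that packing idea, as I understand it: keep only two components of the normal in the texture, with one of them in the otherwise-unused alpha channel, and rebuild the third from the fact that a normal has unit length. Exactly which channels get used is an assumption here, and real engines do the unpack in the fragment program rather than in C.)

        #include <math.h>

        typedef struct { unsigned char r, g, b, a; } Texel;   /* illustrative RGBA8 texel */

        /* Pack: stash normal X in alpha and Y in green; R/B stay free for other data. */
        Texel pack_normal(float nx, float ny)
        {
            Texel t = {0, 0, 0, 0};
            t.a = (unsigned char)((nx * 0.5f + 0.5f) * 255.0f);
            t.g = (unsigned char)((ny * 0.5f + 0.5f) * 255.0f);
            return t;
        }

        /* Unpack: recover X and Y, then Z = sqrt(1 - X^2 - Y^2) since |N| = 1. */
        void unpack_normal(Texel t, float *nx, float *ny, float *nz)
        {
            *nx = (t.a / 255.0f) * 2.0f - 1.0f;
            *ny = (t.g / 255.0f) * 2.0f - 1.0f;
            float zz = 1.0f - (*nx) * (*nx) - (*ny) * (*ny);
            *nz = zz > 0.0f ? sqrtf(zz) : 0.0f;
        }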

    The most common nVidia glitch is 'shadow banding'. This results from the aforementioned cut color calculations. The card doesn't use full precision (far from it, in fact) and so 'misses' bits here and there. It's the difference between the dithering you see in a 16-bit screenshot as compared to a 32-bit screeny (sketched below). This also takes place on a bit of the specularity, though that doesn't stick around long enough, nor is it widespread enough, to be particularly noticeable.
    Mis-compression of textures (garbled textures) is slightly rarer, but still somewhat common... if you know what you're looking for. Usually it'll only be one type of texture, or just a bit here or there. But some people have reported that the Imps will occasionally be mistextured, either as flat black or as some combination of a different in-game texture (the one that was wall-textured was funny as hell).

    Also, when anisotropy or antialiasing is enabled, further dithering glitches pop out.
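    (The precision point a couple of paragraphs up, sketched: quantizing a smooth gradient to fewer bits leaves fewer distinct output levels, and those big steps are what show up as bands. Toy code only, not anything the hardware actually runs, and the bit counts are just for illustration.)

        #include <stdio.h>

        /* Snap a 0..1 value to the nearest representable level at 'bits' of precision. */
        float quantize(float v, int bits)
        {
            float levels = (float)((1 << bits) - 1);
            return (float)(int)(v * levels + 0.5f) / levels;
        }

        int main(void)
        {
            for (int i = 0; i <= 10; i++) {
                float v = i / 10.0f;
                /* The low-precision column moves in coarse jumps -- the visible "bands". */
                printf("%.3f -> hi-precision %.4f   low-precision %.4f\n",
                       v, quantize(v, 8), quantize(v, 3));
            }
            return 0;
        }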



    The short version is, the nVidia cards are faster because they only have to do between 25% and 50% of the work of an ATi. Their customized codepath cuts corners like you wouldn't believe, sacrificing image quality for the 'we're faster!' factor that draws in the generic target audience... aka, uninformed buyers who will buy anything that has the largest number slapped on it.

    The ATi cards are following the OpenGL spec. If you force an nVidia to run in the standard, un-optimized path, they run between 40% and 70% SLOWER than an ATi. :)

    (edit) And no, the nVidia-specific codepath in Doom3 still exists... as methods to force it to use the standard codepath ALSO exist. :) (/edit)
  • Mouse The Lighter Side of Pessimism Join Date: 2002-03-02 Member: 263 · Members, NS1 Playtester, Forum Moderators, Squad Five Blue, Reinforced - Shadow, WC 2013 - Shadow
    QUOTE: OpenGL 2.0 also adds support for point sprites, "which replace point texture coordinates with texture coordinates interpolated across the point". Essentially, points are treated as textures and textures as points, enabling some interesting particle effects.

    Could someone elaborate?
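    (Roughly: with point sprites, a single GL_POINTS vertex gets rasterized as a screen-aligned square, and with COORD_REPLACE enabled the full 0..1 texture is stretched across that square, so one vertex per particle carries a whole textured billboard. A minimal sketch, assuming an OpenGL 2.0 context and headers/loader that expose the 2.0 enums; particle_tex and positions are placeholders defined elsewhere.)

        #include <GL/gl.h>

        /* Sketch: draw one textured sprite per GL_POINTS vertex. */
        void draw_particles(GLuint particle_tex, const GLfloat *positions, GLsizei count)
        {
            glEnable(GL_POINT_SPRITE);                            /* points become textured quads */
            glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); /* interpolate texcoords across the point */
            glBindTexture(GL_TEXTURE_2D, particle_tex);
            glEnable(GL_TEXTURE_2D);
            glPointSize(16.0f);                                   /* sprite size in pixels */

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, positions);
            glDrawArrays(GL_POINTS, 0, count);                    /* one vertex per particle */
            glDisableClientState(GL_VERTEX_ARRAY);

            glDisable(GL_POINT_SPRITE);
        }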
  • Soylent_green Join Date: 2002-12-20 Member: 11220 · Members, Reinforced - Shadow
    Nope, the nv3x path does not exist _in_ Doom 3. If nv3x chips are in fact using fx12 shaders, they are doing it with the help of some naughty app detection in their drivers, nothing else.

    QUOTE:
        I'm hoping you can clear up some apparent confusion about DOOM3's rendering paths.

        1) There is word that you have removed the NV30-specific rendering path
        2) The reason for the above is apparently because NVIDIA's drivers have improved to the point where NV3x hardware are running the standard ARB2 path at about equal speed with the NV30-specific path

        Could you say if the above is true?

        Correct.

        Also, based on information you provided to the public (via your .plan files as well as your interviews with us), has there been any significant changes made to the ARB2 path where quality is sacrificed for the sake of performance?

        I did decide rather late in the development to go ahead and implement a nice, flexible vertex / fragment programs / Cg interface with our material system. This is strictly for the ARB2 path, but you can conditionally enable stages with fallbacks for older hardware. I made a couple demo examples, and the artists have gone and put them all over the place...

        What would be the best way to benchmark the game on various hardware? This is actually quite a problem for a site like ours. Given that there are different rendering paths as well as possibly drivers doing difficult-to-verify call traces (perhaps some shader replacements and all those sorts of things), how would we be able to present comparable performance data analysis amongst different hardware? Obviously there are two ways to look at this : one would be from the angle of gamers who are looking to upgrade their pre-DX9 video cards, another would be for those who are already on a DX9-class video card that may be tempted to change to one that runs the game better than the one they have.

        Dumping the NV30 path makes this much easier. All the cards anyone is really going to care about benchmarking will use the ARB2 path.

    http://www.beyond3d.com/forum/viewtopic.php?t=13401&view=previous
  • Talesin Our own little well of hate Join Date: 2002-11-08 Member: 7710 · NS1 Playtester, Forum Moderators
    Mm. I won't dispute your source... another gaming forum, rather than a press release or any officially-verifiable publication.

    However, the nVidia cards are STILL running with cut back-end color calculations. And this article is the first I've heard about the nVidia-specific path being dropped.
  • Soylent_green Join Date: 2002-12-20 Member: 11220 · Members, Reinforced - Shadow
    edited August 2004
    QUOTE: Mm. I won't dispute your source... another gaming forum

    No, a reputable 3D forum which has had contact with Carmack before and since; he was asked to verify it, so it's very unlikely to be a fake, as that would require a great deal of corruption.

    QUOTE: However, the nVidia cards are STILL running with cut back-end color calculations. And this article is the first I've heard about the nVidia-specific path being dropped.

    It's because no one has any strong feelings about it. nVidia requiring a custom path is likely to evoke much discussion and bad blood wherever fanboys reside, and make a big stink that's pretty hard to ignore.
  • Talesin Our own little well of hate Join Date: 2002-11-08 Member: 7710 · NS1 Playtester, Forum Moderators
    When you sacrifice image quality to prop up flagging speed and give a false impression to the consumer, I get annoyed.

    After all, it isn't the first time they've used... creative marketing.
  • TommyVercetti Join Date: 2003-02-10 Member: 13390 · Members, Constellation, Reinforced - Shadow
    Yet another reason that I'm buying that X800 XT PE over the 6800.
  • Browser_ICE Join Date: 2002-11-04 Member: 6944 · Members
    Interesting thread guys.

    Thank you.