Finally picked this up - a few questions.
Mkilbride
Join Date: 2010-01-07 Member: 69952Members
So anyway, I know it's a beta, and I expected it to be glitchy; there's no way past that, I get that.
But I was wondering if there are any tweaks or fixes the community may have come up with?
Because I know my system isn't top of the line anymore:
Q6600 @ 3.4GHZ
GTX280 Superclocked
4GB PC8000 (1000MHz) RAM
Windows 7 64-bit
But even on low, I am averaging around 25 FPS on a 7-8 person server, and going from the highest settings to the lowest doesn't change my FPS at all. Yes, I understand it's a beta, but I was hoping there was some way past this. I was hoping that by going to low I could average at least 40-45 FPS, which is acceptable. I really hope this gets optimized better, because on low it looks ass ugly. :P But yeah, I also noticed you guys use PhysX, which surprised me! I didn't think any indie games would use that. Guess I'll need another GPU for PhysX.
Thanks for any help offered.
Comments
Wait, are you guys telling me this game doesn't have multi-core support? In 2010? Usually the engine needs to support it from the ground up, not have it tacked on... Yikes. :P Can't wait for that update.
Yeah, can't OC any further, but I'll check out that thread, thanks.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->r_shadows 0 - turns off dynamic shadows which improves performance ("r_shadows" turns them back on, NOT "r_shadows 1")<!--QuoteEnd--></div><!--QuoteEEnd-->
Fixed that for you :P
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->As someone who has programmed/designed multi-threaded software (although not games, thankfully!), I can vouch for its difficulty, or at least the difficulty of doing it "right".
There are two reasons one would use parallelism (multi-threading). One, it lends itself naturally to the design. If your program needs to do two separate tasks at the same time, then a multi-threaded approach is the natural one. For example, if you need to "design an algorithm to bake two cakes", it's pretty easy to say "separate into two threads/chefs, each baking their own cake". It actually becomes more difficult, from the design side, to keep that single-threaded, e.g. "how to bake two cakes at the same time with only one chef", where you have to juggle multiple tasks at once and balance your time between them.
The other reason is performance. If you have a single task, but hardware that supports multiple lines of execution, you can finish that task faster by breaking up the work so that multiple things can be done at once. It's not as easy as the first case, because the design usually doesn't naturally lend itself to being split up; it is a single task, after all. So you have to get creative about how you are willing to split it. An example would be single-photo rendering (I saw a demo last year from AMD using dual dual-core Opterons). Naturally you would just render out the whole thing in one pass, but to take advantage of all the cores, they broke it up into sections, so one core would work on the top and another on the bottom at the same time.
<b>The sad part is that for gaming, it's often the second case.</b> In fact, some things just naturally CAN'T be parallelised at all. (E.g. baking a single cake: no matter what you do, you HAVE to mix it before you can bake it. There is no way to split it up so that you can bake the cake before you mix the ingredients, so parallelism (an extra chef) wouldn't help in that situation.)
It is difficult to change your design to figure out how to split up tasks and divide the work evenly between them. Sometimes it's even impossible.
All this is just talk about design, not even touching on implementation. Not only is it tricky to design multi-threaded software, it's tricky to program it. There are lots of little tricks/gotchas you must be mindful of (e.g. two threads trying to change the same data at the same time) when doing the actual implementation, much more so than if you didn't worry about parallelisation. If you have two people doing two things at once, you have to make sure they are coordinated, don't get in each other's way, share the equipment properly, etc. (e.g. two chefs in one kitchen).
We are no doubt going in this direction; it's cheaper, performance-per-price-wise, to make two slower cores rather than one faster one. But it's not all roses. Some tasks can't be made to use this at all, or show minimal benefit if they do, and those that do benefit are quite often very tricky.
I don't envy those game programmers/designers.
Aggies<!--QuoteEnd--></div><!--QuoteEEnd-->
and
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->From what I understood, Unreal 3 will be utilizing the second core for discrete tasks, not for the main game thread: video, level loading, physics. Makes me wonder what kind of approach Quake 4 is employing and how much better off Unreal 3 will be on a dual-core platform vs. a single one.
I agree that only big-budget projects that create new game engines will be able to dedicate the man-hours for multi-core support.
Cheers<!--QuoteEnd--></div><!--QuoteEEnd-->
Also,
<a href="http://www.gamasutra.com/view/feature/1830/multithreaded_game_engine_.php" target="_blank">http://www.gamasutra.com/view/feature/1830...ame_engine_.php</a>
-> If you have time to kill.
Source got this update too; the Doom 3 / Quake 4 engine got a poor upgrade by comparison, though. But yeah, I really can't wait until multi-core support is put in and it's optimized more, or else this game really won't do too well. I am a big fan of NS1 (been playing since 2004, not the start, but a long time) and was excited about this, hoping the poor performance was just people with pre-built computers trying to run it on max settings. I am having fun for the little I can play, but there is a lot of warping and general bugginess. Turning off flashes didn't seem to help much, but shadows did, though now it stutters instead of just running at low FPS, lol.
Unlike some types of applications, games strive for 100% CPU utilization to give players the best experience their hardware can provide. That's easy enough with a single processor core, but more challenging when the number of cores is multiplied by two, and especially by four. Multithreading is needed to take advantage of extra processor cores, and Valve explored several approaches before settling on a strategy for the Source engine.
Perhaps the most obvious way to take advantage of multiple cores is to distribute in-game systems, such as physics, artificial intelligence, sound, and rendering, among available processors. This coarse threading approach plays well with existing game code, which is generally single-threaded, because it essentially just involves using multiple single threads.
Game code tends to be single-threaded because games are inherently serial applications—each in-game system depends on the output of other systems. Those dependencies create problems for coarse threading, though, because games tend to become bound by the slowest system. It may be possible to spread multiple systems across a number of processor cores, but performance often doesn't scale in a linear fashion.
Valve initially experimented with coarse threading by splitting the Source engine's client and server systems between a pair of processor cores. Client-side systems included the user interface, graphics simulation, and rendering, while server systems handled AI, physics, and game logic. Unfortunately, this approach didn't yield anywhere close to a linear increase in performance. Valve found that its games spend 80% of their time rendering and only 20% simulating, resulting in an imbalance in the CPU utilization of each core. With standard single-player maps, coarse threading was only able to improve performance by about 20%. Doubling performance was possible, but only by using contrived maps designed to inflate physics and AI loads artificially.
In addition to failing to scale well, coarse threading also introduced an element of latency. Valve had to enable the networking component of the engine to keep the client and server systems synchronized, even with the single-player game. Looking forward, Valve also realized that coarse threading runs into problems when the number of cores exceeds the number of in-game systems. There are more than enough in-game systems to go around for today's dual- and quad-core processors, of course, but with Intel's 80-core "terascale" research processor hinting at things to come, coarse threading appears to have little long-term potential.
As an alternative to—and indeed the opposite of—coarse threading, Valve turned its attention to fine-grained threading. This approach breaks down problems into small, identical tasks that can be spread over multiple cores, making it considerably more complex than coarse threading. Operations executed in parallel must be completely orthogonal, and scaling gets tricky if the computational cost of each operation is variable.
Interestingly, Valve has already implemented fine-grained threading in a couple of its in-house development tools. Valve uses proprietary VVIS and VRAD applications to distribute the calculation of visibility and lighting for game levels across all the systems in its Bellevue headquarters. These apps have long taken advantage of distributed computing, much like Folding@Home, but are also well suited to fine-grained threading. Valve has seen close to linear scaling adapting the apps to take advantage of multiple cores, and has even delayed upgrading systems in its offices until it can order quad-core CPUs.
Why are you telling a person with a perfectly good computer to potentially break it? Because NS2 runs too slowly? That's NS2's fault and no one else's. Even the most powerful computers get 30-60 FPS.
r_shadows gives absolutely no boost because the bottleneck isn't in draw calls or on the graphics card side.
BTW, UWE should change that crash uploader not to ask for confirmation, because I can't activate it in fullscreen mode. I'd go even further and automatically upload stack traces of exceptions that happen on unmodified clients. Valve uploads FPS stats from all their clients (min, avg, max); if UWE had done that, they'd have noticed that the average of min FPS is around 15-20 and the overall average is below a playable level on some maps.
I have the same problem: changing the graphical settings doesn't affect my FPS much. It's similar to a problem I had with the first STALKER. With full dynamic lighting on, I was getting 20 FPS regardless of the graphics settings I chose (other than resolution); when I switched to static lighting, my FPS jumped to 150. I'm using an NVIDIA 9600 GSO.
Your FPS certainly does sound awfully low for a 3.4GHz. =(
What resolution are you running the game at? Have you tried running in windowed mode?
I wonder what FPS NS2HD is getting.