I don't wanna be the bad guy, but:
It would be nice to get a satisfying explanation of why the overall quality has decreased lately.
Servers are the same (mine included). Clients are more or less the same. OK, Kenny bought a shiny new PC, but not everyone did. And I don't believe an EMP bomb hit the Internet and forced all packets to be rerouted through Japan... So that doesn't leave many possibilities.
I have a good ping on the servers I connect to (30-50 most of the time, 80 max in rare exceptions), as do most of the other players. I never play on servers above 18 slots. I also check ping outside the game; the values are the same.
Examples:
- If I stop shooting at a skulk after 15-20 bullets (early game), sometimes he only dies a second later (clearly noticeable). Yes, health doesn't drop instantly because of lag compensation on the skulk's side, but lately the delay has clearly been growing. It's like a samurai movie: you know he's going to die, so you stop paying attention to the quadruped, sheathe your sword, and start to walk away. Then you hear the metallic click of the sword locking, and the skulk just falls...
On the skulk side, ambush spots the opponent knows about have become death traps. Even if you anticipate and make a move, you're ultimately too late. At some point you just can't manage it.
- If I enter a room running and a skulk is waiting for me, I'm sure I've emptied a full clip before he even sees me fire half of it (and I'm being generous). That is a noticeable amount of time. Sometimes you don't even see them move before they die. OK, that's always the case in online games, but it's impossible to miss in this latest build. Almost as bad as Battlefield 4 (sorry, just saying).
- Same goes for FPS: a significant drop on the client side since the last patch. If frame rate has an influence on the so-called netcode... what can I say?
- Once, I had the same symptoms Metroid posted. I have an SSD (not full), an i7, and a huge amount of RAM. So where did I go wrong? I can render almost in real time in Blender while editing a simple scene; restarting/resetting a game shouldn't look like this. If calculations sit at 0, maybe the calculations aren't in the queue yet...
- We can clearly see that when people have a ping above 100, the more they use the mouse wheel, the more teleportation happens.
But the funny thing is that the same happens to people with a really good ping (under 30), while net_stats reports no choke, no errors, nothing. Well, if it says so...
Everything is the same except the build number of NS2. So unless somebody left all_debug=1 enabled, which would be a classic joke (everybody falls for that one at least once), my opinion is: I'm afraid the issue is there, not elsewhere.
IronHorse (Developer, QA Manager, Technical Support)
edited October 2014
@UncleCrunch
Are you even playing on servers with (properly) modified network rates?
Such lag-compensation-related issues happen in AAA games with insane budgets too:
Compared to other fast-paced games, NS2 makes a 50ms ping feel like 250ms.
I kinda like this aspect of the game; it allows me to blame my lack of skill on something else!
Compared to what games exactly? I could make the same claim the other way around; throwing out claims like that without a reference is useless.
I know for certain that COD makes 50ms feel like 400ms (they refuse to fix their crappy lag compensation). It's usually not the ping that's the problem but the tick rate.
Was just setting up the joke XD! But since you asked, there are a couple of Source games that on average certainly feel a lot tighter.
I'm aware comparing a plain FPS with NS2 is not fair. High-velocity aliens with wildly twisting hitboxes are bound to exacerbate latency issues, and close-range melee means a lot of abrupt turning; it's already quite a feat that the game is even as playable as it is.
Are you even playing on servers with (properly) modified network rates?
It would be good if the server list advertised properly modified servers for all of us low-brow non-server-admins. A tickrate in the title tells me nothing with all the new tweakable settings. Plus it might motivate other server owners to invest.
Such lag-compensation-related issues happen in AAA games with insane budgets too:
That's not a very strong argument for why a tighter experience shouldn't be more of a focal point in such a fast-paced game, especially with strong technical coders on the CDT volunteering their time. As long as they contribute, your budget isn't spent. So why not put a couple of days towards making this experience better? Or better, just backlog it until some time frees up. I'm not saying redo the netcode, but at least work towards a status quo of better-tuned servers.
That's not a very strong argument for why a tighter experience shouldn't be more of a focal point in such a fast-paced game, especially with strong technical coders on the CDT volunteering their time.
It is a strong argument given the vast difference in budget and resources between the two developer teams: 4-8 people with fan funding versus 120+ people with EA's near-unlimited budget.
You can focus on something all you'd like, but if the resources aren't there to develop it, all the focus in the world may not mean much.
That being said, the CDT HAS made this experience better: these network variables HAVE been unlocked by our team, allowing you to mimic HL1/GoldSrc-engine levels of network updates if you know how and have the hardware to support it.
If instead you are suggesting changing the default values for all servers, that would be a poor idea due to hardware variance and the fact that even today, two years later, many populated servers are STILL mismanaged and suffer huge performance issues even on default values with no mods. (I won't name names, but if you live in the US you know exactly which servers do this and don't respond to emails.)
It is a strong argument given the vast difference in budget and resources between the two developer teams: 4-8 people with fan funding versus 120+ people with EA's near-unlimited budget.
That's a good argument for why it isn't there /yet/, not why it never could be. Plus, you can't with a straight face claim the team is too small to improve the netcode and then in the same post say you added parameters that do just that! By that logic the small-team fallacy would apply equally to any other feature released in 267-269.
What I suspect you are implying is "we have more important things planned for the foreseeable future" or "this is not something I personally see any value in investing more time into".
If instead you are suggesting changing the default values for all servers, that would be a poor idea due to hardware variance and the fact that even today, two years later, many populated servers are STILL mismanaged and suffer huge performance issues even on default values with no mods. (I won't name names, but if you live in the US you know exactly which servers do this and don't respond to emails.)
Here, I'll quote one missed suggestion from that same post:
It would be good if the server list advertised properly modified servers for all of us low-brow non-server-admins.
Sounds like the new parameters caused a schism in the server pool, introducing partially broken and possibly optimized servers alongside vanilla ones. There's no way a player can tell the difference, except by hoping the NoLag tags aren't made up.
Maybe it's time to address mismanaged servers in a way that suits a smaller team with a community that loves to contribute, like better self-education resources for server admins. I bet someone on the CDT, or even any community member, could take an evening to write up a more detailed configuration and hardware guide.
And if such a guide exists, promote it on the UW outlets! Lobby the server admins, package it with the server!
You are 95% of the way there, and it's very much in the public eye, so why not add it to the bottom of the backlog? If it ever comes up, great! If not, at least people here felt heard.
Source 2009 had 100ms interp; NS2 has 100ms interp.
https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
It also shows that many games use a tickrate of 30. Just saying!
Servers don't just need a stable 30 tickrate. Update rates and bandwidth per player matter too, among other things, and most currently running servers get it wrong!
(If anyone wonders whether the TAW servers do it right... we are working on it.)
Update rates and bandwidth per player matter too, among other things, and most currently running servers get it wrong! (If anyone wonders whether the TAW servers do it right... we are working on it.)
If most servers get it wrong, that would account for the bad experience people report. Sounds like a good place to start improving things!
There's only a page of popular populated servers, even fewer if you ignore the ones with atypical player counts.
From what I have seen, I wouldn't recommend changing any rates in NS2 currently, aside from maybe increasing the max data cap a bit. There are problems with NS2's netcode that I suspect will require significant debugging to figure out, and there are currently no reliable tools to do that with. Netcode problems are probably among the most difficult things to debug, since the state of everything on the server can't easily be monitored in real time. Probably every game ever released with lag compensation has had bugs; some took a long time to fix, and some have never been fixed.
The way Spark handles lag compensation has theoretical advantages over, say, Source, which depends on server ticks for its snapshots, so increasing the tickrate of NS2 is actually quite pointless, unless you want smoother Drifters. That said, increasing the update rates and adjusting interpolation to match would be beneficial and does provide improvements; however, it also seems to exacerbate some of the netcode issues. Hopefully debugging tools can be added and some of the issues found, but it's certainly no easy task.
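To illustrate the tick dependence being described, here is a toy sketch in Python. It is illustrative only, not Spark or Source code; the names and the one-dimensional position model are invented. The point is just that a tick-snapshot rewind can only return states that existed on a tick, while a tick-independent rewind can interpolate between stored samples.

    # Toy contrast, not engine code: rewinding a player's 1-D position
    # for lag compensation using (a) the nearest stored tick snapshot
    # versus (b) interpolation between stored samples.
    from bisect import bisect_left

    def rewind_nearest_tick(history, t):
        """history: sorted list of (time, x) snapshots taken on ticks.
        Tick-snapshot style: snap to the closest snapshot, so the
        rewound state can be off by up to half a tick of movement."""
        times = [h[0] for h in history]
        i = bisect_left(times, t)
        if i == 0:
            return history[0][1]
        if i == len(history):
            return history[-1][1]
        lo, hi = history[i - 1], history[i]
        return lo[1] if t - lo[0] <= hi[0] - t else hi[1]

    def rewind_interpolated(history, t):
        """Tick-independent: blend the two samples straddling t, so
        rewind accuracy doesn't hinge on the tickrate."""
        times = [h[0] for h in history]
        i = bisect_left(times, t)
        if i == 0 or i == len(history):
            return rewind_nearest_tick(history, t)
        (t0, x0), (t1, x1) = history[i - 1], history[i]
        return x0 + (t - t0) / (t1 - t0) * (x1 - x0)

    # A skulk moving 10 units/s, snapshots every 50ms (20/s):
    snaps = [(i * 0.05, i * 0.5) for i in range(10)]
    print(rewind_nearest_tick(snaps, 0.174))  # 1.5  (snapped to the 0.15s sample)
    print(rewind_interpolated(snaps, 0.174))  # 1.74 (exact)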
What would such a debugging tool entail? A mod that records, plus an external tool to unify and analyse the client/server recordings? Or would the recorder need to be lower level?
To fix this problem, the person who understands that subsystem just needs to spend some time on it.
If everyone on the server has a 50 ping, I'd expect the lag to be around 150ms (double the ping plus a tick). And I don't understand how a skulk lands two bites (normally 450ms apart) within that window.
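As a back-of-the-envelope check on that estimate (a sketch only; the breakdown into terms is my assumption, and the 450ms bite interval is from the post above):

    # Rough delay budget for a 50-ping player: "double ping + tick"
    # from the post above, plus the default 100ms interpolation window
    # discussed further down the thread.
    ping_ms = 50             # round-trip time
    update_ms = 1000 / 20    # one interval at 20 server updates/s (the "tick" above)
    interp_ms = 100          # default interpolation window

    naive = 2 * ping_ms + update_ms   # the post's estimate
    with_interp = naive + interp_ms
    print(naive, with_interp)         # 150.0 250.0 -- both well under
                                      # the 450ms between two skulk bites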
The tool would need to be engine-level; you can't currently do much from Lua to validate lag-compensation results (aside from some things on a listen server).
As for delays in response to inputs, that could unfortunately be more complex. I'm not sure exactly how Spark handles player inputs. I think they are processed immediately when the server receives them (à la Source post-Orange Box), but they could also be queued on the server and all processed on the next tick. With the server tick and update rates also being out of sync, there is potential for quite a bit of variability. You could also factor in human reaction time and the perception of taking damage, the second of which could use improvement in NS2.
As a note, I've also noticed that NS2 tends to exaggerate pings, and I sometimes wonder if the packet-latency calculation is based on a ping value that often seems inflated... hmmm.
IronHorse (Developer, QA Manager, Technical Support)
edited October 2014
To add to what Dragon said:
One of the symptoms we found when testing the highest rates a very powerful server could handle was that many clients ran into a wall processing Lua, creating hitches.
Meaning that at the highest levels (~4x the current update rates), the bottleneck became processing the world on the client, which is very difficult to improve upon. (Although the CDT has recently implemented some potential improvements for the next patch.)
But this doesn't mean you can't increase values a bit right now for improvements, with little to no downside, so long as your server can handle the extra traffic and processing and you know precisely how to configure it. Any improper configuration will cause issues.
Compared to other fast-paced games, NS2 makes a 50ms ping feel like 250ms.
I've actually seen this mentioned in some form or another by a lot of players.
I don't see how 100ms of interp accounts for way more than 100ms of extra lagginess,
and I don't think the tickrate is the fundamental problem:
other games with low tick rates (e.g. Quake Live has 40) still manage to get updates to their clients in time.
If lag compensation is capped (most fast-paced/competitive games do this), then the most egregious problems should become less of an issue. That would at least be some progress.
In other news, I don't think I've actually dodged an NS2 GL in a while (and I only join unmodified 100%-perf servers at <80 ping).
Compared to other games at the same latencies, it takes an insane amount of extra time to read a projectile's trajectory in NS2;
in NS2 it feels like 2 or 3 whole nades explode before I even know the arc of the first one.
Maybe the nades move slightly faster than in other games, but it's still broken.
I don't know the internals of NS2's netcode, but other games update non-hitscan projectiles on the server; that way clients get a chance to dodge them.
Launchers are supposed to be balanced by strong aspects (damage/AoE) and weak points (being easy to dodge);
not getting hit by one feels random when you get zero reaction time. It doesn't feel like you can use any reflex/movement/awareness skills.
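For what it's worth, the capping idea mentioned above usually looks something like this (a hypothetical sketch; the 200ms cap and the names are made up, and this is not a description of how NS2 actually behaves):

    # Capped lag compensation: the server rewinds hitboxes by the
    # shooter's latency plus interp, but never further than a fixed cap,
    # so one high-ping player can't drag everyone too far into the past.
    MAX_REWIND = 0.2  # seconds; an arbitrary, typical-feeling cap

    def rewind_time(server_time, shooter_latency, interp_delay):
        rewind = min(shooter_latency + interp_delay, MAX_REWIND)
        return server_time - rewind

    print(rewind_time(10.0, 0.05, 0.1))  # 9.85 -- fully compensated
    print(rewind_time(10.0, 0.50, 0.1))  # 9.8  -- capped; the laggy
                                         # shooter has to lead instead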
And I don't think the tickrate is the fundamental problem: other games with low tick rates (e.g. Quake Live has 40) still manage to get updates to their clients in time.
A tickrate of 40 is twice as fast as the 20 of NS2's standard configuration, but that doesn't matter much, because it's only a 25ms difference (50ms per update at 20/s versus 25ms at 40/s).
We are looking for amounts more like 200-300ms.
The standard NS2 tickrate is 30 and should never drop below that.
They are referring to the update rate, I think, which is at 20. NS2's tickrate doesn't really affect anything other than non-player entities.
25ms is a pretty big difference in a fast-paced FPS.
But this doesn't mean you can't increase values a bit right now for improvements, with little to no downside, so long as your server can handle the extra traffic and processing and you know precisely how to configure it. Any improper configuration will cause issues.
Without insight into the engine, server admins are basically just guessing, which is a bit of a waste of the effort that went into the feature. Is there anyone here who has set things up properly and can share? Any tips on how to configure it precisely, assuming median hardware? @xDragon seems to have some insight into the adverse consequences of some of the settings; anything concrete you could mention?
I'm going to shamelessly borrow an image from Aeroripper here -
As a server admin, you can figure out pretty easily how much overhead your server has just by looking at net_stats. For starters, make 100% sure you are giving your server at least 2 cores, period. After that, join your server and look at the average value for moves in the net_stats table: not so much the %, but the ms/move. Combined with your maximum player count, that is the best gauge of your server's performance. If your ms/move is near or above 1, you should NOT change any settings, as your server is most likely already struggling late game, or running very close to the limit (figuring 18-20 players). If my understanding is correct, the ms/move value is the amount of CPU time each move takes to calculate. If you figure 30 moves a second (the default) and 20 players, that comes to 600ms, about 60% of your CPU budget per second, and in vanilla NS2 you do not want to exceed that value. Entity updates can easily take 40% late game in NS2, already putting you over the limit. To ensure good playability, I would say your total move usage shouldn't exceed 50%. That *should* leave enough overhead for most late-game scenarios.
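The arithmetic in that paragraph as a small sketch (the 30 moves/s default, the 20-player figure, the ~40% late-game entity cost and the 50% ceiling are all from the post; the function itself is just illustration):

    # Fraction of one CPU-second spent computing player moves, from the
    # ms/move reading in net_stats.
    def move_budget_pct(ms_per_move, move_rate=30, players=20):
        return ms_per_move * move_rate * players / 1000 * 100

    usage = move_budget_pct(1.0)  # the post's example: 1 ms/move
    print(f"{usage:.0f}%")        # 60% -- and entity updates can add ~40% late game
    print(usage <= 50)            # False: don't raise any rates on this box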
As for increasing rates, you can raise the client update rate to 30 without too much concern, so long as you increase the data cap at the same time. The performance impact on the server is pretty minimal, provided you have allocated 2+ cores. Then you could slightly lower interp, to say 67 or 70. That change has some positive impact on the responsiveness of the game, though nothing huge, and it seems to potentially exacerbate other issues. To push the update rate even higher, you would need to increase the move rate (see above) and the tickrate, and the move rate drastically increases CPU usage.
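If you wonder where "67 or 70" comes from, a plausible reading (my inference, tying it to the two-snapshot interpolation rule explained later in this thread) is that interp is being kept at two update intervals:

    # Keep interp at two update intervals so one lost packet can still
    # be bridged (see the interpolation explanation further down).
    def min_safe_interp_ms(update_rate):
        return 2 * 1000 / update_rate

    print(min_safe_interp_ms(20))  # 100.0 -- the default pairing
    print(min_safe_interp_ms(30))  # ~66.7 -- hence "67 or 70" at update rate 30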
To clarify terms:
Tickrate in NS2 is just like the tickrate in Source games. The only major difference is that Source relies on the ticks for lag compensation, so higher tickrates improve the accuracy of lag compensation by a small margin.
Move rate in NS2 is comparable to cmdrate in Source: how often client commands (inputs) are sent to the server. This affects part of how 'responsive' the game feels, mainly to your inputs (weapon switches, etc.).
Update rate in NS2 is like the update rate in Source: how often server updates are sent to a client. This affects how quickly your client responds to changes in the game (you taking damage, other players dying, etc.).
Source lets clients choose their own settings, usually clamped between server-enforced max/min values. NS2 forces all clients to use what the server sets.
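To make the three knobs concrete, here is a toy model (an oversimplification I'm assuming for illustration; it ignores scheduling, queuing and loss) of how each rate adds to the delay you perceive on top of ping:

    # Average extra delay on top of ping, per the definitions above:
    # waiting to send your input, waiting for the next server update,
    # and the interpolation window the client renders behind.
    def avg_added_delay_ms(move_rate=30, update_rate=20, interp_ms=100):
        return 0.5 * 1000 / move_rate + 0.5 * 1000 / update_rate + interp_ms

    print(avg_added_delay_ms())            # ~141.7 ms at stock rates
    print(avg_added_delay_ms(40, 40, 50))  # 75.0 ms with raised rates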
IronHorse (Developer, QA Manager, Technical Support)
edited October 2014
Don't forget interpolation in the explanations, Dragon; it's the safety net that catches missed packets and oddities and ensures smooth gameplay.
FYI: we found that values around 60 or lower produce odd results.
Source lets clients choose their own settings, usually clamped between server-enforced max/min values. NS2 forces all clients to use what the server sets.
First of all, that's not true: NS2 clients can also set the interpolation rate and move rate entirely freely themselves. But they shouldn't, because doing so doesn't take the server rates into account, and in my experience people tend to screw up those settings in NS2 (no, setting the interpolation rate to 40ms won't help you; it will only make interpolation useless).
And to that one guy who thought it would be fun to bombard my server by setting his move rate to 120: no, that is not a nice thing to do, and it's the reason my firewall blocked you... :S
Here, briefly, is what interpolation is (taken from the Source dev wiki):
By default, the client receives about 20 snapshots per second. If the objects (entities) in the world were only rendered at the positions received from the server, moving objects and animations would look choppy and jittery. Dropped packets would also cause noticeable glitches. The trick to solving this is to go back in time for rendering, so positions and animations can be continuously interpolated between two recently received snapshots. With 20 snapshots per second, a new update arrives about every 50 milliseconds. If the client render time is shifted back by 50 milliseconds, entities can always be interpolated between the last received snapshot and the snapshot before that.
Source defaults to an interpolation period ('lerp') of 100 milliseconds (cl_interp 0.1); this way, even if one snapshot is lost, there are always two valid snapshots to interpolate between. Consider the following example of arrival times of incoming world snapshots:
The last snapshot received on the client was at tick 344, or 10.30 seconds. The client time continues to increase based on this snapshot and the client frame rate. When a new video frame is rendered, the rendering time is the current client time (10.32) minus the view-interpolation delay (0.1 seconds). In our example that is 10.22, and all entities and their animations are interpolated using the correct fraction between snapshots 340 and 342.
Since we have an interpolation delay of 100 milliseconds, the interpolation would still work even if snapshot 342 were missing due to packet loss; it could then use snapshots 340 and 344 instead. If more than one snapshot in a row is dropped, interpolation can't work perfectly because it runs out of snapshots in the history buffer. In that case the renderer uses extrapolation (cl_extrapolate 1) and tries a simple linear extrapolation of entities based on their known history so far. Extrapolation is only done for up to 0.25 seconds of packet loss (cl_extrapolate_amount), since the prediction errors would become too big after that.
Entity interpolation causes a constant view "lag" of 100 milliseconds by default (cl_interp 0.1), even if you're playing on a listen server (server and client on the same machine). This doesn't mean you have to lead your aim when shooting at other players, since the server-side lag compensation knows about client entity interpolation and corrects for this error.
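Here is the wiki's example as a small runnable sketch (timestamps and positions invented to mirror the numbers quoted above):

    # Render in the past by the interp delay and blend the two
    # snapshots straddling the render time.
    INTERP = 0.1  # cl_interp 0.1

    def render_x(snapshots, client_time):
        """snapshots: list of (arrival_time, x), oldest first."""
        t = client_time - INTERP  # shift rendering back in time
        older = max((s for s in snapshots if s[0] <= t), key=lambda s: s[0])
        newer = min((s for s in snapshots if s[0] > t), key=lambda s: s[0])
        frac = (t - older[0]) / (newer[0] - older[0])
        return older[1] + frac * (newer[1] - older[1])

    # Snapshots 50ms apart; at client time 10.32 the render time is
    # 10.22, which falls between the 10.20 and 10.25 snapshots.
    snaps = [(10.10, 0.0), (10.15, 0.5), (10.20, 1.0), (10.25, 1.5), (10.30, 2.0)]
    print(render_x(snaps, 10.32))  # 1.2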
OK, so now you know that on top of your ping there is another delay between what you see and what is actually happening on the server: the interpolation (default 100ms). To make the game feel more responsive, we want as short an interpolation window as possible without losing its ability to interpolate across two server -> client snapshots. This means that to effectively lower the interpolation time, we have to increase the server's send-rate.
And with that, we enter the new chapter of adjusting server net-vars:
Basically there are 3 server variables when it comes to networking:
tick-rate - controls the overall update rate of AI units (it also caps the maximum update rate of the server, meaning the send-rate can't be higher than the tick-rate)
send-rate - the number of server -> client snapshots per second, and therefore how often the server has to calculate player movement
bandwidth limit - how much data, in bytes, the server is allowed to send per player per second
Since we want the client to interpolate across 2 snapshots, the interpolation window should be: 2 / send-rate = interpolation time in seconds.
So let's say we want to halve the interpolation time (to 50ms); that requires a send-rate of 40 per second. The tick-rate still caps the maximum update rate of the server, so to set the send-rate to 40 we also have to raise the tick-rate to 40.
Next, bear in mind that the server needs enough player-movement data to calculate the server -> client updates correctly, which means it needs enough client -> server updates. Most people would now say: let's just increase the move-rate (client -> server snapshots per second) to 40 and we'll be fine.
Sadly that's not enough: when setting the move-rate you also have to remember that packets between client and server can get lost along the way (the very reason interpolation exists). To cover for packet loss and delays, we should add a buffer of about 20% to the move-rate, so the server always has a few extra player-movement updates from the client to bridge lost packets (via extrapolation).
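Putting the recipe from the last few paragraphs together in one sketch (the two-snapshot rule and the ~20% buffer come from this post; treating 30 as the tick-rate floor follows the thread; function and variable names are mine):

    # Given a target interp, derive the matching send-rate, tick-rate
    # and buffered move-rate.
    import math

    def derive_net_vars(target_interp_ms):
        send_rate = math.ceil(2 * 1000 / target_interp_ms)    # 2 / send-rate rule
        tick_rate = max(send_rate, 30)          # tick-rate must not cap the send-rate
        move_rate = math.ceil(send_rate * 1.2)  # ~20% packet-loss buffer
        return dict(interp_ms=target_interp_ms, sendrate=send_rate,
                    tickrate=tick_rate, moverate=move_rate)

    print(derive_net_vars(100))  # sendrate 20, tickrate 30, moverate 24
    print(derive_net_vars(50))   # sendrate 40, tickrate 40, moverate 48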
Overall, all these extra calculations need more CPU power, as you might have guessed. A higher move-rate is totally fine for clients, since they already compute more updates than needed to generate high FPS; but the server has to keep up with every extra snapshot and calculation.
I can't really give statistics on how changing each variable affects the server's CPU needs; that differs from server to server and depends on the CPU model.
But one thing must hold: the CPU should never need more time to calculate one move than the time available for it based on the send-rate.
Also keep in mind that with more unit interactions in the late game, the CPU power needed grows over the course of the round. Having at least 20% idle CPU available when the round starts is a must-have; otherwise, I can promise you the CPU will not be able to handle the late game.
Overall, this post is not meant as a guide to tweaking a server. I made it so that all of you get a sense of why you have to be very careful when talking about net variables. Every server setup is different, and there are tons of things to keep in mind while tweaking these variables.
Being careless with them will very quickly turn the game experience on that server into a nightmare. So please only try your hand at tweaking the network behaviour if you fully understand what's going on.
Another fun fact about tweaking the net-vars: the game code might not be designed for higher update rates, so you may see weird issues and bugs at rates above the defaults.
Edit:
@DaanVanYperen and others: if you join a server with modified network-variable settings (besides increased bandwidth limits), there will be a warning telling you so. It's then up to you whether you trust the server admin enough to join.
That said, there are a few server admins matso and I have talked to and taught how to use the variables correctly. Known servers this currently applies to are Wooza's (network variables were tweaked there by matso and Wooza, so the noticeable hitching was reduced), The Thirsty Onos (supafred decided not to tweak any net-vars besides the bandwidth limit, to avoid choke in the late game), and my own server.
I may have forgotten a few server admins, but most stuck to the default rates aside from increasing the per-player bandwidth limit, because after testing, the increased CPU cost wasn't worth the performance gain.
Regarding the server rate variables: I had heard there was a plan to set up "premade" settings that could be adjusted based on server hardware. Is that even possible, given the variety of server hardware out there?