IronHorse (Developer, QA Manager, Technical Support & contributor)
"In order to reduce the interpolation buffer length, UWE would first have to increase the rate at which update messages are sent. I don't know what the implications for that might be (in terms of server or client performance, bandwidth usage, etc.)"
They would have to double the network traffic, which would bring 24-player servers to their knees, I would imagine.
Might be possible in 6v6 though. Shrug.
I do recall updates being dynamic now, though still capped at some point?
You can check your network traffic with net_stats in game; you'll see that the send rate never really goes over 3 KB/s and receive varies anywhere from 5 up to around 15 KB/s depending on players and the events around you.
(24 x 3) + (24 x 10) = 312 KB/s
Even if the server traffic were doubled, you still would not break 10 Mbit combined up/down.
The server is hardcoded to set clients' MaxSendRate to 25,000 bps (~24 KB/s) when they connect, and any server worth its salt is most likely on a 100 Mbit connection anyway, so traffic-wise it would be inconsequential.
I can only speculate that the increased overhead lies in the network subsystem in the form of increased CPU usage.
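To put the arithmetic above in one place, here is a trivial sketch using the figures quoted in this post (the 3 KB/s and ~10 KB/s per-client rates are observed net_stats averages, not engine constants):

```python
# Rough server bandwidth estimate based on the net_stats figures quoted above.
# Assumption: ~3 KB/s from each client to the server, ~10 KB/s from the server
# to each client (observed averages, not engine constants).
PLAYERS = 24
CLIENT_UPLOAD_KBYTES = 3     # KB/s each client sends to the server
CLIENT_DOWNLOAD_KBYTES = 10  # KB/s the server sends to each client

server_in = PLAYERS * CLIENT_UPLOAD_KBYTES      # 72 KB/s
server_out = PLAYERS * CLIENT_DOWNLOAD_KBYTES   # 240 KB/s
total_kbytes = server_in + server_out           # 312 KB/s

total_mbit = total_kbytes * 8 / 1000            # ~2.5 Mbit/s
print(f"{total_kbytes} KB/s = {total_mbit:.1f} Mbit/s, doubled = {total_mbit * 2:.1f} Mbit/s")
# 312 KB/s = 2.5 Mbit/s, doubled = 5.0 Mbit/s - still well under 10 Mbit
```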
Why is screen latency being brought into this? You can do simple in-game checks for a good or bad screen. My old Acer AL2216W had huge latency - I died before I could even see enemies on my screen (CS:GO, CS:S, CS 1.6, Quake, etc.). I changed to a P225HQ and the problem is mostly solved (it could still be a bit better). The problem being raised there is a very old one, and it can mostly be fixed with LCD overdrive anyway...
Anyway, I'm going to watch this thread for the next few weeks before I step back into the game. Playing isn't fun when you stop firing after 10 shots on a skulk and then wait another 500 ms for its death just to save some ammo, lol.
This is genuinely a serious problem and should be fixed with high priority. In the 12 years I've been playing shooters I can't remember any game except Battlefield 3 taking over 200 ms before the enemy dies on my screen, and NS2 in its current state is the peak of the mountain as far as netcode goes - I've never seen netcode delayed this hard.
I've never had this "I'm playing on a 56k modem" feeling in any other game; the only time was when my old screen was plugged in and dragging me down, and I was pretty annoyed about that disadvantage.
@matso: Do you have any comment on whether the updaterate, maxsendrate and possibly the tickrate will be modifiable any time soon, or at all?
No point in increasing the data throughput if the server can't handle input. It'll just result in tons of dropped packets (e.g. the horrible levels of choke that used to be more common).
Eventually, I'd love to see the tickrate become adjustable (something like 30, 60, or 90 maybe) as many of us are running servers on overclocked hardware. I'd love to experiment with different playercounts/tickrates to see just how far I can push server performance.
I really dislike the interp feature, and I think it should be a server-side setting that can be turned off if so desired. This is a competitive game, and people need to choose a server with the lowest ping for them.
In NS2 the interp, the mechanism that tries to put everyone on an even playing field, is set at 100ms. That means that in practice everyone below 100(ms) ping should have the same visuals. This is all well and good apart from one thing. Most players don't play at 100 ping.
It seems that you (and a few other people posting in this thread) have a faulty understanding of what "interp" actually is. Interp has nothing to do with "evening the playing field" and it is certainly not about making all players with less than 100 ms ping get the same visuals at the same time.
Interp is short for interpolation, which, in layman's terms, means to fill in the gaps between 2 points.
In NS2, the rate at which the server sends update messages to clients is 20 messages per second (or one message every 50 milliseconds). If the game did not do interpolation, all the other players in your view would appear to jitter around at 20 FPS even if your graphics card was rendering a nice, smooth 100 FPS.
In order for the movement of other players to appear smooth, the game needs to interpolate between 2 update messages. Unfortunately, that means that the game can't immediately display the game state according to the most recent update message that it received. In order for the game to always have 2 update messages to work with, it needs to hold them in a buffer that is long enough to contain at least 2 update messages.
Given that update messages are received every 50 ms, the theoretical minimum buffer length is 50 ms. However, this would only work in an ideal world where every message can be expected to arrive perfectly on time with zero ping fluctuation. In the event that an update message was late, the game would have to start guessing in the mean time. By using a 100 ms buffer, you can be sure that there will always be 2 update messages in the buffer and, in super-ideal cases, there might even be 3.
In order to reduce the interpolation buffer length, UWE would first have to increase the rate at which update messages are sent. I don't know what the implications for that might be (in terms of server or client performance, bandwidth usage, etc.)
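For anyone who wants to see the technique in code, here is a minimal sketch of snapshot interpolation under the numbers above (20 snapshots/s, a 100 ms buffer). It illustrates the general idea only - it is not UWE's implementation, and the SnapshotBuffer class and its names are made up for the example:

```python
from bisect import bisect_right

INTERP_DELAY = 0.100   # seconds the client renders "in the past" (the 100 ms buffer)

class SnapshotBuffer:
    """Holds (server_time, value) snapshots and lerps between the two bracketing render time."""
    def __init__(self):
        self.snapshots = []                       # kept sorted by server_time

    def add(self, server_time, value):
        self.snapshots.append((server_time, value))

    def sample(self, client_time):
        render_time = client_time - INTERP_DELAY  # deliberately behind "now"
        times = [t for t, _ in self.snapshots]
        i = bisect_right(times, render_time)
        if i == 0:                                # nothing old enough yet
            return self.snapshots[0][1]
        if i == len(self.snapshots):              # buffer ran dry: would have to guess/extrapolate
            return self.snapshots[-1][1]
        (t0, v0), (t1, v1) = self.snapshots[i - 1], self.snapshots[i]
        a = (render_time - t0) / (t1 - t0)
        return v0 + a * (v1 - v0)

# 20 updates/s -> a snapshot every 50 ms; positions 0,1,2,3,4 arrive at t = 0, 0.05, ...
buf = SnapshotBuffer()
for k in range(5):
    buf.add(k * 0.05, float(k))
print(buf.sample(0.20))   # renders ~100 ms in the past, blending the t=0.10 and t=0.15 snapshots
```

The trade-off the thread is arguing about is exactly the INTERP_DELAY constant: shrink it and you see the world sooner, but a single late snapshot forces the "buffer ran dry" branch and the client has to start guessing.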
I only have a very basic understanding of the technical voodoo that is ping compensation, yes.
What I do have, though, is lots and lots of FPS experience across different engines and games. That experience has given me a reliable feel for when something is off. I might not be able to pinpoint the exact area that needs tweaking, but I do know that the state of the game as it is now is not very good for higher-level play.
Anyone with a similar amount of experience, or simply sharp senses, notices immediately that NS2 has some really prominent delay - such as the aforementioned skulk deaths that happen well after the bullet that killed the skulk, or dying as a lerk well after clearing a corner.
I would like to address the stance some people here are taking: that since everyone who plays a lot will be aware of this issue, you can work around it, for example by leaving combat preemptively. Yes, this is true, but it's not a good solution at all. It is very unintuitive and comes with a "why the hell am I doing this when it's the game's/netcode's fault?" factor. Working past this tests your willingness to accept that, in this area, you're playing against the game and not against your opponent, which really annoys competitive players and can make a casual player quit altogether. This is not good at all. You want to be able to make in-game decisions based on health, enemy positioning, your own positioning, and other in-game factors, not on whether the inherent delay will mess you up.
Even though "the best" players adapt, I assure you that they would rather have it work from the start so they could focus more on gameplay factors.
If the rate is the real issue here, as you propose, that's where to begin. My stance is the same: include some server-side options to at least try some settings out. In my opinion this is the only sensible thing to do, since there are so many displeased players - not to mention players and server admins with a wealth of experience with these kinds of things who are eager to experiment.
I tested it with some friends and I saw no network increase. The tickrate is useless if the updates are still capped at 20 per second.
Is anyone from UWE actually reading this thread anymore?
Somewhere buried in this forum is an in-depth statement from a dev regarding the tickrate and the updates sent by the server. As far as I can remember, both run on individual threads. This way you get your 30 (?) updates per second even if the server can't update the game world at the same rate. This resulted in smoother movement for yourself, despite seeing others "lag" when the tickrate drops too much. So it removed rubber-banding for the player.
Now, if you want to decrease the interpolation time, you first need to send more packets per second to the clients, or it will end in rubber-banding. But if you keep the tickrate at 30 updates per second, it is useless to send more updates to the client, because you would just be sending the same data again before the next tick has been calculated.
So what needs to be done (if I understood it right) is:
- increase the server tickrate
- decrease the interpolation time
- increase the updates sent per second
But this could lead to a heavy performance decrease on the server AND the client side. I imagine the architecture as follows:
Thread 1 "GameWorld": calculates 30 times a second where every player is, which Cysts are connected, how much health each building has, and so on...
Thread 2 "Network": sends an update to every client 30 (?) times a second. This update consists of the current "snapshot" of the world that was calculated by thread 1 in the last tick.
That means there are 33.3 milliseconds between each update you receive. This time frame needs to be compensated on the client, or the movement of other players would look jittery. This is interpolation. If you could increase the rate at which updates are calculated (= server tick rate) AND the rate at which updates are sent to the clients, you could decrease the interpolation. It just needs to be long enough for two packets and a bit to arrive at the client.
I hope that makes some sense; it's really difficult to explain. If you are really interested, just read the link matso posted to Valve's explanation: https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
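A rough back-of-the-envelope model of how those three knobs relate, purely illustrative: the two-buffered-snapshot requirement and the 10 ms jitter margin below are assumptions for the sake of the example, not engine values.

```python
def required_interp_ms(tickrate, updaterate, buffered_snapshots=2, jitter_margin_ms=10):
    """Smallest sensible interpolation window for a given tick/update rate.

    The server can't send fresh data faster than it simulates it, so the
    effective snapshot rate is min(tickrate, updaterate). The client then
    needs room for `buffered_snapshots` intervals plus some slack for jitter.
    """
    effective_rate = min(tickrate, updaterate)
    interval_ms = 1000.0 / effective_rate
    return buffered_snapshots * interval_ms + jitter_margin_ms

for tick, upd in [(30, 20), (30, 30), (60, 60)]:
    print(f"tick {tick}/s, updates {upd}/s -> ~{required_interp_ms(tick, upd):.0f} ms interp")
# tick 30/s, updates 20/s -> ~110 ms interp
# tick 30/s, updates 30/s -> ~77 ms interp
# tick 60/s, updates 60/s -> ~43 ms interp
```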
Scatter
Yeah, if I want to overclock the hell out of my server hardware and run a few 14-player comp servers with a high tickrate/updaterate and low interp settings, I think that should certainly be available.
It's one of those fundamental things that competitive players care more about than content and graphical options, which will all be put on low immediately.
Low interp and 120 fps, please!
EDIT: sss, why don't you publish that as a mod on the workshop so we can all play around with it?
Asraniel (Playtest Lead)
Believe me, the server tickrate won't make any difference. But the network update rate will (the rate at which the server sends updates, but also the client's), and so will the interpolation.
yeah, good players would prefer if it was perfect from the beginning... and we would all prefer if we were billionaires. but it's not so.
it's not a 'fault' in the game code. it's working as intended, because they decided that the currently supported settings were the most stable/reliable.
look at the hitreg in counterstrike and TF2 - it's dire; shots inexplicably miss and the hitboxes are almost always out of sync with the player model. if that sort of hit registration is the cost for reduced interpolation, then marines would be incredibly frustrating to play - as you'd effectively increase the random factor. NO THANKS.
on top of that, you can't eliminate interpolation, so you still get killed from around corners/through walls - it just becomes less common.
There's no real way to post it as a mod - it's a modified Server.exe, and I don't think the workshop would allow you to distribute executables.
@_Necro_: I've had quite a bit of experience with interp on the Source engine, as I used to run competitive L4D servers and even modded certain parts of the engine.
L4D too was stuck on 30 tick for most of its life, until a mod from CS:S was ported that uncapped it.
Raising the tickrate was just an exercise in learning about the engine for me; it's effectively wasted CPU and useless to clients without an increased updaterate.
I have played around with increasing the snapshot frequency and looked for the updaterate limit, but I've since stopped, because realistically it would be best to have the means to change such things exposed from the engine by UWE, instead of it being hacked in by me or others and breaking each patch.
@sss: I don't think they will uncap it. The problem is, if the server hardware can't hold the tickrate (and therefore can't send updates with new world data), the client starts to predict what could happen.
The positive thing is that other players keep moving when the server can't hold the tick rate for a short time, so it looks smooth. The bad thing is that those predictions are some damn heavy calculations. They suddenly create heavy load on your CPU, which results in an FPS loss on the client. (How much this hurts your FPS depends on your CPU.)
Now back to uncapping the tick rate. It would increase the CPU demands of the server and the client (because of the higher update rate). Now imagine server operators who increase their tick rate despite not having the hardware to hold it in the late game. This would, in the end, result in players getting low FPS in game without knowing that it's the server operator's fault.
We had this shortly after release with servers that couldn't hold their tickrate or simply allowed too many players. It surely gave many new players the impression that their hardware wasn't able to run NS2, when in fact it was the server forcing the client to calculate predictions, which worsens the CPU bottleneck.
matso said before that higher ping creates more CPU usage on the prediction thread, so I think it's safe to assume that servers with bad performance, and thus missed updates, would impose the same penalty you're talking about onto clients.
Isn't that the reason there's a performance rating in the server browser?
You can inspect the lua and see that the performance percentage is based upon the server's framerate (i.e. tickrate).
If the spiking of the tickrate is what you're worried about, then it is easily solved by the performance rating also reflecting that factor, which it may very well already do.
With that in mind, I don't see why UWE would not uncap the tickrate, updaterate and maxsendrate; servers that are incapable of running high, stable tickrates will be obvious from the server browser.
Even on LAN I was getting shot around corners (and had no-regs); the interp is still way too high, and adding ping on top of that just makes it really silly.
@sss: This is a good point. But somehow I don't trust the performance rating in the server browser. I suspect that a server that can only hold 10 ticks in the end game will show 100% performance if it happens to be in the early game the moment you refresh your server browser. Can you see how it is implemented? Does it calculate an average over time?
The server browser simply does "current tickrate / max tickrate". 30 ticks/s is 100%.
The performance rating shows only how the server performs at refresh time, just like ping. The server could be running at 20 ticks/s (66%) practically all the time and, just by luck, shoot up to 30 for half a second when you hit refresh, just like ping could.
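In code, the instantaneous rating described above looks like the first function below; the averaged variant is a hypothetical alternative that would catch sustained late-game drops, not something the game is known to do (and this is a sketch of the idea, not the actual lua):

```python
from collections import deque

MAX_TICKRATE = 30  # the cap discussed in this thread

def instant_performance(current_tickrate):
    """What the browser reportedly shows: a single-moment ratio."""
    return min(current_tickrate / MAX_TICKRATE, 1.0) * 100

class AveragedPerformance:
    """Hypothetical alternative: average over the last N samples so a lucky
    spike at refresh time can't hide sustained late-game drops."""
    def __init__(self, window=60):
        self.samples = deque(maxlen=window)

    def record(self, current_tickrate):
        self.samples.append(current_tickrate)

    def rating(self):
        avg = sum(self.samples) / len(self.samples)
        return min(avg / MAX_TICKRATE, 1.0) * 100

perf = AveragedPerformance(window=4)
for tick in (20, 20, 20, 30):          # struggling most of the time, one lucky spike
    perf.record(tick)
print(instant_performance(30))         # 100.0 - looks perfect at refresh time
print(round(perf.rating()))            # 75    - the average tells a different story
```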
fix interp
100ms interp translates to an artificial 100ms latency, so that's a given.
but it's indeed weird to force 100ms on a LAN.
They can't redo half the network code just for a LAN...
To mitigate this packet travel time (or at least create the illusion of mitigating it), the server keeps the ticks of the last 100 ms in its memory. This is called interpolation. When the server receives your movement packet, it looks back in time at the tick when you actually moved on your client. If someone shoots at you, the server can recalculate whether he was able to hit you at the time he was firing at you.
You're describing lag compensation there - not interpolation.
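For clarity, here is a generic sketch of lag compensation, the mechanism the quoted post is actually describing: the server keeps a short history of where each player was and rewinds to roughly the moment the shooter saw their target. This is the textbook rewind approach, not NS2's actual code; all names and numbers are illustrative.

```python
from bisect import bisect_right

HISTORY_SECONDS = 1.0   # how far back the server remembers positions (illustrative)

class PositionHistory:
    """Server-side record of where a player has been recently."""
    def __init__(self):
        self.entries = []                           # (server_time, position), sorted

    def record(self, server_time, position):
        self.entries.append((server_time, position))
        cutoff = server_time - HISTORY_SECONDS
        self.entries = [(t, p) for t, p in self.entries if t >= cutoff]

    def position_at(self, t):
        times = [e[0] for e in self.entries]
        i = bisect_right(times, t)
        return self.entries[max(i - 1, 0)][1]       # last known position at or before t

def rewind_time(server_now, shooter_latency, interp_delay=0.1):
    # The shooter was aiming at a view of the world that is roughly
    # (one-way latency + interpolation delay) in the past.
    return server_now - shooter_latency - interp_delay

target = PositionHistory()
for k in range(20):
    target.record(k * 0.05, k * 1.0)                # a target moving over one second

t = rewind_time(server_now=0.95, shooter_latency=0.05)
print(target.position_at(t))                        # the hit test runs against this old position
```

The rewind is also why you can be killed "around a corner": by the time the server resolves the shot, you have already moved on your own screen, but the shooter's (older) view of you had not.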
Being on LAN is irrelevant because the amount of interpolation is not related to ping. The amount of interpolation should be based on the frequency at which the server sends updates to clients and that doesn't increase just because you're playing on LAN.
no, but the amount of interpolation is there to mimic a low ping and reduce lag, which is not applicable on a LAN.
it's pointless discomfort, like wearing a rubber under your speedos when you go swimming.
Yes, but removing it would have required modifying a fair amount of the network code I'm sure, not really worth it for a single LAN event when everyone playing it is used to the interp anyway.
There will be more LAN events. Everyone hopes and wants that.
Also, more importantly, there will be thousands more games on high-performance servers, with player counts around 12 and with all of the players having low ping.
The game is tailored to neither of those scenarios. It's currently set, across the board, to be played at pings varying from medium to high on servers with a bunch of players.
Nope, interpolation does not mimic a low ping or reduce lag. Its purpose is to eliminate animation jitter/choppiness.
As I pointed out earlier, 100 ms of interp with 30 server updates a second means the client is buffering 3 updates. It makes sense to have a bit of safety margin since ping and server update rate will invariably fluctuate and it's worth it to trade a few ms of reaction time for smoother gameplay, but something is wrong when your safety margin is 200% of the ideal.
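Putting numbers on that claim (a trivial check using the 20 and 30 updates-per-second figures mentioned in this thread):

```python
def snapshots_buffered(interp_ms, updates_per_second):
    interval_ms = 1000.0 / updates_per_second
    return interp_ms / interval_ms, interval_ms

for rate in (20, 30):
    count, interval = snapshots_buffered(100, rate)
    print(f"{rate} updates/s -> {interval:.0f} ms between snapshots, "
          f"{count:.0f} snapshots inside a 100 ms interp window")
# 20 updates/s -> 50 ms between snapshots, 2 snapshots inside a 100 ms interp window
# 30 updates/s -> 33 ms between snapshots, 3 snapshots inside a 100 ms interp window
```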
That's more or less what I was saying.
The network traffic wouldn't bring a server's internet connection down.
It's that the server performance can't handle that increase in data, I believe.
That was the case months ago, at least, around launch.
I second the request for an adjustable tickrate.
http://steamcommunity.com/id/s-sentient/screenshot/558719694697499101