Server Specs
syserror
Join Date: 2007-08-09 | Member: 61840 | Members, Constellation, Reinforced - Shadow
Post your server setup, slot count and stable tickrate.
I am trying to get some information on what setup/hardware people are using on their servers. I was originally going to e-mail the skulkrush guys to see what they ran, but I figured it could be useful generally.
If anyone (especially anyone running a high-slot, stable 30-tickrate server) could post what their setup is, that would be great.
Ideally looking for (<b>example</b>):
<b>CPU</b>: i7 2600k @ 4.8Ghz
<b>RAM</b>: 16GB DDR3
<b>OS</b>: Windows server 2008 R2 (64bit)
<b>Virt</b>: Bare-metal/ESXi/Hyper-V (virtualization used [if any])
<b>Servers</b>: 2
<b>Slots</b>: 22
<b>Tickrate Dropoff</b>: 18 players (at what point does the tickrate drop from 30)
Comments
<b>RAM</b>: 8GB DDR3
<b>OS</b>: Windows server 2008 R2 (64bit)
<b>Virt</b>: ESXi 5.0
<b>Servers</b>: 1
<b>Slots</b>: 12
<b>Tickrate Dropoff</b>: 6 players
<b>Other</b>: Server in Bluesquare London; exclusive RAM allocation, high-priority CPU allocation, dedicated cores (non-shared)
RAM: 4GB FB-DDR2
OS: Windows server 2012 (64bit)
Virt: None
Servers: 1
Slots: 18
Tickrate: Min: 26, Avg: 29, Max: 33
Location: My house, lol. SouthWest, Michigan, USA [East Coast]
CPU: Sandy Bridge Core i5 2500k @ 4.4 Ghz
RAM: 8GB DDR3 @ 1600 Mhz
OS: Windows server 2008 R2 (64bit)
Servers: 3
Slots: 18-22
Tickrate: Min: mid 20's (only for extremely long games), Avg: 30, Max: 30
The All-In servers are running on an Ivy bridge setup from NS2Servers.com which runs the following:
CPU: Ivy Bridge Core i5 3570k @ 4.3 Ghz
RAM: 8GB DDR3 @ 1600 Mhz
OS: Windows server 2008 R2 (64bit)
Servers: 3
Slots: 20-24
Tickrate: Min: mid 20's (only for extremely long games), Avg: 30, Max: 30
<b>RAM:</b> 16GB DDR3 @ 1333 MHz (4GB for the VM)
<b>OS:</b> Windows Server 2008 R2 (64bit)
<b>Virt:</b> Paravirtualized, KVM on Ubuntu 12.04
<b>Servers:</b> 1
<b>Slots:</b> 16
<b>Tickrate:</b> Min: 20, Avg: 27, Max: 30
CPU: Intel i5 2400 @ 3.10GHz
RAM: 8GB
OS: Windows 7 Ultimate 64-bit
HDD: Adata S510 SSD
GFX: NVIDIA GeForce GTX 550 Ti
Servers: 1
Slots: 12
Tickrate: I don't have anything fancy to track this, but for most of the game it was 29-30; it dipped to 20 and never really exceeded 30. Also note that I was playing on it, so it wasn't dedicated.
<b>CPU:</b> Ivy Bridge i5 3570k @ 4.4GHz
<b>RAM:</b> 8GB DDR3
<b>OS:</b> Windows Server 2008 R2 (64bit)
<b>Servers:</b> 3
<b>Slots:</b> 22
<b>Tickrate:</b> Min: ~20, Avg: 25-30, Max: 30
QUOTE:
RAM: 4GB FB-DDR2
OS: Windows server 2012 (64bit)
Virt: None
Servers: 1
Slots: 18
Tickrate: Min: 26, Avg: 29, Max: 33
Location: My house, lol. SouthWest, Michigan, USA [East Coast]
Is that tickrate info on a full server?
I'm curious how it is that an 8x2.66GHz box with 4GB RAM performs so much better than a VM running 4x2.30GHz and 8GB RAM. Is it the virtualization? The 0.36GHz clock difference? The difference between Server 2008 R2 and Server 2012?
Can anyone enlighten me?
And how can we compare tickrate with a basic min/avg/max?
Tickrate is a value that is capped at a maximum of 30; anything over that is either a bug or just poor measuring by the game code. Any time your tickrate goes below 30, it means one of two things.
#1 There was a small, very brief overload in processing that your server had to do, and it caused a short drop to around 26-30.
#2 Your CPU has hit 25% (i.e. the server's main thread has saturated one core of a quad-core), and real game code is now forced to wait for the CPU to become free, which slows everything down.
With that in mind, measuring "tickrate" is kind of meaningless. What would be more valuable is measuring at what point your tickrate starts to decline (because it will ALWAYS decline if you push it too far). This would require an easy, reproducible method of inducing the exact same load on a server to get reliable measurements from it, and as far as I know there isn't one yet.
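To make that "decline point" idea concrete, here is a minimal C sketch (not NS2 code, just a toy model that assumes a fixed 30-tick budget of roughly 33.3 ms per tick): the reported tickrate stays pinned at the cap while the per-tick work fits inside the budget, and only starts to fall once the work overruns it.

#include <stdio.h>

#define TARGET_TICKRATE 30
#define TICK_BUDGET_MS (1000.0 / TARGET_TICKRATE)   /* ~33.3 ms per tick */

/* Toy model: if a tick finishes inside its budget the server just waits
 * for the next tick, so the rate stays at the cap; once the work takes
 * longer than the budget, ticks come only as fast as the work allows. */
static double effective_tickrate(double work_ms)
{
    if (work_ms <= TICK_BUDGET_MS)
        return TARGET_TICKRATE;
    return 1000.0 / work_ms;
}

int main(void)
{
    double load;
    for (load = 10.0; load <= 60.0; load += 10.0)
        printf("per-tick work %5.1f ms -> tickrate %5.1f\n",
               load, effective_tickrate(load));
    return 0;
}

A repeatable load test along these lines would ramp the per-tick work (bots, entities, building spam) until the measured rate first dips below 30; that crossover point is the number worth comparing between servers.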
With the above said, here is the server running monash:
CPU: Intel i5 2500K @ 4.9GHz
RAM: 8GB@1600
OS: Win 2008 R2
Servers: 3
Slots: 24/18/18
Tickrate: i
Co-stop 0%, 0~10ms
True, however would you expect as much difference as I am seeing?
I am simply trying to find out if the tickrate is capped (and if not, what it is) when the server is full. Probably should have specified 'when full tickrate'.
Set it to 2 cores or sockets (for a total of 2), at least for testing purposes; there's no reason to give more than two cores to server.exe.
And no, it really shouldn't make that much difference, but it will make some.
And even saying "when full" still means nothing. If you had a 16v16 server with everyone standing around doing nothing, you could hold a 30 tickrate, while an 8v8 with extreme building going on could cause a drop into the 20s or lower.
The tickrate should never really be allowed to drop below 30, except in the rarest of cases. If your tickrate is consistently dropping below 30 (for long periods), then you should just lower the playercount.
edit: And as to whether or not the tickrate is capped, yes it definitely is! It's capped at 30. Any readings over 30 are most likely a bug or a false reading.
Something like this would be indeed nice to have.
I will reduce to 2/1 and see if it makes any difference, although I will be unable to test until later tonight.
My apologies for not clarifying; I meant whether the tickrate stays capped (i.e. is a constant 30, which is the current cap/maximum), not whether such a cap exists. You are correct, it is probably not the best measure of server performance. What would you suggest instead?
Out of curiosity, what slot count would you recommend on my VM, and would you recommend setting up separate VMs for new servers or running several servers on the one VM with affinities set for each process?
Edit: Updated original posts to specify 'tickrate dropoff'
And as a method to measure server performance, all I can offer is the theoretical model of what I believe it should be, but I have no idea how to create such a thing.
If only we had a player who was good at coding for the ns2 engine. We'd also need a player who has some free time on his hands since UWE are now taking over development of the admin system.
That's right, I know you're reading this. Chop chop!
The 2/1 vCPU setup seems to have lower co-stop: Min: 0ms / Avg: 1-2ms / Max: 4ms; however, this is without significant load.
Suggestions for slot count?
The problem with NS2 is that the slot count depends on the CPU clock speed, since the server process spreads the work across just a few threads, with one of them doing almost 90-95% of the work. So 2.3 GHz is a pretty limiting factor here.
QUOTE (endar @ Sep 6 2012, 02:24 PM): If only we had a player who was good at coding for the ns2 engine. We'd also need a player who has some free time on his hands since UWE are now taking over development of the admin system.
That's right, I know you're reading this. Chop chop!
+1
QUOTE: The 2/1 vCPU setup seems to have lower co-stop: Min: 0ms / Avg: 1-2ms / Max: 4ms; however, this is without significant load. Suggestions for slot count?
Don't dedicate it for one NS2 server! Run 60 of them!
In all seriousness, I guess it's hard to answer, since up until this point in NS2 every server has been pushing every advantage possible to squeeze out extra performance, and that includes dropping the hypervisor. But hopefully that should be changing soon. I don't have any experience running game servers in a VM; I guess that's how all the large providers operate, though? I'm going to leave this answer to someone more qualified.
QUOTE: I'm curious how it is that an 8x2.66GHz box with 4GB RAM performs so much better than a VM running 4x2.30GHz and 8GB RAM. Is it the virtualization? The 0.36GHz clock difference? The difference between Server 2008 R2 and Server 2012? Can anyone enlighten me?
Intels are faster clock-for-clock than AMDs. Also, I'm not using VMs at all.
I host a few servers within 1 OS. It hosts a Website, a Minecraft 1.3.2 Server, 3 TF2 MvM Servers and the NS2 Server.
I assign cores to the servers properly. I use the last two cores for NS2, so NS2 isn't sitting on a core shared with another server, or on the primary two cores that are mainly used by the OS. That way the ticks can be higher, though I don't often see them go above 30 ticks. I hope that once the dedicated server software is finalized for 1.0 it uses 66 ticks, because I notice that once I drop below roughly 25 ticks it starts to cause everyone to lag really badly and stutter.
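For anyone who wants to do the same kind of core pinning on Windows, here is a hedged sketch using the Win32 SetProcessAffinityMask API; the PID argument and the 0xC mask (cores 2 and 3 of a quad-core) are illustrative assumptions, not NS2-specific values. The same effect can also be had at launch time with cmd's start /affinity.

/* Hedged sketch, not official NS2 tooling: pin an already-running server
 * process (by PID) to the last two cores of a quad-core machine. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    DWORD pid = (argc > 1) ? (DWORD)atoi(argv[1]) : 0;   /* PID of the server process (assumed passed in) */
    DWORD_PTR mask = 0xC;                                /* binary 1100 -> logical cores 2 and 3 */

    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                              FALSE, pid);
    if (proc == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    if (!SetProcessAffinityMask(proc, mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        CloseHandle(proc);
        return 1;
    }
    printf("Pinned PID %lu to affinity mask 0x%llx\n",
           (unsigned long)pid, (unsigned long long)mask);
    CloseHandle(proc);
    return 0;
}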
RAM: 8GB DDR3
OS: Windows server 2008 R2 (64bit)
It's stable @ 30 ticks. I haven't stress tested it for some time, but a few builds ago it could handle 3 fully loaded servers easily, 8v8 if I remember correctly. Perhaps it will be able to handle 4 servers now. Haven't tried, though.
What would be the cause of that? Is there more to performance than low latency/loss/choke, high tickrate/fps?
I've noticed a very similar symptom with my server if it has not had a restart in a few days. Last night it was happening, 3 days with no restart. A couple of All-In guys came in and said my server was ###### and that everyone should leave (lol, what). After the round was over I told everyone I was doing a quick restart to resolve the problem. After the restart, everyone rejoined and it was running smooth as ever, no more jitters. Since then I have added a cron to restart the server every day early in the morning. I think it needs a quick restart to solve this problem. That's all.
Daily restarts are a good idea for pretty much any type of game server.
As far as I remember, float has an equal amount of precision for 1.0-2.0 as for 2.0-4.0, 4.0-8.0, and so on. As you can see, the gaps between representable numbers get bigger as the numbers grow. If you get to values like 86400 (seconds in a day) and are not careful, you might get time bugs.
I heard some discussion of this at one point, but it's unlikely to be an issue unless your server is up for weeks at a time.
That's not true, but I'd have to search for the blog post that made it explicit. Update: I can't find it, whatever.
For the range 1024-2048 we have 8388608 representable numbers, with a resolution of 0.00012.
For 131072-262144 we have the same amount of numbers, but the precision is only 0.015.
If the server doesn't reset time after a round ends, then it only needs to be running for 1-2 days to get to such low precision. If frame time is calculated this way:
float prev;                                  // game time at the previous frame
float cur;                                   // game time now (grows without bound)
float frame = cur - prev;                    // frame delta; loses precision as cur grows
int last_tick;                               // index of the last processed tick
float ticks_since_last = (cur / 0.33f) - last_tick;
float tick_time = 0.33f * (last_tick + 1);   // time of the next tick
Then the frame value can only be 0, 0.015 (15 milliseconds), or 0.031 (31 milliseconds). Even if the tick count since the last frame is calculated correctly and subtracted, there simply isn't enough precision left to keep track of when the last frame actually was.
Furthermore, lots of things in the game rely on curTime being precise; checks like if (curTime > lastAttackTime) would have significantly worse resolution.
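For anyone who wants to verify those spacing figures, here is a small C sketch (not NS2 code; the sample magnitudes are just illustrative) that prints the gap between adjacent floats at a few values using nextafterf from <math.h>:

#include <stdio.h>
#include <math.h>

/* Print the gap ("ULP") between x and the next representable float above it. */
static void show_gap(float x)
{
    float gap = nextafterf(x, INFINITY) - x;
    printf("around %10.1f the float gap is %.9f\n", x, gap);
}

int main(void)
{
    show_gap(1.5f);       /* ~0.000000119 near the start of a round        */
    show_gap(1500.0f);    /* ~0.000122 -- the 1024-2048 range quoted above */
    show_gap(86400.0f);   /* ~0.0078 after one day of uptime               */
    show_gap(200000.0f);  /* ~0.015625 -- the 131072-262144 range          */
    return 0;
}

Built with a plain C compiler (link with -lm), this confirms that after roughly two days of accumulated game time a 30-tick frame interval (~0.033 s) is only about two representable float steps wide, which is exactly the loss of resolution described above.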