Natural Selection 2 retweeted
alster
Join Date: 2003-08-06 · Member: 19124 · Members
"Natural Selection 2 Retweeted
Charlie Cleveland @Flayra
An @NS2 player named Popcorn made this beautiful video of our game environments. Making me feel nostalgic: …
YouTube @YouTube"
Ah the good old days before healthbars and default alien vision mucking up the visuals.
Comments
In a weird way, NS2 teaches you about life.
Using LuaJIT, or plain Lua, is not uncommon in games, and it has advantages: Lua is very easy to work with and much quicker to develop with. It also comes with a CPU performance cost.
I am not saying the Spark engine is perfect, or even a great engine, but it is not the main cause of the game's performance problems. It is certainly a factor, but not as much as using Lua is.
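For anyone curious what that trade-off actually looks like, here is a minimal, generic sketch of a C host embedding Lua. It uses only the stock Lua C API; it is not how Spark actually wires things up, and the score/addKill bits are invented purely for illustration. The scripting side is trivially easy to change, but every call from the engine into the interpreter pays a boundary and interpretation cost, which is where the CPU hit comes from.

#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void) {
    lua_State *L = luaL_newstate();   /* create an interpreter state */
    luaL_openlibs(L);                 /* load the Lua standard libraries */

    /* Hypothetical "game logic": easy to write and hot-reload... */
    if (luaL_dostring(L, "score = 0\nfunction addKill() score = score + 1 end"))
        fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));

    /* ...but each engine -> script call crosses the C/Lua boundary,
       and that per-call plus interpreter overhead is the CPU cost. */
    lua_getglobal(L, "addKill");
    lua_pcall(L, 0, 0, 0);

    lua_getglobal(L, "score");
    printf("score = %g\n", (double)lua_tonumber(L, -1));

    lua_close(L);
    return 0;
}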
In other news, check out some screenshots I took of NS2 at max settings in 4K. Unrealistic performance, but it looks great.
http://forums.unknownworlds.com/discussion/comment/2262366/#Comment_2262366
Also, someone who goes by the name muffinsAKA on YouTube made this film, which is pretty cool too!
Looks really awesome. I wonder when this will be able to run at a decent framerate on "average" gamer hardware? 2 years? 5? Ready for a re-release then! ;)
You could do it now, depending on the kind of performance you want. I know others have run 4K on 970 SLI with more than playable results. I would want more FPS myself, though. I don't even think the brand-new Nvidia Pascal Titan in SLI would be enough for the performance I want.
Getting the performance I want from a single mid-range card is going to take many years, but that is only because I demand so much.
And I'm guessing it is a bandwidth/server-tick issue with all the info streaming between server and client?
Can't really blame CPU/GPU development; mostly they've become smaller and more power- and per-clock-tick efficient, rather than getting the usual raw GHz++ power-up they tended to get a few generations ago. E.g. Sandy Bridge is only ~10-20% slower than Skylake, which is not that big a step compared to earlier generation gaps. Something about Moore's law and the transistor barrier, I forget.
I want more performance for the same efficiency, but it is not that simple. As the die shrinks, it becomes exponentially more difficult to keep electrons where they are supposed to be. The walls inside a processor are measured in atoms now. There is not much room to move things around.
Basically, it becomes exponentially more expensive to improve performance as process nodes shrink.
I have also heard that game developers are lazy and need to use more cores. That also is not that simple. Most of the tasks a game does cannot be parallelised or spread across multiple cores; it simply is not possible most of the time.
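To put rough numbers on why "just use more cores" has limits, here is a small back-of-the-envelope sketch using Amdahl's law, speedup = 1 / ((1 - p) + p / n). The 70% parallel fraction is a made-up figure for illustration, not anything measured from NS2 or Spark.

#include <stdio.h>

/* Amdahl's law: best-case speedup when a fraction p of the work
   can run in parallel across n cores. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void) {
    double p = 0.70; /* hypothetical: 70% of a frame's work parallelises */
    for (int n = 1; n <= 16; n *= 2)
        printf("%2d cores -> %.2fx speedup\n", n, amdahl(p, n));
    /* Even with p = 0.70, 8 cores give only about a 2.6x speedup,
       and the ceiling as n grows is 1 / (1 - p) ~= 3.3x. */
    return 0;
}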
(just price?)
Elaborate pl0x. (I was already suspecting it had something to do with that)
Armchair physics fellow here, so take what I say with a bag of salt o/
Read up on it then.
From what I gather, it has more to do with how efficiently operations can be done iteratively (hence the push for multi-core) than with physics, really? Now I see why the VISC architecture shows some promise.
Also, @Kouji_San, I meant increasing the transistor count at the same die size, instead of shrinking the die at the same transistor count.
Why not the MILL Architecture?
They claim to have solved the superscalar problem -> execute up to 32 instructions per clock cycle.
And they have Ivan "Gandalf" Godard on the team, so the thing is going to be magic!
OI! I want my first quantum predictive computer, screw light speed and its damn limitations. We need to figure out this predictive quantum computing (and by "we" I mean the insanely brilliant scientists). We need to go even further beyond!
I love how this "beautifying of NS2" evolved into "omg we need more power", just sayin'
The logical step for manufacturers at this time is to optimize their existing architecture, and that's exactly what we are seeing among "next-gen" chips.
If you think about it, this doesn't mean that semiconductor shrinking will stop at 5 nm. We have several proof-of-concept studies using carbon, diamond, glass, etc. as wafer material. Besides allowing denser nodes, they are also good heat conductors, so cooling is less of a factor, hence better power efficiency.
But since it's capitalism and companies have spent fortunes on the current manufacturing lines, they won't start swapping the machines until it becomes economically beneficial. Give it 10 years and we'll have carbon wafers and Moore's law will continue, mark my words! :]
Oh, and quantum computers may sound awesome, but there is only a small set of computing problems they are good at. I don't think we'll see home computers with QPU modules; the more likely setup is data centers leasing QPU time over the cloud... or maybe "small" encryption devices, I think there are some of those on the market already.
As you make the die bigger, the chance that the die contains a defect somewhere on its surface goes up, so yield ends up being inversely related to die size. This makes larger dies much more expensive because you get fewer of them out of the same process.
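A quick back-of-the-envelope sketch of that effect, using the common first-order Poisson yield approximation Y = exp(-area x defect density). The defect density here is invented purely for illustration; real numbers vary by process and are rarely published.

#include <stdio.h>
#include <math.h>

/* Poisson yield model: fraction of dies with zero defects,
   given die area (cm^2) and defect density d0 (defects per cm^2). */
static double yield(double area_cm2, double d0) {
    return exp(-area_cm2 * d0);
}

int main(void) {
    double d0 = 0.2;                         /* hypothetical defect density */
    double areas[] = { 1.0, 2.0, 4.0, 6.0 }; /* die areas in cm^2 */
    for (int i = 0; i < 4; i++)
        printf("%.0f cm^2 die -> %.0f%% defect-free\n",
               areas[i], 100.0 * yield(areas[i], d0));
    /* Yield drops from ~82% at 1 cm^2 to ~30% at 6 cm^2:
       bigger dies waste a lot more of the wafer. */
    return 0;
}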
They already increase density. I was just asking whether to use that density to have more transistors in the same area, as opposed to having a smaller area with the same transistor count. However, I've gotten my answers.
*cough* graphene *cough*
Graphene is made of carbon... but yeah, the lattice structure is the key, so I should've written graphene
Both are well-known process technologies which are currently in use for high-end microwave components.
The Cray-3 had GaAs processors in it, and IBM had SiGe processors at some point.