The End Of Faster Processors
horrible news.
I just got the news recently, but this story came out a while ago. I can't believe I didn't hear about it until now. It is not physically possible for processors to get any faster than they are now. The only way to get faster computers is to do what supercomputers do and put multiple processors into one computer working simultaneously. This would mean that programming will have to be drastically different and a lot more annoying.
Comments
With some new stuff...
that I don't feel like remembering.
Something biological, maybe.
And there are hopes for things like SETs, optical, quantum, etc.
Oh, and wrong forum.
Nah, just kidding.
But seriously, if you attribute this to Intel's decision to cancel their 4 GHz Prescott CPU, then that's just plain ignorant.
Theoretically, it's not that hard to ramp up the actual hertz, because all you need to do is extend the CPU pipeline, but a longer pipeline offsets the advantage of more speed.
Based on our current fabrication process (90 nm), we can't have faster processors without massive heat and power problems. The more speed you have, the more juice it takes and the more heat it puts out. It's just not fiscally sound to keep pushing clocks up, because you'd need water or even phase-change cooling to keep up with the heat. And that doesn't even take voltage problems into consideration.
No, we're still progressing, but faster processors aren't always the best-performing ones. Right now, dual-core designs will be able to substitute for faster processors if more programs are SMP aware.
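To make "SMP aware" concrete, here's a minimal sketch (my own illustration, not from the thread or any particular program) of splitting one job across two POSIX threads so a dual-core chip can work on both halves at once. It also shows the extra bookkeeping the original poster is dreading: you slice the work, spawn the threads, and join them yourself.

```c
/* Minimal SMP-aware example: sum an array using two threads.
 * Build with: gcc -O2 -pthread sum.c -o sum */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

static double data[N];

struct slice { int start, end; double sum; };

/* Each thread sums its own half of the array; no shared writes, no locks. */
static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    s->sum = 0.0;
    for (int i = s->start; i < s->end; i++)
        s->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* One slice per core; on a dual-core CPU these genuinely run in parallel. */
    struct slice a = { 0, N / 2, 0.0 };
    struct slice b = { N / 2, N, 0.0 };
    pthread_t ta, tb;

    pthread_create(&ta, NULL, sum_slice, &a);
    pthread_create(&tb, NULL, sum_slice, &b);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);

    printf("total = %.0f\n", a.sum + b.sum);
    return 0;
}
```

A single-threaded version would just be the one loop; the threaded one needs the slicing, the thread handles and the joins, and a real program also has to worry about data both threads touch. That's the "more annoying" part.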
As long as there is money to be made, processors will get faster. Moore's Law has yet to be broken.
One of the current ideas is to forget about making them smaller and just make them faster. :P
And there are always people saying "things can't go past this point", and then other people laugh and do it. (There was something about not being able to overclock past a certain frequency; someone recently did it.)
etc etc etc.
Don't worry.
By the time we hit the limits of current tech, we'll have new tech to do more nifty things with. :)
I think it was the 6 GHz clock limit. Yes, it was surpassed. :)
The only thing that's changed is that Intel finally gave up on the megahertz myth. Things will still get faster, but the clock speeds are unlikely to go up significantly. Not that it matters. I can double the clock speed of a chip and make it <i>slower</i> by doing screwy things to the pipeline.
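To spell out how that works (standard reasoning, not something from the post): delivered performance is roughly instructions-per-clock times clock rate. Stretching the pipeline lets you raise the clock, but every mispredicted branch now flushes a much longer pipeline, so instructions-per-clock falls; if it falls by more than the clock went up, the chip with the bigger number on the box really is slower.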
I just recently attended a seminar hosted by a professor from Columbia University. The future of processors and such depends on the scale at which they can be created. In other words, the smallest you can get is the atomic level, handling the transfer of individual electrons through carbon nanotubes (no, not nanites, though those fix everything :P).
Conventional transistor technology has been used WAY too long now, and has only just kept getting smaller. If you realize how much really goes into making those 3.8 GHz P4s or those 64-bit AMDs, you start to wonder how in the hell anything can be done to make them better. Without getting all nerdy and technical about it, I'll just leave it at this: technology still has a while to go on the concept of shrinking transistors. 220 million transistors in a 1" by 1" chip that is about 1/16 of an inch thick is pretty good, eh?
Well, I vote for quantum computing (which they have gotten to work!).
Here's <a href='http://www.geek.com/news/geeknews/2003Aug/gee20030827021485.htm' target='_blank'>an article</a>. Now, this isn't the article I read a few weeks ago, but it's close enough.
-- a snippet --
QUOTE: Enter diamond semiconductors. Diamond? Yes, diamond, the hardest substance known to man. Diamond possesses some very useful properties, not the least of which is its superb ability to conduct heat, its high breakdown voltage, and its high carrier mobility. While silicon begins to show severe signs of thermal stress around 100°C, diamond can withstand several times that without ill effects. A chip made of diamond could do with a far less robust cooling mechanism and run at unheard-of frequencies without damage. CPUs could reach temperatures in the hundreds of degrees and continue to function normally.
Supposedly they've created a diamond processor at 84 GHz.
**edit** After a bit of sleuthing I found the original article I read. I enjoyed it very much and I hope you will too: <a href='http://www.wired.com/wired/archive/11.09/diamond.html' target='_blank'>Wired Diamond Age</a>
Some tasks parallelize extremely well, and rendering computer graphics is among them. Graphics cards have many pipelines (16 in the latest generation of high-end cards) and need very little actual control logic: they mostly take a stream of data in, perform some operation on it, and push a stream of data out. There's no need for millions of transistors to figure out what to do while waiting on data it didn't know it needed to fetch from RAM a few clock cycles ago, because that never happens, and there's no need for a ton of cache to hold things you might need later. The pipelines are practically identical units; if you can make one of them, it's not hard to make more. For graphics cards, performance scales practically linearly with the number of pipelines, and the smaller feature sizes become, the more pipelines you can cram in.
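For a feel of why that scales so nicely, here's a toy per-pixel loop (my own sketch, not real GPU code):

```c
/* Toy "pixel shading" pass: every iteration reads its own input and writes
 * its own output, with no dependence on any other iteration. Work like this
 * can be dealt out to however many identical pipelines the hardware has,
 * which is why performance grows almost linearly with pipeline count. */
void shade(float *out, const float *in, int n_pixels)
{
    for (int i = 0; i < n_pixels; i++)
        out[i] = in[i] * 0.5f + 0.25f;   /* same simple operation per pixel */
}
```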
Mechanics ("physics", like the Havok engine) and sound are important elements of games, and these love multiprocessing / more computational units; not much logic is needed to predict branches or keep things busy while fetching from memory. To see how much more power is available in these areas, look at the ClearSpeed coprocessors: a 96-way processor (96 PEs (Processing Elements), each consisting of an integer MAC (Multiply ACcumulate) unit and two 64-bit floating point units) running at around 200 MHz, consuming 5 watts and capable of 50 GFLOPS (<a href='http://www.clearspeed.com/products.php?si' target='_blank'>link</a>). This technology could migrate into gamers' computers eventually; it would not need to be so expensive if produced in the same insane quantities as normal processors. Also look at graphics cards: they are many times faster than your CPU at the same tasks as the above coprocessor. They are still hard to program for such tasks because they are rather graphics-centric (for good reason), but it's not impossible.
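As a rough sanity check on that 50 GFLOPS figure (my own back-of-the-envelope; the per-cycle throughput is an assumption, not something from the ClearSpeed page):

96 PEs × 2 FP units × 1 operation per unit per cycle × ~0.2 GHz ≈ 38 GFLOP/s

and if each unit can retire a multiply-accumulate (counted as two operations) or the clock is a bit above 200 MHz, you land in the quoted 50 GFLOP/s ballpark. Peak figures like this assume every unit is busy every single cycle, which real code rarely manages.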
QUOTE (Rapier7): Theoretically, it's not that hard to ramp up the actual hertz, because all you need to do is extend the CPU pipeline, but a longer pipeline offsets the advantage of more speed.
In practice it is an insanely difficult task to ramp the clock speed any further. As clock speed goes up, so does power usage, and this is a huge practical problem: can we really demand that everyone use Peltier or water cooling and ever larger, more expensive, noisier power supplies? To achieve the clock speeds they did, Intel used self-resetting domino circuits: one circuit drives the next, which drives the next, and so on. They are extremely timing sensitive, i.e. trace-length sensitive. Measuring equipment causes so much interference that readings tend to become nonsensical, so the timing of the circuits had to be done BY HAND, moving millions of transistors one at a time, often WITHOUT accurate measurements, by trial and error. The more actual logic circuitry you have, the worse this problem becomes. Smaller feature sizes give more transistors to play with, and they are invariably put to use (sometimes just as added cache or by duplicating certain structures, in which case this point isn't valid).
QUOTE (DOOManiac): They've been saying this for YEARS now...
As long as there is money to be made, processors will get faster. Moore's Law has yet to be broken.
And that's no coincidence: the clock speed version (which is the one people tend to talk about) has been broken for years. The transistor density version still seems alive and kicking, although at ever increasing manufacturing costs and complexity (=> more bugs).
Originally it was "transistor density doubles every year", then it somehow got changed to performance / clock speed / transistor density (take your pick) doubling every year and a half, and now Intel is saying "every few years".
The P4 2.0 GHz came out around September 2001, and now, slightly more than three years down the road, Intel is at 3.8 GHz and has abandoned Tejas (the successor to the P4, built on the NetBurst architecture, which was slated to scale to 9 GHz in 2005). It took three years to almost double the clock frequency.
AMD released the 1.6 GHz Athlon XP 1900+ around December 2001; now it has ramped up to 2.6 GHz and switched to a more efficient architecture. It took three years to get roughly 60% higher clock speeds.
Performance and transistor counts have scaled a lot faster than clock speed.
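Putting rough numbers on that, using the figures above: Intel went from 2.0 GHz to 3.8 GHz, a factor of 3.8 / 2.0 = 1.9 in about three years, where doubling every 18 months would have predicted roughly 4x; AMD went from 1.6 GHz to 2.6 GHz, a factor of 2.6 / 1.6 ≈ 1.6, over the same stretch. Either way, well short of the old clock speed curve.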
Why is the clock-speed-doubling version of Moore's law important, you might ask? Well, until recently it was a free lunch: you "just" shrink the process and you get higher clock speeds and more transistors. Now you're not getting higher clock speeds anymore, because it's becoming less than practical to have ever bigger power supplies, ever bigger heat sinks and so on. What we are relying on now for extra performance is putting the extra transistors to good use: on-die memory controllers, more cache, 64 bits, talk of dual-core processors for home computing, and so on.
It looks like the transistor version might be threatened by thermal noise (but Intel reckons this is a decade or so off). Leakage current is also threatening the transistor version of Moore's law; we might be able to stave that off with SOI, strained silicon and such.
lol u 2? we both msut haev teh fastes prosesors 4evar lol b/c w/e i can have 2 programs runing @ teh saem tiems.
The current applications and OSes are being built for the CURRENT generation of CPUs, from the 286 up to the P4, I mean. Same with AMD. So what's the problem?
Backwards compatibility, same as with XP. XP is made to be partially compatible with old Win98 applications. Same with CPUs: programs are written for this generation, so in order to keep using your pwning Norton 2004 Antivirus on the new uber cool CPU, they have to build in backwards compatibility. Yes people, if you got this far... BITS. 64-bit CPU, 32-bit CPU. This is bad.
Why?
Say you've got a 32-bit application, or worse, a 32-bit OS, and a 64-bit CPU. Your CPU has to work SLOWER to communicate with that OS. This means skipping clock cycles, etc. A lot of tech stuff. Point is, imagine a 128-bit CPU running a 32-bit OS and applications. OUCH. That's a lot of skipping; it gets slower because A, it has to skip, and B, it has to modulate the info, and probably other things I forgot.
So what does this mean for us? If we keep going down this route, we keep making hardware and software compatible with each other, all the way back through the OSes. Imagine a P6 supporting Win98. You could say we won't have Win98 by then, but remember, Windows HAS backwards compatibility, so Windows is STILL working on the old Win98 principles. OK, since XP this isn't entirely true, but partly it still is.
There are ways of making CPUs WAY faster, but the point is, you can NOT run 286-era software on such a CPU, simply because the software was never written to deal with that processor.
So the real question is not "can we make CPUs faster?"; the real question is "can we make them faster while STILL supporting backwards compatibility?" THIS will run out a lot sooner than the physical limits, simply because in time it will become too much.
Because skipping clock cycles is not perfect; usually you have to skip more than needed (another long story) in order to keep a correct speed. So over time you are skipping a hell of a lot, and with anything you emulate, you risk more and more incorrect calculations.
In short, the best thing we can do is dump the entire current CPU generation, rebuild from the ground up on newly discovered techniques, and make OSes and applications for THAT CPU. Some companies already do this (usually for mainframes), since those have to be totally customized anyway.
I will try to dig up links to this info; since it's been a while since I read it, I might be a tad off on some things.
I just googled for a few hours and I can't find it anymore. I get tons of links to articles about the great compatibility of the current line of CPUs, damn it. That's not what I need. (I read this about a year ago and came across it by accident while digging up some other info about CPUs.)
If anyone ever finds an article about what I'm talking about, please let me know. :)
Anyway, the point is: they've got to restart, rebuild CPUs, rebuild OSes, rebuild applications, and NOT build in backwards compatibility.
What happens when they do build it? Let's think this through:
They make a whole new CPU, totally new, with 0% compatibility with anything currently out, so no existing OS or software can run on it (good). Then they add some weird mechanism to make it run old software, which isn't needed. Sure, if you run new software built for the CPU you don't need it, but if you run software that needs this backdoor you are SERIOUSLY lagging your CPU.
Man, I wish I could find that article. It took me two days the first time. :(
QUOTE: They make a whole new CPU, totally new, with 0% compatibility with anything currently out... if you run software that needs this backdoor you are SERIOUSLY lagging your CPU.
Cow droppings.
Rebuild software from scratch because the architecture has changed? What the hell for? All you have to do is write compilers for the new architecture. And get people to write standards-conforming, portable code. Forget making the architecture backwards compatible; have the compiler do the work, not the CPU (thanks a lot, Intel. CISC was a <i>great</i> idea, wasn't it? :/)
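A tiny illustration of the "let the compiler do the work" point (my own example, not something from the post): source written against the C standard, with no hidden assumptions about word size, just gets recompiled for whatever architecture comes next, with no backwards-compatibility hardware involved.

```c
/* Portable, standards-conforming C: nothing here depends on the target's
 * word size. Recompiling with the new architecture's compiler is all the
 * "porting" that's needed. */
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    const char msg[] = "hello";
    uint32_t checksum = 0;                        /* exactly 32 bits on every target */

    for (size_t i = 0; i + 1 < sizeof msg; i++)   /* skip the trailing '\0' */
        checksum += (uint8_t)msg[i];

    printf("checksum = %" PRIu32 "\n", checksum); /* prints 532 on any target */
    return 0;
}
```

The same source builds as 32-bit x86 today and as whatever comes next tomorrow; it's code that pokes at architecture-specific details that has to be rewritten.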
...
As noted before, this is nothing new. It's probably nothing to worry about either, firstly because computers can get faster by reducing bottlenecks: there's no point in your CPU being incredibly fast if your memory or hard drive can't keep up.
And then there are quantum computers, which are a different architecture altogether.
Anyway, I wouldn't worry about it. There's more than one way to make software run fast, and improving the CPU is only one of them.
Also, Windows 98 can really help out for running old games that XP doesn't want to run.
A Windows 98 system on a modern CPU can usually run Windows 9x versions of old games excellently. Even XP is pretty decent, although there are some Windows 9x games that XP may not like. Considering many such games go back to 1996 (when the P1 r0x0red), I'd say that's pretty damn good for x86 compatibility. :)
OK, my personal theory is that plenty of software is far too unoptimized. Hardware increases in power so quickly that there's no real need to heavily optimize code.
Also, quantum computing will be revolutionary, and it'll be insanely fast... at certain tasks that can take advantage of the quantum nature of particles! If you actually look into the theories and algorithms developed so far for quantum computing, you'll find that not much is complete despite years and years of work. Quantum computing for the general public is still a good 30-50 years away.
B. I never said they might not find a solution to somehow emulate it and run it anyway. What I mean is: if they do "fix" it in software (like compilers), they ARE losing speed, period. And I know it's not just the CPU; I have a few degrees in system administration, so I know what it's all about.
You say let the compiler do the work; I say let the developers do the work: recode, start from scratch, and do it right.
I can only assume the rewriting stuff is based on the debacle that was Itanium. My real problem with current 64-bit systems is finding drivers that support a 64-bit OS other than Win64 (yeah, that's Fedora).
The reason I support backwards compatibility for my hardware on this one is that the 64-bit market is far too small to have all the things I want, particularly if I don't want to pay enterprise-sized prices for software.
Further, I find that a large number of 32-bit programs are still very useful. Examples include things like Steam, or Zoner's Half-Life tools. These are things I can't reasonably expect to be upgraded, particularly when 64-bit machines are such a small part of the computer market. Thus, x86_64 for the win. :)
Intel realized, just like AMD did a while ago, that it's just not effective to keep pumping up the clock frequency. Instead, Intel will work on making their chips run more efficiently with new technology.
Then again, if AMD and Intel weren't such **** to silicon, we could have 10 GHz diamond-based processors at a decent price, since the synthetic diamond makers have been seriously increasing their capabilities.
So you are unlikely to see processors jumping past 4-5 GHz in the near future; however, you will see faster processors.
EDIT:
Oh, in the future you will see multicore processors (two or more cores on a single processor) and new types of processor extensions (like SSE1/2/3). I'd rather not see them use carbon nanotubes (I want my space elevator first!). I did read a paper on being able to use standard copper/silicon in quantum computing by measuring the spin of an electron, but quantum computing is a LOOOOOOOONG way off.
QUOTE: "The bottom line is we will deliver 8 times the computing power using less than one tenth of the electricity."
Nuff said.
Not a good solution. It would cost AMD and Intel many billions of dollars to get diamond fabs online and years of effort to get all the problems solved. But for what?
Do regular users want performance at the cost of insane power usage? The attractiveness of diamond is that it can stand up to almost incandescent temperatures without much trouble while silicon cannot, so you can run it at very high frequencies (~200 GHz) without worrying too much (though won't thermal noise quickly become a problem?) and at extreme power densities (~30 W per square millimetre(!)). A normal-sized processor (~100 mm^2) would then use 3 kW, trip your fuse at home if you turned on two of them at once, need monstrous cooling and power supplies, and double as an efficient space heater; that is something you can never do with silicon. If you have several kW to throw away, it's not a problem as long as it works. For regular users it IS a huge problem. Diamond processors are not a solution to the power problem for regular users, and especially not for laptops where power is even more limited.
What do we need more computing resources for? Games? Current processors are horribly suited to games; we want something like a coprocessor for raw computation. We can make a 5 W processor (such as the ClearSpeed one linked in my previous post) that, at a lot of the tasks we care about in games, could perform as well as that 200 GHz diamond processor would if it were designed like a P4.
I can't see any benefit for me in diamond processors instead of silicon.