What Direction Should Computers Be Heading?
Hawkeye
Join Date: 2002-10-31 Member: 1855Members
in Discussions
Right now, the main concern is getting as much RAM and as fast a CPU as possible. Most people don't seem to realize that the big bottleneck in computer speed is memory.
To put this into perspective...
A processor can execute instructions in single cycles.
Local cache can return data in tens of cycles.
RAM can return data in hundreds of cycles,
and the big whopper... the hard disk can return data in hundreds of thousands of cycles.
Yes, you get the picture. Going to the hard disk is bad! The computer industry's struggle is avoiding the hard disk to save time. However, I ask you this: why go to hard disk storage at all? Cheap SRAM is getting to the point where it is almost affordable enough to replace the hard disk. Why not start making SRAM replace the hard drive? Granted, 4 GB of memory would cost you 600 dollars, more or less. It isn't cheap. However, with enough demand, prices fall much faster. I think this is where computers should be heading: eliminate the big memory bottleneck entirely. Good GOD would this speed things up.
Supposing 0.9 of every second is spent waiting for results from the hard disk, we're talking about a 900% speedup without the hard disk. I understand the processor does other things in the meanwhile, but aside from routine system programs, sometimes there is nothing to do but wait for the main program to get data from the hard drive. If we're talking about computer games, this would be THE speed-up, better than anything a video card could deliver.
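The 900% figure is just Amdahl's law with the waiting fraction eliminated. A minimal sketch, using the hypothetical 0.9 waiting fraction from above (the numbers are illustrative, not measurements):

```python
# Back-of-envelope Amdahl's-law check of the claim above.
def speedup(waiting_fraction: float) -> float:
    """Overall speedup if the waiting fraction is eliminated entirely."""
    return 1.0 / (1.0 - waiting_fraction)

s = speedup(0.9)
print(f"{s:.0f}x overall, i.e. a {(s - 1) * 100:.0f}% speedup")  # 10x, 900%
```

The flip side of the same formula: if only half the time is spent waiting, removing the disk entirely buys you just 2x.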
What do you guys think? If you think there is a better direction, please post it!
Comments
For example: my current system runs on 256 MB of RAM (which I know is far too little). While Morrowind and Tribunal run smoothly, I can't travel to Solstheim (the island in the Bloodmoon expansion), as the multitude of trees causes so much slowdown as to make the game unplayable. I'm talking half-minute lags while the game just reads from the hard drive non-stop.
As for where I think computers should be going?
<a href='http://www.wired.com/wired/archive/11.09/diamond.html' target='_blank'>Diamonds!</a> [wired.com] for the CPU, and <a href='http://story.news.yahoo.com/news?tmpl=story&cid=562&e=3&u=/ap/future_windows' target='_blank'>Windows</a> [yahoo news] for the interface.
Virtual memory simply lets what's already in RAM stay put while newly needed memory spills over into hard drive space. In other words, it postpones wholesale swapping: even though accessing a page from the hard drive takes a long time, in the short term it beats swapping entire programs out. It's kind of a stupid idea, actually.
There's this persistent access-time-versus-cost gap between RAM and hard drives. However, the price of RAM keeps falling, so just ditch the hard drive. That's my idea, anyway.
What have you guys heard about quantum computing? I hear they are really making leaps in that area. There isn't a computer that can crack 128-bit encryption within reasonable time (less than a year). Quantum computing is supposed to melt all that away. Performance scales geometrically, not arithmetically.
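A quick sanity check on the 128-bit claim. Classical brute force must try up to 2^128 keys; a quantum search (Grover's algorithm) cuts that to roughly 2^64 steps, a square root rather than an outright collapse. The guesses-per-second figure below is an assumed, generous number, purely for scale:

```python
# Why brute-forcing a 128-bit key is hopeless classically, and what a
# quantum (Grover) search changes. guesses_per_second is an assumption.
keyspace = 2 ** 128
guesses_per_second = 10 ** 12          # assumed: a trillion keys/s
seconds_per_year = 60 * 60 * 24 * 365

years_classical = keyspace / guesses_per_second / seconds_per_year
years_grover = (keyspace ** 0.5) / guesses_per_second / seconds_per_year

print(f"classical brute force: ~{years_classical:.1e} years")
print(f"Grover search (~2^64 steps): ~{years_grover:.2f} years")
```

So even a quantum search only brings 128-bit brute force down to 64-bit difficulty; the bigger cryptographic worry (Shor's algorithm) applies to public-key schemes, not symmetric ciphers.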
They are going to make biological processors, a living computer, yes. I think the article was from the magazine Science.
So we'll probably be catching real viruses soon, and maybe you'll need a PhD in biology as a prerequisite for computer engineering? :P
My teacher also said to me back then that it was only a matter of time before HDs are replaced by big chunks of RAM.
I am not that literate in computers, I just know basic stuff, so I'll stop here; I just wanted to bring in the biology aspect of where we are heading :)
My favourite theory on quantum computing is that the minute we make one, it'll tear a hole in space, creating a black hole that wipes out the Earth and half the solar system, lol. Reminds me of the people who worried that atomic bombs would ignite the atmosphere, turning our world into a giant ball of fire XD
The reason we don't use SRAM out the wazoo is that there is not much room on a standard chip die to fit significantly more memory. Most of the space is spent stuffing other things like MMX/SSE/SSE2 onto the standard x86/x87 ISA. Thus, any extra SRAM would have to be off-die (like the L3 caches on a lot of server-grade chips, e.g. Itanium), and the trace length to any significant amount of SRAM would be the determining factor. SRAM isn't *that* cheap, either: 128 megs of it will still run you about 300 bucks, IIRC. Then you have to change your ISA implementation to accommodate a third tier of caching, which really isn't all that much faster than RAM. Basically, you have to do a hack job of it, it's ugly, and there's not much more advantage to it than adding more [cheap] DRAM.
So... basically you get an Athlon 64, get many gigs of RAM for it, then mount a RAM drive for all your critical system apps, and voila. I don't think hard drives will ever be completely phased out, since many people like having hard copies and backups.
*edit* What I think the computer industry should head for in the next 10 years...hmm.
1) Phasing out the x86 ISA in hardware, but retain hardware emulation of it for compatibility.
2) Optic circuitry instead of wire traces. That would be badass, and there would be no complaints about electromigration (I'm not sure this would actually be viable in anything less than a mainframe).
QUOTE (Ace's Hardware interview):
Brian Neal [Ace's Hardware]: Yeah, that was in the chalk talk the other week [(press teleconference on Throughput Computing)].
Dr. Marc Tremblay: Well, the main idea is that while running large commercial programs we find the processors sometimes waiting 70% of the time, so being idle 70% of the time.
Chris Rijk [Ace's Hardware]: Stalled waiting for data, basically.
Dr. Marc Tremblay: Yes. In large multiprocessor servers, waiting for data can easily take 500 cycles, especially on multi-GHz processors. So there are two strategies to tolerate this latency and still do useful work. One of them is switching threads: go to another thread and try to do useful work, and when that thread stalls waiting for memory, switch to another thread, and so on. That's the traditional vertical multithreading approach.
The other approach, if you truly want to speed up that one thread and achieve high single-thread performance, is that under the hood we actually launch a hardware thread that, while the processor is stalled and therefore not doing any useful work, keeps going down the code as fast as it can and tries to prefetch along the program path. Along the predicted execution path it will prefetch all the interesting data, and by going along the predicted path it will prefetch all the interesting instructions as well.
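The win from that "hardware scout" idea can be modeled with back-of-envelope numbers. The 500-cycle miss latency comes from the quote; the access count, miss rate, and the assumption of a perfect scout are mine, purely for illustration:

```python
# Toy model: a stalled main thread vs. an ideal run-ahead scout thread.
MISS_LATENCY = 500  # cycles per memory miss (figure from the quote)
HIT_COST = 1        # cycles per cache hit (assumed)
accesses = 100
miss_rate = 0.10    # assumed

misses = int(accesses * miss_rate)
hits = accesses - misses

# Without scouting: every miss stalls the pipeline for the full latency.
cycles_baseline = hits * HIT_COST + misses * MISS_LATENCY

# With a perfect scout: the misses overlap, so the main thread pays the
# 500-cycle latency once and the remaining misses land as cache hits.
cycles_scouted = hits * HIT_COST + MISS_LATENCY + (misses - 1) * HIT_COST

print(f"baseline: {cycles_baseline} cycles, with scout: {cycles_scouted}")
```

Even this crude model shows why the approach is attractive: serialized misses dominate the baseline, and overlapping them reclaims almost all of that time.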
As for molecular computing...I don't think we're anywhere near surmounting the challenges facing us if we decide to try it. And even if we do succeed within the next 10 years, it'll be another decade or so at least before it'll be of any use to us. Keep in mind that traditional, established tech will continue advancing while research is being put into this.
Artificial intelligence is a funny thing. The military had been trying to make a program that could identify tanks for the longest time. There was a missile they wanted to carry small anti-tank bombs it could drop along the way to the main target. The trick was identifying the tanks. They had the camera; all they had to do was make a program that could find the tanks. A good officer can pick out a tank 70% of the time. The program didn't have any trouble identifying tanks out in the open with no cover. However, the enemy isn't stupid: they keep tanks in cities, and many of them are under cover to avoid getting hit with bombs.
In this case, the program would fail miserably. While an officer could find them 70% of the time, this program succeeded maybe 10% of the time. In the early 1990s, they did some research in AI. They took this program and generated millions of variations of it (completely at random) and showed pictures of tanks to each one. The most successful variants were again varied millions of times over. This was repeated until a program emerged that could spot covered tanks in cities, and not just spot them, but 70% of the time (same as a human being).
Gotta make you wonder, eh? The real tripper is that they don't know what the code looks like.
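The "generate variations, keep the best, repeat" loop being described is a genetic algorithm. A minimal sketch of that loop on a toy problem (evolving a bit string toward a fixed target, standing in for the far harder image-recognition task; all parameters are made up):

```python
import random

# Minimal genetic-algorithm sketch: mutate, select the fittest, repeat.
random.seed(0)
TARGET = [1] * 20  # stand-in for "correctly classifies all test images"

def fitness(genome):
    """How many positions match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Randomly flip bits, like the 'random variations' in the story."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the 5 fittest; refill the population with mutated copies.
    best = population[:5]
    population = best + [mutate(random.choice(best)) for _ in range(25)]

print(f"best fitness: {fitness(population[0])}/{len(TARGET)}")
```

The "they don't know what the code looks like" point survives even in this toy: nothing in the loop explains *why* the winning genome works, it just scores it.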
Perhaps you're talking about neural networks and training them to identify visual images? One of my friends programmed a basic image-recognition neural network for the state science fair (only got 2nd place, though... should have gotten first, IMHO). Very smart guy; I looked at a little of the code and it was pretty awesome.
EDIT: In some cases they "generate code" (read: change behavior) only in that they perform prewritten functions in different orders or not at all.
...If I'm allowed to speculate wildly...
I think we will see some form of solid-state memory that has no need to be continually refreshed replace hard drives and SDRAM, like MRAM and/or a cheapo variety of flash memory for hard drives.
MRAM has the potential to be as fast as SRAM, non-volatile, and as dense as SDRAM. There will of course be a big latency penalty for having the RAM located far from the processor and for having RAM that is not soldered to the motherboard. I therefore think we will see processors with a big pile of MRAM onboard, as well as even more MRAM soldered directly to the motherboard with BGA packaging or better (...if IBM/Infineon manage to make MRAM work properly).
I also think graphics cards will develop in the direction of general-purpose processors, now that PCI Express will allow better communication back from the graphics card and graphics cards are getting more programmability (i.e. things like pixel shaders of increasing length and precision, and memory reads that return the actual value stored at an address without performing bilinear filtering or other graphics-related operations). Graphics cards are incredibly quick at processing streams of data without conditional branching or other such problems.
If physics engines for games are reducible to a form with no conditional branching, then I can easily see graphics cards taking over the role of doing physical simulations for games as well, as a DirectPhysics component of DirectX 11 or something. :D
And I also think sound cards will eventually deal massively with geometry and material information when generating sound, so that if you make an empty concrete room in a game, it will actually have reverb automatically as long as the material and geometry are given to the sound card, and the reverb would disappear if you pushed a couch and some furniture into the room.
Long-term: true voice-recognition and semi-comprehension
What I think we need is bio-computers or artificial intelligence. (An SS2 situation still scares meh!)
I don't personally know the mechanics of how you would do it (ENLIGHTEN ME PLZ!!!), but the ability to learn would allow it to do some pretty insane stuff. Granted, you would have to "raise it" like a child, but eventually they could do complex tasks humans do today. A basic one could do something such as drive a tank or deliver a pizza.
A.I. could help us enormously with our problems. They could provide us with leaps in technology (imagine having something able to do a prediction or experiment at a rate 100x faster than a human researcher).
But away from the theoretical and on to hardware and software.
We need, in my opinion, some way to make HDs faster (already mentioned; here's my take).
What if we packed millions of tiny, tiny reader heads onto the same disc? Then you have them travel in a line, one after another. They read the requested data and the data ahead of it along a program's predicted path (i.e. what operations need what data), and then each head zips ahead to its next assigned position and waits. If the path a head is assigned to isn't used by the program, it resets and awaits another assignment.
I don't know, as I'm still in HS and our school won't let me work on computers (i.e. take 'em apart, look at them, then fix them).
I do know my basics, though, from fixing my own computer; I'm just not at the system-architecture point yet.
The point of the hard disk should be, and will be I think, large-quantity storage: 4 GB of RAM for anything that needs processing, and a terabyte of hard disk for storing data.
Face it, by the time 4 GB RAM systems become mainstream, hard drives will be well towards the terabyte. RAM is faster, but it will always lag behind in quantity. Having both is the best solution, and it's the best solution already. A program that is forced to resort to the hard drive for short-term storage is slow, very slow.
The processor tries all it can to avoid going to the hard disk. When a program absolutely has to go to the hard drive, it issues the access, and the operating system runs other programs in the meanwhile until the data is loaded.
The RAM addressing is not why I think it's necessary - the more efficient instruction set (better stability, better floating-point support, scalability) will allow things like the long-term direction I mentioned. Not that the RAM addressing is a small issue: you sound like a certain someone that once said 640K is enough for anyone...
It probably would have been, too, but then the gamers got ahold of it. All of a sudden text adventures weren't good enough anymore because they required you to imagine... it kinda went downhill from there.
Anyways, a summary of my thoughts, MonsE style:
What I would like:
---Long term: A desktop architecture based around a processor that is optimized for object-oriented code, with a switched backbone instead of a bus, ditching the BIOS in favor of open firmware, and that is scalable "to the moon," as my networking prof would say. Also, an add-on hardware interface type with cards that contain their own drivers for said firmware, which can be installed and then used by the OS (via the firmware) to drive hardware without loading specialized (admittedly better), OS-dependent drivers... which can of course be installed later. And of course, object-based OSes.
--Short term: Just one OS, preferably open source, that I can really like. Just one.
What will probably happen:
--Long term: Gaming will push hardware to such extremes that things like the Matrix won't be such sci-fi anymore. People (some people) will live out all of their non-working lives in giant MMORPGs. The cheaters will still be there, only now they can probably kill you for real. Computers will not get much more secure or stable than they have ever been. CISC processors will still dominate; people will still believe that higher clock rate == better.
--Short term: DRM will be welcomed by the ignorant masses. Spammers will start getting murdered. HL2 will become the next DNF. Non-MS OSes will finally gain a decent (say, 20%-ish) share of the home market. And there won't be a single OS I can really like. Not even one.
BTW, concerning memory, why not just meld processor and memory into a single integrated system? Just a thought.
==-===-=----==__+==========================================
While we're at it, we can break rectangular objects that run on top of the shell.
-- An angry linux midget.
Processors already have their own memory, small amounts of it anyway. The problem is basically an engineering one: what do you do when some of the RAM goes bad? You can't replace it, so you buy a new processor? Ouch...
Good stuff here. Have you seen this? <a href='http://www.neoseeker.com/news/story/2339/' target='_blank'>http://www.neoseeker.com/news/story/2339/</a>
If a Matrix-esque system world came into being, imagine the fun hackers could have...
Er... I would have been someone who said "640K should be enough for anyone... for the time being" (which is what I was trying to say in my post).
Obviously we're eventually going to surpass the 4 GB addressing limit imposed by a 32-bit ISA, but I'm just saying it probably won't be for a few years.
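The 4 GB figure falls straight out of the pointer width: a 32-bit address can name 2^32 bytes, a 64-bit address 2^64. A quick check:

```python
# Addressable memory as a function of pointer width.
GIB = 2 ** 30

print(f"32-bit: {2 ** 32 // GIB} GiB addressable")    # 4 GiB
print(f"64-bit: {2 ** 64 // GIB:,} GiB addressable")  # ~17 billion GiB
```

(In practice extensions like PAE let 32-bit systems address more physical RAM than 4 GB, but each process still sees a 32-bit virtual address space.)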
Here, go <a href='http://h18004.www1.hp.com/products/servers/proliantdl760/index.html' target='_blank'>buy one</a> yourself. :)
Server farms have been running 64-bit for a while now.
*edit* btw, do windows server-class OSes support the listed amount of RAM per processor, or total?
No, but now that I've read it, I don't really like where they're going with it... DRM? Using special partitions on the disk (when a small flash drive would easily be enough for what I envision)? Unnecessary graphics? No thanks. I was envisioning something more like <a href='http://www.openfirmware.com/' target='_blank'>OpenFirmware</a>.
When the early computers were made, they could only differentiate between two states: on or off. That was the basis of binary. So what if you redesigned the computer with more than two states in mind? It would do wonders for memory access, considering how much more information can be passed over wires when more than two states are allowed. We can differentiate between several states now. Why don't we try it?
Immediate advantages that I can think of.. <ul>
<li>bus bottleneck dramatically reduced
<li>much better page swapping efficiency
<li>a generally better way to transmit data
<li>much greater instruction set load
<li>huge computations made every clock cycle with ALU
</ul>
Disadvantages<ul>
<li>much worse hardware complexity
</ul>
Anybody have anything to add to either list?
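One thing worth adding to the advantages list is how the density gain actually scales: each symbol on a wire carries log2(n) bits when n levels are distinguishable, so going multi-level gives diminishing (logarithmic) returns, which connects directly to the "hardware complexity" disadvantage. A quick sketch:

```python
import math

# Bits carried per wire symbol as the number of distinguishable
# voltage levels grows. Binary (2 levels) carries 1 bit per symbol.
for levels in (2, 4, 8, 16):
    bits = math.log2(levels)
    print(f"{levels:>2} levels -> {bits:.0f} bits per symbol")
```

So quadrupling the number of levels you must reliably distinguish only doubles the data per symbol, which is why multi-level signaling tends to appear where density matters most (this is essentially what multi-level flash cells do).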
QUOTE: "Server farms have been running 64-bit for a while now."
I saw the title of 'computers' and figured everything was fair game, hence why I originally said 'widespread, mainstream' 64-bit computing. Even in the corporate server world, 64-bit Intel/AMD computing is extremely rare. DEC is dead, my friend. :P
QUOTE: "*edit* btw, do windows server-class OSes support the listed amount of RAM per processor, or total?"
Total.
QUOTE: "I was envisioning something more like OpenFirmware."
From that link: "The IEEE-1275 Open Firmware standard was not reaffirmed by the OFWG and has been officially withdrawn by IEEE. Unfortunately, this means it is unavailable from the IEEE."
You should really find a new firmware cause. Without IEEE backing, it's effectively dead on the table, even by open source standards :P . I keeeed, I keeeed!!!