What Direction Should Computers Be Heading?

Hawkeye, Join Date: 2002-10-31, Member: 1855 (Members)
Right now, the main concern is getting as much RAM and the fastest CPU as possible. Most people don't seem to realize that the big bottleneck of computer speed is the memory.

To put this into perspective...

A computer can execute an instruction in single cycles.

Local cache can return data in tens of cycles.

RAM can return data in hundreds of cycles,

and the big whopper: hard disk access takes hundreds of thousands of cycles.

Yes, you get the picture. Going to the hard disk is bad! The computer industry's struggle is to avoid going to the hard disk to save time. But I ask you this: why go to the hard disk at all? Cheap SRAM is getting to the point where it is almost affordable enough to replace the hard disk. Why not start making SRAM replace the hard drive? Granted, 4 GB of memory would cost you 600 dollars, more or less. It isn't cheap. However, with supply and demand, these prices go down much faster. I think this is where computers should be heading. Eliminate the big bottleneck of memory entirely. Good GOD would this speed things up.
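A back-of-the-envelope sketch of why that last number matters so much (Python; the cycle counts and hit rates are illustrative assumptions, not measurements): even a tiny fraction of accesses falling through to the hard disk dominates the average access time.

```python
# Average memory access time (AMAT) for a cache -> RAM -> disk
# hierarchy, using cycle counts like the ones above. All numbers
# here are illustrative assumptions, not measurements.
CACHE_HIT = 10         # cycles: local cache
RAM_ACCESS = 100       # cycles: main memory
DISK_ACCESS = 100_000  # cycles: hard disk

def amat(cache_hit_rate, ram_hit_rate):
    cache_miss = 1.0 - cache_hit_rate
    ram_miss = 1.0 - ram_hit_rate
    return CACHE_HIT + cache_miss * (RAM_ACCESS + ram_miss * DISK_ACCESS)

print(amat(0.95, 1.000))  # no disk traffic:   15 cycles on average
print(amat(0.95, 0.999))  # 0.1% goes to disk: 20 cycles
print(amat(0.95, 0.990))  # 1% goes to disk:   65 cycles
```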

Supposing you spend 0.9 of every second waiting for results from the hard disk, we're talking about a 900% speedup without the hard disk. I understand the processor does other things in the meanwhile, but aside from routine system programs, sometimes there is nothing to do but wait for the main program to get memory from the hard drive. If we're talking about computer games, this would be THE speed-up, better than anything a video card could do.
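That 900% figure checks out as plain Amdahl's-law arithmetic (a minimal sketch; the 0.9 waiting fraction is the post's own assumption):

```python
# If a fraction p of wall time is spent waiting on the disk, removing
# that wait entirely gives a 1 / (1 - p) speedup (Amdahl's law with
# the disk portion sped up infinitely). p = 0.9 is the assumption above.
def speedup(p):
    return 1.0 / (1.0 - p)

s = speedup(0.9)
print(f"{s:.0f}x overall, i.e. {100 * (s - 1):.0f}% faster")  # 10x, 900%
```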

What do you guys think? If you think there is a better direction, please post it!

Comments

  • lolfighter (Snark, Dire), Join Date: 2003-04-20, Member: 15693 (Members)
    "Virtual memory" as it was/is called on the Mac (I'm not up to speed on the Mac world) has always been an atrocity in my eyes. Virtual memory is what the Mac calls it when you use parts of the harddisk to store contents of the system memory, like the Windows swapfile. You substitute the rather sluggish hard drive for real, fast memory? Where is the logic in that? I hate, hate, HATE it. It's time to upgrade my memory and completely get rid of that blasted swapfile.
    For example: my current system runs on 256 MB of RAM (which I know is far too little). While Morrowind and Tribunal run smoothly, I can't travel to Solstheim (the island in the Bloodmoon expansion), as the multitude of trees causes so much slowdown as to make the game unplayable. I'm talking half-minute lags while the game just reads from the hard drive non-stop.
  • moultano (Creator of ns_shiva), Join Date: 2002-12-14, Member: 10806 (Members, NS1 Playtester, Contributor, Constellation, NS2 Playtester, Squad Five Blue, Reinforced - Shadow, WC 2013 - Gold, NS2 Community Developer, Pistachionauts)
    edited January 2004
    Games also contain multiple threads, so outside of extreme circumstances, you aren't going to notice most of the paging out to the hard disk that goes on. The OS will just switch threads for the time it takes. Generally I think most games are designed so that the game data fits entirely inside the physical RAM of the systems they are designed to run on. lolfighter, what you are experiencing there is known as thrashing. It happens when two pieces of data are needed for the same calculation but keep evicting each other, either because of bad design in the program or simply because you don't have enough space to hold all of the necessary data at the same time. In short, get more RAM.


    As for where I think computers should be going?
    Diamonds! (http://www.wired.com/wired/archive/11.09/diamond.html) for the CPU, and Windows (http://story.news.yahoo.com/news?tmpl=story&cid=562&e=3&u=/ap/future_windows) for the interface.
  • Hawkeye, Join Date: 2002-10-31, Member: 1855 (Members)
    I think the point of virtual memory was that when you would otherwise simply run out of RAM, you would have to swap pieces of RAM out to the hard disk so that you have more 'RAM' available. This is devastatingly taxing for the computer to do. It is on the order of seconds (and if we're talking about a cycle being a fraction of a nanosecond, that's an incredibly long time).

    Virtual memory simply allows the current RAM contents to stay put, and the newly needed memory to run off into hard drive space. In other words, it limits swapping; even if accessing memory from virtual memory takes a long time, it is better in the short term not to swap everything. It's kind of a stupid idea, actually.
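    For what it's worth, here is a toy sketch of the demand-paging behavior being argued about: a fixed number of RAM frames with least-recently-used eviction, which is one common policy (the frame count and access pattern are made up for illustration).

    ```python
    from collections import OrderedDict

    # Toy demand paging: a fixed number of RAM frames; on overflow the
    # least-recently-used page is evicted (swapped out to disk).
    class ToyPager:
        def __init__(self, frames):
            self.frames = frames
            self.resident = OrderedDict()  # pages in RAM, in LRU order
            self.faults = 0

        def touch(self, page):
            if page in self.resident:
                self.resident.move_to_end(page)        # recently used
            else:
                self.faults += 1                       # page fault: disk trip
                if len(self.resident) >= self.frames:
                    self.resident.popitem(last=False)  # evict the LRU page
                self.resident[page] = None

    pager = ToyPager(frames=3)
    for page in [1, 2, 3, 1, 4, 1, 2, 5]:  # working set bigger than RAM
        pager.touch(page)
    print(pager.faults)  # 6 faults; each one is a trip to the slow disk
    ```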

    There's this continuing access-time-versus-cost gap between RAM and hard drives. However, the price of RAM keeps dropping, so just ditch the hard drive. That's my idea, anyway.

    What have you guys heard about quantum computing? I hear they are really making leaps in that area. There isn't a computer that can crack 128-bit encryption within reasonable time (less than a year). Quantum computing is supposed to melt all that away. Performance scales geometrically, not arithmetically.
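    An order-of-magnitude check on that encryption claim (the guess rate is an assumption, and this ignores any algorithmic shortcuts):

    ```python
    # Expected brute-force time for a 128-bit key: on average you search
    # half the keyspace. A billion guesses per second is a generous
    # assumption for 2004-era hardware.
    SECONDS_PER_YEAR = 3.15e7
    expected_guesses = 2 ** 127
    rate = 1e9  # guesses per second (assumption)
    print(expected_guesses / rate / SECONDS_PER_YEAR)  # ~5.4e21 years
    ```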
  • AsterOids, Join Date: 2003-12-18, Member: 24536 (Members)
    Back in 2001 I was taking a course to become a computer tech guy (networks and tech support, from Cisco), and I received this wonderful documentation on where computers were heading ten years from now.

    They are going to make biological processors, a living computer, yes. I think the article was from the magazine Science.

    So we'll probably be catching real viruses soon, and maybe you'll need a PhD in biology as a prerequisite for computer engineering? :p

    My teacher also told me back then that it was only a matter of time before HDs are replaced by big chunks of RAM.

    I am not that literate in comps, I just know basic stuff, so I'll stop here. Just wanted to bring in the biology aspect of where we are heading :)
  • Geminosity (:3), Join Date: 2003-09-08, Member: 20667 (Members)
    Can't remember who told me, or if it's even true, but supposedly Intel has been experimenting with biological processors for a while now =o

    My favourite theory on quantum computing is that the minute we make one, it'll tear a hole in space, creating a black hole that wipes out the Earth and half the solar system, lol. Reminds me of the theories people worried over that said atomic bombs would ignite the atmosphere, turning our world into a giant ball of fire XD
  • Wheeee, Join Date: 2003-02-18, Member: 13713 (Members, Reinforced - Shadow)
    edited January 2004
    QUOTE (Hawkeye @ Jan 17 2004, 11:14 AM): Right now, the main concern is getting as much RAM and the fastest CPU as possible. Most people don't seem to realize that the big bottleneck of computer speed is the memory. [...] Eliminate the big bottleneck of memory entirely. [...] What do you guys think? If you think there is a better direction, please post it!
    The reason we don't use SRAM out the wazoo is that there is not much room on a standard chip die to fit significantly more memory. Most of the space is spent stuffing other things like MMX/SSE/SSE2 onto the standard x86/x87 ISA. Thus, any extra SRAM has to be off-die (like the L3 caches on a lot of server-grade chips, e.g. Itanium), and the trace length to any significant amount of SRAM becomes the determining factor... SRAM isn't *that* cheap. 128 megs of it will still run you about 300 bucks, IIRC. Then you have to change your ISA implementation to accommodate a 3rd tier of caching, which really isn't all that much faster than RAM. Basically, you have to do a hack job of it, and it's ugly, and there's not much more advantage to it than adding more (cheap) DRAM.
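    A rough sketch of that last point, with assumed latencies: if the extra SRAM tier is only a little faster than DRAM, the average access time barely moves, or can even get worse.

    ```python
    # AMAT with and without an off-die L3 tier. Latencies and hit rates
    # are assumptions chosen to illustrate the argument, not real figures.
    L2_HIT, L3_HIT, DRAM = 10, 60, 100  # cycles

    def amat_two_tier(l2_rate):
        return L2_HIT + (1 - l2_rate) * DRAM

    def amat_three_tier(l2_rate, l3_rate):
        return L2_HIT + (1 - l2_rate) * (L3_HIT + (1 - l3_rate) * DRAM)

    print(amat_two_tier(0.95))         # 15.0 cycles with no L3
    print(amat_three_tier(0.95, 0.9))  # 13.5 cycles: a marginal win
    print(amat_three_tier(0.95, 0.5))  # 15.5 cycles: a slow L3 can even lose
    ```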

    So... basically you get an Athlon 64, get many gigs of RAM for it, then mount a ramdrive for all your critical system apps, and voila. I don't think hard drives will ever be completely phased out, since many people like having hard copies and backups.

    *edit* What I think the computer industry should head for in the next 10 years... hmm.
    1) Phasing out the x86 ISA in hardware, but retaining hardware emulation of it for compatibility.
    2) Optic circuitry instead of wire traces. That would be badass, and there would be no complaints about electromigration. (I'm not sure this would actually be viable in anything less than a mainframe.)
  • [WHO]Them (You can call me Dave), Join Date: 2002-12-11, Member: 10593 (Members, Constellation)
    Let's just remember to blow up any company that designs computer-controlled hands with opposable thumbs :p
  • boooger, Join Date: 2003-11-03, Member: 22274 (Members)
    Molecular computers (transistors made from single molecules rather than silicon poop) would run a bajillion times faster (it's not a real word, I don't care, go away). For instance, having your computer run off molecular computing rather than silicon would produce 1-10 terahertz (or more) in the same size computer chip that we use now. Quantum computing wouldn't be as good for gaming as it would be for solving huge mathematical equations (the ability to superposition a molecule in every direction at once, or something like that). And then there is bio-computing. Gives me ta creeples. Hopefully molecular computing comes out soon, and hopefully everything else follows (omg teh gr4phix cards would be HACKS!)
  • Wheeee, Join Date: 2003-02-18, Member: 13713 (Members, Reinforced - Shadow)
    edited January 2004
    Interesting interview: http://www.aceshardware.com/read.jsp?id=55000245

    QUOTE: Brian Neal [Ace's Hardware]: Yeah, that was in the chalk talk the other week [press teleconference on Throughput Computing].

    Dr. Marc Tremblay: Well the main idea is that while running large commercial programs we find the processors sometimes waiting 70% of the time, so being idle 70% of the time.

    Chris Rijk [Ace's Hardware]: Stalled on waiting for data, basically.

    Dr. Marc Tremblay: Yes. In large multiprocessor servers, waiting for data can easily take 500 cycles, especially in multi-GHz processors. So there are two strategies to be able to tolerate this latency or be able to do useful work. One of them is switching threads, so go to another thread and try to do useful work, and when that thread stalls and waits for memory then switch to another thread and so on. So that's kind of the traditional vertical multithreading approach.
    The other approach is if you truly want to speed up that one thread and want to achieve high single thread performance, well what we do is that we actually, under the hood, launch a hardware thread that while the processor is stalled and therefore not doing any useful work, that hardware thread keeps going down the code as fast as it can and tries to prefetch along the program path. Along the predicted execution path [it] will prefetch all the interesting data and by going along the predicted path [it] will prefetch all the interesting instructions as well.

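    A toy simulation of the vertical-multithreading idea in that quote (burst and stall lengths are assumptions chosen to match the "70% idle" figure):

    ```python
    # Each thread computes for a short burst, then stalls on memory for a
    # long wait; the core switches to any ready thread instead of idling.
    COMPUTE, STALL, HORIZON = 30, 70, 100_000  # cycles

    def utilization(n_threads):
        ready_at = [0] * n_threads  # cycle at which each thread is runnable
        busy, t = 0, 0
        while t < HORIZON:
            runnable = [i for i in range(n_threads) if ready_at[i] <= t]
            if runnable:
                i = runnable[0]
                busy += COMPUTE            # run one compute burst
                t += COMPUTE
                ready_at[i] = t + STALL    # then this thread stalls on memory
            else:
                t = min(ready_at)          # the core idles until a thread wakes
        return busy / t

    print(f"1 thread:  {utilization(1):.0%}")  # ~30% busy (70% idle)
    print(f"4 threads: {utilization(4):.0%}")  # ~100% busy
    ```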

    As for molecular computing...I don't think we're anywhere near surmounting the challenges facing us if we decide to try it. And even if we do succeed within the next 10 years, it'll be another decade or so at least before it'll be of any use to us. Keep in mind that traditional, established tech will continue advancing while research is being put into this.
  • Hawkeye, Join Date: 2002-10-31, Member: 1855 (Members)
    Here's the big kicker with biological computers: they 'learn.' Yes, you heard right. If they ever got a biological computer up and working, you'd have to train it like you would a child. Even then, it wouldn't be right 100% of the time.

    Artificial intelligence is a funny thing. The military had been trying for the longest time to make a program that could identify tanks. There was this missile, and they wanted it to drop small tank bombs along the way to the main target. The trick was identifying the tanks. They had the camera; all they had to do was make a program that could find the tanks. A good officer can pick out a tank 70% of the time. The program didn't have any trouble identifying tanks out in the open with no cover. However, the enemy isn't stupid. They put tanks in cities, and many of them have cover to avoid getting hit with bombs.

    In those cases, the program would fail miserably. While an officer could find them 70% of the time, this program found them successfully maybe 10% of the time. In the early 1990s, they did some research in AI. They took this program and generated millions of variations of it (done completely randomly). They showed pictures of tanks to the programs. The ones that were most successful were again varied millions of times. This was done continuously, to the point where a program was created that could spot covered tanks in cities, and 70% of the time (same as a human being).

    Gotta make you wonder, eh? The real tripper is that they don't know what the code looks like.
  • Wheeee, Join Date: 2003-02-18, Member: 13713 (Members, Reinforced - Shadow)
    I'm sorry to doubt your research, but could you provide a link? Randomly generating code doesn't seem like it would be particularly effective. For example, even if you had a program capable of randomly generating code blocks, you'd have to make sure it compiled correctly, did what you wanted it to do...

    Perhaps you're talking about neural networks and training them to identify visual images? One of my friends programmed a basic image-recognition neural network for the state science fair (only got 2nd place, though... should have gotten first, IMHO). Very smart guy; I looked at a little of the code and it was pretty awesome.
  • [WHO]Them (You can call me Dave), Join Date: 2002-12-11, Member: 10593 (Members, Constellation)
    edited January 2004
    What he's referring to is a "genetic algorithm". And no, it doesn't generate code. It just has lots and lots and lots of variables, with random weights assigned to the importance of each. The ones that do "the best" in a training run keep their weights, and the next set of tests is performed on copies with each weight shifted randomly a bit from the successful ones of the previous test.

    EDIT: In some cases they "generate code" (read: change behavior) only in that they perform prewritten functions in different orders, or not at all.
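    A minimal sketch of what that looks like in practice, assuming a made-up fitness function (matching a hidden target vector) in place of a real tank-recognition score:

    ```python
    import random

    # Genetic algorithm over weight vectors: keep the best performers,
    # refill the population with randomly shifted copies of survivors.
    random.seed(42)
    TARGET = [0.2, 0.7, 0.1, 0.9]  # stand-in for "recognizes tanks well"

    def fitness(weights):
        return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

    def mutate(weights, scale=0.1):
        return [w + random.gauss(0, scale) for w in weights]

    population = [[random.random() for _ in TARGET] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]  # the ones that did "the best"
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]
    print([round(w, 2) for w in max(population, key=fitness)])  # near TARGET
    ```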
  • Soylent_green, Join Date: 2002-12-20, Member: 11220 (Members, Reinforced - Shadow)
    edited January 2004
    Having a huge block of SDRAM instead of a HDD would consume enormous amounts of power, too; it must continually draw power to keep its current memory state, and these amounts are not insignificant, even for a GB or two.


    ...If I'm allowed to speculate wildly...
    I think we will see some form of solid-state memory that has no need to be continually refreshed replace hard drives and SDRAM: MRAM, and/or a cheapo variety of flash memory for hard drives.

    MRAM has the potential to be as fast as SRAM, non-volatile, and as dense as SDRAM. There will of course be a big latency penalty for having the RAM located far from the processor and for having RAM that is not soldered to the motherboard. I therefore think we will see processors with a big pile of MRAM onboard, as well as even more MRAM soldered directly to the motherboard with BGA packaging or better (...if IBM/Infineon manage to make MRAM work properly).

    I also think we will see graphics cards develop in the direction of general-purpose processors, now that PCI Express will allow better communication back from the graphics card and graphics cards are getting more programmability (i.e., things like pixel shaders of increasing length and precision, and memory reads that fetch the actual value stored at an address without performing bilinear filtering or other graphics-related operations). Graphics cards are incredibly quick when it comes to processing streams of data without conditional branching or other problems.

    If physics engines for games are reducible to a form where no conditional branching is used, then I can easily see graphics cards taking over the role of doing physical simulations for games as well, as a DirectPhysics component of DirectX 11 or something. :D
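    A small sketch of the branch-free style being described, using NumPy masks in place of per-particle if-statements (the constants are arbitrary; a GPU stream processor would run the same kind of kernel over many particles at once):

    ```python
    import numpy as np

    # Particles falling under gravity and bouncing off a floor, with no
    # conditional branching: np.where applies the bounce uniformly.
    N, DT, FLOOR = 8, 0.016, 0.0
    pos = np.random.uniform(0.0, 1.0, N)   # heights above the floor
    vel = np.random.uniform(-5.0, 5.0, N)  # vertical velocities

    for _ in range(100):
        pos += vel * DT
        vel -= 9.81 * DT                        # gravity
        below = pos < FLOOR                     # boolean mask, no branch
        pos = np.where(below, FLOOR, pos)       # clamp to the floor
        vel = np.where(below, -0.8 * vel, vel)  # damped bounce
    print(pos.round(3))
    ```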

    And I also think sound cards will eventually deal massively with geometry and material information when generating sound, so that if you make an empty concrete room in a game it will actually have reverb automatically, as long as material and geometry are given to the sound card, and the reverb would disappear if you pushed a couch and some furniture into the room.
  • MonsieurEvil, Join Date: 2002-01-22, Member: 4 (Members, Retired Developer, NS1 Playtester, Contributor)
    Short-term: widespread, mainstream conversion to 64-bit processing.

    Long-term: true voice-recognition and semi-comprehension
  • Insignus, Join Date: 2003-03-22, Member: 14782 (Members)
    I love topics like these... they are educational. (I.e., I'm subscribed; now let's keep this thread going.)

    What I think we need is bio-computers or artificial intelligence. (An SS2 situation still scares meh!)
    I don't personally know the mechanics of how you would do it (ENLIGHTEN ME PLZ!!!), but the ability to learn would allow it to do some pretty insane stuff. Granted, you would have to "raise it" like a child, but eventually they could do complex tasks humans do today. A basic one could do something such as drive a tank or deliver a pizza.
    A.I. could help us enormously with our problems. They could provide us with leaps in technology (imagine having something able to do a prediction or experiment at a rate 100x faster than a human researcher).

    But away from the theoretical and on to hardware and software.
    We need, in my opinion, some way to make HDs faster (already mentioned; here's my take).
    What if we packed millions of tiny reader heads onto the same disc? Then you have them travel in a line, one after another. They read the requested data and the data ahead of it along a program's predicted path (i.e., what operations need what paths), and then each head zips ahead to the next assigned position and waits. If the path it's assigned to isn't used by the program, it resets and awaits another.
    I don't know, as I'm still in HS and our school won't let me work on computers (i.e., take 'em apart, look at them, then fix them).
    I do know my basics, though, from fixing my own computer; I'm just not to the system-architecture point yet.
  • xect, Join Date: 2002-11-24, Member: 9807 (Members)
    QUOTE (Hawkeye @ Jan 17 2004, 05:14 PM): To put this into perspective... [...] Why not start making SRAM replace the hard drive? [...] Eliminate the big bottleneck of memory entirely. Good GOD would this speed things up.
    The point of the hard disk should be, and will be I think, large-quantity storage: 4 GB of RAM for storing anything that needs processing, and a terabyte of hard disk for storing data.

    Face it, by the time 4 GB RAM systems become mainstream, you will be well toward the terabyte. RAM is faster, but it will always lag behind when it comes to quantity. Having both will be the best solution, and is the best solution already. A program that is forced to resort to the hard drive for short-term storage is slow, very slow.
  • Wheeee, Join Date: 2003-02-18, Member: 13713 (Members, Reinforced - Shadow)
    MonsE: I don't think that mainstream conversion to 64-bit processing is needed, or even desirable. Although it would be nice for the chip companies to be able to develop only one architecture rather than two, I just don't see the need for 64 bits. No home computer uses anything close to 4 gigs of RAM right now (well, maybe a few specialized programs that shouldn't even be run on home computers anyway), and servers have been running 64-bit for a while now. So I think widespread change from 32- to 64-bit processing among the masses is a few years away at least. I think we're going to try to squeeze every drop of performance (that can be profitably attained) from current 32-bit ISAs before we move on.
  • Hawkeye, Join Date: 2002-10-31, Member: 1855 (Members)
    Well, if we could hold terabytes of information on a tape, would we use that? Yikes!

    The processor tries all it can to avoid going to the hard disk. When a program absolutely has to go to the hard drive, it grants the memory access, and the operating system runs other programs in the meanwhile, until the memory is loaded.
  • MonsieurEvil, Join Date: 2002-01-22, Member: 4 (Members, Retired Developer, NS1 Playtester, Contributor)
    QUOTE (Wheeee @ Jan 18 2004, 09:25 PM): MonsE: I don't think that mainstream conversion to 64-bit processing is needed, or even desirable. [...] I think we're going to try to squeeze every drop of performance (that can be profitably attained) from current 32-bit ISAs before we move on.
    The RAM addressing is not why I think it's necessary; the more efficient instruction set (better stability, better floating-point support, scalability) will allow things like the long-term direction I mentioned. Not that the RAM addressing is a small issue: you sound like a certain someone who once said 640K is enough for anyone...
  • SkulkBait, Join Date: 2003-02-11, Member: 13423 (Members)
    QUOTE (MonsieurEvil @ Jan 21 2004, 12:07 AM): you sound like a certain someone who once said 640K is enough for anyone...
    It probably would have been, too, but then the gamers got ahold of it. All of a sudden text adventures weren't good enough anymore because they required you to imagine... it kinda went downhill from there.

    Anyways, summary of my thoughts, MonsE style:

    What I would like:
    ---Long term: A desktop architecture based around a processor that is optimized for object-oriented code, with a switched backbone instead of a bus, ditching the BIOS in favor of open firmware, and that is scalable "to the moon", as my networking prof would say. Also, an add-on hardware interface type with cards that contain their own drivers for said firmware, which can be installed and then used by the OS (via the firmware) to drive hardware without loading specialized (admittedly better), OS-dependent drivers... which can of course be installed later. And of course, object-based OSes.

    --Short term: Just one OS, preferably open source, that I can really like. Just one.

    What will probably happen:
    --Long term: Gaming will push hardware to such extremes that things like the Matrix won't be such sci-fi anymore. People (some people) will live out all of their non-working lives in giant MMORPGs. The cheaters will still be there, only now they can probably kill you for real. Computers will not get much more secure, or stable, than they have ever been. CISC processors will still dominate, and people will still believe that higher clock rate == better.

    --Short term: DRM will be welcomed by the ignorant masses. Spammers will start getting murdered. HL2 will become the next DNF. Non-MS OSes will finally gain a decent (say, 20%-ish) share of the home market. And there won't be a single OS I can really like. Not even one.
  • Insignus, Join Date: 2003-03-22, Member: 14782 (Members)
    Dude, if only we lived in virtual reality, I would pwn you all sooo bad.

    BTW, concerning memory, why not just meld processor and memory into a single integrated system? Just a thought.
    ------------------------------------------------------------
    While we're at it, we can break rectangular objects that run on top of the shell.
    -- An angry linux midget
  • SkulkBait, Join Date: 2003-02-11, Member: 13423 (Members)
    QUOTE (Insignus @ Jan 21 2004, 12:54 AM): BTW, concerning memory, why not just meld processor and memory into a single integrated system? Just a thought.
    Processors already have their own memory, small amounts of it anyway. The problem is basically an engineering one: what do you do when some of the RAM goes bad? You can't replace it, so you buy a new processor? Ouch...
  • MonsieurEvil, Join Date: 2002-01-22, Member: 4 (Members, Retired Developer, NS1 Playtester, Contributor)
    QUOTE (SkulkBait): What I would like: ---Long term: A desktop architecture based around a processor that is optimized for object-oriented code, with a switched backbone instead of a bus, ditching the BIOS in favor of open firmware... [...]

    Good stuff here. Have you seen this? http://www.neoseeker.com/news/story/2339/
  • zebFish, Join Date: 2003-08-15, Member: 19760 (Members)
    Tsk.

    If a Matrix-esque system world came into being, imagine the fun hackers could have...
  • Wheeee, Join Date: 2003-02-18, Member: 13713 (Members, Reinforced - Shadow)
    QUOTE (MonsieurEvil @ Jan 21 2004, 12:07 AM): The RAM addressing is not why I think it's necessary; the more efficient instruction set (better stability, better floating-point support, scalability) will allow things like the long-term direction I mentioned. Not that the RAM addressing is a small issue: you sound like a certain someone who once said 640K is enough for anyone...
    Er... I would have been someone who said "640K should be enough for anyone... for the time being" (which is what I was trying to say in my post).
    Obviously we're eventually going to surpass the 4-gig addressing limit imposed by a 32-bit ISA; I'm just saying that it probably won't be for a few years.
  • MonsieurEvil, Join Date: 2002-01-22, Member: 4 (Members, Retired Developer, NS1 Playtester, Contributor)
    edited January 2004
    Not to mention that the 4 GB addressing limit was surpassed back in NT 4.0 Enterprise Edition in the mid-nineties (it allows 8 GB, as does Windows 2000 Advanced Server; Windows Server 2003 Enterprise does up to 32 GB, and Datacenter does 64 GB, all on 32-bit architecture).

    Here, go buy one yourself: http://h18004.www1.hp.com/products/servers/proliantdl760/index.html :)
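    For the curious, the mechanism behind those numbers on 32-bit x86 is Intel's Physical Address Extension (PAE), which widens physical addresses to 36 bits while each process still sees a 32-bit virtual space. The arithmetic:

    ```python
    # Addressable physical memory for a given address width.
    def addressable_gib(address_bits):
        return 2 ** address_bits / 2 ** 30

    print(addressable_gib(32))  # 4.0 GiB: plain 32-bit addressing
    print(addressable_gib(36))  # 64.0 GiB: PAE, matching the Datacenter ceiling
    ```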
  • Wheeee, Join Date: 2003-02-18, Member: 13713 (Members, Reinforced - Shadow)
    edited January 2004
    I thought we were talking about home PCs.
    Server farms have been running 64-bit for a while now.
    *edit* BTW, do Windows server-class OSes support the listed amount of RAM per processor, or in total?
  • SkulkBait, Join Date: 2003-02-11, Member: 13423 (Members)
    QUOTE (MonsieurEvil @ Jan 21 2004, 10:42 AM): Good stuff here. Have you seen this? http://www.neoseeker.com/news/story/2339/
    No, but now that I've read it, I don't really like where they're going with it... DRM? Using special partitions on the disk (when a small flash drive would easily be enough for what I envision)? Unnecessary graphics? No thanks. I was envisioning something more like OpenFirmware (http://www.openfirmware.com/).
  • Hawkeye, Join Date: 2002-10-31, Member: 1855 (Members)
    Has anyone actually explored the possibility of building a computer from scratch using more than a binary counting system?

    When the early computers were made, they could only differentiate between two states: on or off. That was the basis of binary. So what if you redesigned the computer with more than two states in mind? It would do wonders for memory access, considering how much more information can be passed over wires when more than two states are allowed. We can differentiate between several states now. Why don't we try it? (See the sketch after the lists below.)

    Immediate advantages that I can think of:
    - bus bottleneck dramatically reduced
    - much better page-swapping efficiency
    - a generally better way to transmit data
    - much greater instruction-set load
    - huge computations made every clock cycle with the ALU

    Disadvantages:
    - much worse hardware complexity
    Anybody have anything to add to either list?
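    Here's the promised sketch of the information gain (plain arithmetic; the bases are just examples):

    ```python
    import math

    # Bits carried per symbol grow with the log of the number of states,
    # so a 4-state wire moves twice the data of a binary one per step.
    for states in (2, 4, 16):
        print(states, math.log2(states))  # 1.0, 2.0, 4.0 bits per symbol

    # Digits needed to represent a 32-bit range in each base:
    N = 2 ** 32
    for base in (2, 4, 16):
        print(base, math.ceil(math.log(N, base)))  # 32, 16, 8 digits
    ```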
  • MonsieurEvil, Join Date: 2002-01-22, Member: 4 (Members, Retired Developer, NS1 Playtester, Contributor)
    edited January 2004
    QUOTE (Wheeee @ Jan 21 2004, 04:22 PM): I thought we were talking about home PCs. Server farms have been running 64-bit for a while now.
    I saw the title of 'computers' and figured everything was fair game. Hence why I originally said 'widespread, mainstream' 64-bit computing. Even in the corporate server world, 64-bit Intel/AMD computing is extremely rare. DEC is dead, my friend. :p
    QUOTE: *edit* BTW, do Windows server-class OSes support the listed amount of RAM per processor, or in total?
    Total.

    QUOTE: I was envisioning something more like OpenFirmware.
    From that link: "The IEEE-1275 Open Firmware standard was not reaffirmed by the OFWG and has been officially withdrawn by IEEE. Unfortunately, this means it is unavailable from the IEEE."
    You should really find a new firmware cause. Without IEEE backing, it's effectively dead on the table, even by open-source standards :p. I keeeed, I keeeed!!!