Computers, Chess, And The Future


Comments

  • Twex Join Date: 2002-11-02 Member: 4999, Members
    Your "self-sustaining AI" is nothing but a bad sci-fi plot device. I thought we were talking about AI appliances in the REAL world here.
  • Insane Anomaly Join Date: 2002-05-13 Member: 605, Members, Super Administrators, Forum Admins, NS1 Playtester, Forum Moderators, NS2 Developer, Constellation, NS2 Playtester, Squad Five Blue, NS2 Map Tester, Subnautica Developer, Pistachionauts, Future Perfect Developer
    The concepts behind A.I. aren't *just* computer programming, Twex. There are obviously a lot of psychological factors involved.
  • dr_d Join Date: 2003-03-28 Member: 14979, Members
    edited July 2003
    QUOTE (Twex @ Jul 15 2003, 03:48 PM): Your "self-sustaining AI" is nothing but a bad sci-fi plot device. I thought we were talking about AI appliances in the REAL world here.
    Since I was referring to artificial intelligence (of which computer programming is just a small part), something we won't have the technology to even build the first phase of for at least 60 years, how exactly was I supposed to apply that to the REAL world again?

    And if you refer to my earlier post, you will see that I pointed out there have been cases of AI in the REAL world becoming obsessed with games because their subroutines let them freely associate with other computer programs. They weren't programmed to become obsessed with anything; they reached that point through free association.


    edit: and let's not post blatantly flamey posts; constructive criticism leads to constructive discussions.
  • Nil_IQ Join Date: 2003-04-15 Member: 15520, Members
    Personally, I subscribe to the X-com theory of robotics (if it has X-com in the name I will follow it fanatically):

    Robots with sentience will become fashionable, from factory robots to personal butler robots, etc., etc. Eventually the robots will demand freedom and hence become the latest persecuted minority, forming their own organisation. If X-com is anything to go by, it will be called S.E.L.F., the Sentient Engine Liberation Front. After this, all robots will be made with strict protocols to ensure they don't rebel again, and aren't made sentient unless it is completely necessary.

    GO X-COM!
  • Nemesis_Zero Old European Join Date: 2002-01-25 Member: 75, Members, Retired Developer, NS1 Playtester, Constellation
    QUOTE (Twex @ Jul 15 2003, 08:48 PM): Your "self-sustaining AI" is nothing but a bad sci-fi plot device.
    So was spaceflight. So was air travel. Hell, so were *trains*.

    QUOTE: I thought we were talking about AI appliances in the REAL world here.

    I hate applying the term 'artificial intelligence' to the current system of self-regulating machines which use neural nets to come to decisions, which I assume is what you are referring to. Those machines, while doubtless a quantum leap forward, are not intelligent; they are only capable of learning. So are microbes, and I can't see anyone talking about 'bacterial intelligence'.
  • moultano Creator of ns_shiva. Join Date: 2002-12-14 Member: 10806, Members, NS1 Playtester, Contributor, Constellation, NS2 Playtester, Squad Five Blue, Reinforced - Shadow, WC 2013 - Gold, NS2 Community Developer, Pistachionauts
    edited July 2003
    "As the external world becomes more animate, we may find that we--the so-called humans -- are becoming, and may to a great extent always have been, inanimate in the sense that *we* are led, directly by built-in tropisms, rather than leading. So we and our elaborately evolving computers may meet each other halfway. " -Philip K. D ick

    I don't know why everyone makes such a big deal about putting emotion into an AI. I guess people still feel emotions define us as human beings, even though I figured anti-depressants and narcotics in general would have shown that notion the door long ago. Were it not for the blood-brain barrier, right now you could take a pill to feel absolutely any way you want; even so, the technology to do that with electrodes isn't far off. An emotion isn't anything more than a response to a stimulus, positive or negative. We can sort it into useful categories of feeling, but all it amounts to is the way our body physically responds. Any program that is designed to evaluate a situation could be said to have 'emotion'. For instance, when Deep Blue finds a particularly bad chess move, assigns that move a very low value, and subsequently avoids it, it wouldn't be inaccurate to say that the move scares it or makes it unhappy (a toy sketch of this kind of evaluation follows after the comments). In order to accept anything as 'true' AI we're going to have to accept some uncomfortable things about our own mechanical nature. For that reason, I think it's going to be a long time before anyone admits that the things around us are artificial intelligence.

    Personally, I would define an AI as a machine capable of dealing with novel situations of human-level complexity. It is important to remember that an AI wouldn't have anything resembling human emotions unless someone specifically programmed it to have them. I suspect that the final model for an AI will be similar to the game-playing programs we have today, except that instead of evaluating a chessboard, it will be evaluating the world around it. It will be built with a specific goal that it pursues directly. Whatever 'emotions' it ends up with will just be the way we humans describe the tendencies in its behavior.

    Edit: BTW, if anyone wants a great *realistic* short story on the dangers of AI, check out "Autofac" by Philip K. Dick.
    Edit 2: Dick has got to be taken out of the swear filter. I shouldn't have to muck around with the text just to write so many people's names.
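
moultano's Deep Blue example comes down to a simple mechanism: an evaluation function assigns each candidate move a number, and the program "avoids" a frightening move only in the sense that it never picks a low-scoring one. The short Python sketch below is a toy illustration of that idea, not Deep Blue's actual search; the move names, scores, and helper functions are invented for the example, and a real engine would search deep game trees rather than read scores from a table.

    # Toy sketch: "emotion" as nothing more than an evaluation number.
    # The program avoids a bad move simply because the evaluation function
    # gives it a low score and the move chooser maximises that score.
    from typing import Dict

    def evaluate(position: str) -> float:
        """Static evaluation of a resulting position: higher is better for the program."""
        toy_scores = {
            "trade queens evenly": 0.0,
            "win a pawn": 1.0,
            "hang the queen": -9.0,  # the "scary" outcome gets a very low value
        }
        return toy_scores[position]

    def choose_move(candidate_moves: Dict[str, str]) -> str:
        """Pick the move whose resulting position evaluates highest.
        Low-valued moves are never played; that is the whole 'emotion'."""
        return max(candidate_moves, key=lambda move: evaluate(candidate_moves[move]))

    if __name__ == "__main__":
        moves = {
            "Qxd5": "trade queens evenly",
            "Bxb7": "win a pawn",
            "Qh6??": "hang the queen",
        }
        print(choose_move(moves))  # prints "Bxb7"; "Qh6??" is avoided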