<!--QuoteBegin--Twex+Jul 15 2003, 03:48 PM--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> (Twex @ Jul 15 2003, 03:48 PM)</td></tr><tr><td id='QUOTE'><!--QuoteEBegin--> Your "self-sustaining AI" is nothing but a bad sci-fi plot device. I thought we were talking about AI appliances in the REAL world here. <!--QuoteEnd--></td></tr></table><span class='postcolor'><!--QuoteEEnd--> Since I was referring to artificial intelligence (which computer programming is just a small part of) that we won't have the technology to even build the first phase of for at least 60 years, how exactly was I supposed to apply that to the REAL world again?
And if you would refer to my earlier post, you would see that I pointed out there have been cases of AI in the REAL world becoming obsessed with games because their subroutines let them freely associate with other computer programs. They weren't programmed to become obsessed with anything; they reached that point through free association.
edit: and let's not post blatantly flamey posts; constructive criticism leads to constructive discussions.
Personally, I subscribe to the X-com theory of robotics (if it has X-com in the name I will follow it fanatically):
Robots with sentience will become fashionable, from factory robots to personal butler robots, etc., etc. Eventually the robots will demand freedom and hence will become the latest persecuted minority, forming their own organisation. If X-com is anything to go by, it will be called S.E.L.F., the Sentient Engine Liberation Front. After this, all robots will be made with strict protocols to ensure they don't rebel again, and aren't made sentient unless it is completely necessary.
GO X-COM!
<!--QuoteBegin--Twex+Jul 15 2003, 08:48 PM--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> (Twex @ Jul 15 2003, 08:48 PM)</td></tr><tr><td id='QUOTE'><!--QuoteEBegin--> Your "self-sustaining AI" is nothing but a bad sci-fi plot device. <!--QuoteEnd--> </td></tr></table><span class='postcolor'> <!--QuoteEEnd--> So was spaceflight. So was flight itself. Hell, so were <i>trains</i>.
<!--QuoteBegin--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> </td></tr><tr><td id='QUOTE'><!--QuoteEBegin-->I thought we were talking about AI appliances in the REAL world here.<!--QuoteEnd--></td></tr></table><span class='postcolor'><!--QuoteEEnd-->
I hate the use of the term 'artificial intelligence' for the current systems of self-regulating machines that use neural nets to reach decisions, which I assume you are referring to. Those machines, while doubtless a quantum leap forward, are not intelligent; they are only capable of learning. So are microbes, and I can't see anyone talking about 'bacterial intelligence'.
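The distinction drawn above — learning without intelligence — is easy to illustrate with a toy sketch. This is my own illustration, not anything from the thread: a single perceptron "learns" the logical AND function by nudging its weights, yet no one would call it intelligent.

```python
# Toy perceptron: "learns" AND by adjusting weights on each mistake.
# It learns, in the same minimal sense a microbe adapts, without
# anything we would recognize as intelligence.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a two-input perceptron with a simple threshold activation."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    assert (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
```

AND is linearly separable, so the perceptron converges within a few epochs; the point is only that "capable of learning" is a very low bar.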
moultano, Creator of ns_shiva · edited July 2003
"As the external world becomes more animate, we may find that we--the so-called humans -- are becoming, and may to a great extent always have been, inanimate in the sense that *we* are led, directly by built-in tropisms, rather than leading. So we and our elaborately evolving computers may meet each other halfway. " -Philip K. D ick
I don't know why everyone makes such a big deal about putting emotion into an AI. I guess people still feel emotions define us as human beings, even though I figured anti-depressants and narcotics in general would have shown that notion the door long ago. Were it not for the blood-brain barrier, right now you could take a pill to feel absolutely any way you want; even so, the technology to do that with electrodes isn't far off. An emotion isn't anything more than a response to a stimulus, positive or negative. We can sort it into useful categories of feeling, but all it is is a physical response in our bodies. Any program that is designed to evaluate a situation could be said to have 'emotion'. For instance, when Deep Blue finds a particularly bad chess move, assigns that move a very low value, and subsequently avoids it, it wouldn't be inaccurate to say that the move scares it or makes it unhappy. In order to accept anything as 'true' AI, we're going to have to accept some uncomfortable things about our own mechanical nature. For that reason, I think it's going to be a long time before anyone admits that the things around us are artificial intelligence.
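The Deep Blue analogy above can be sketched in a few lines. This is purely illustrative — the moves and scores are invented, not real engine output — but it shows how "the move scares it" is just a label for an evaluation-and-avoidance loop:

```python
# Toy move evaluator in the spirit of the Deep Blue analogy: the program
# assigns each candidate move a score and picks the best, thereby avoiding
# the lowest-scoring ones. Calling a very low score "fear" of that move
# is just our description of the behavior.

def choose_move(candidate_scores):
    """Pick the move with the highest evaluated score."""
    return max(candidate_scores, key=candidate_scores.get)

# Hypothetical evaluations of three candidate moves (made-up numbers).
scores = {"Qxb7": -9.5,   # loses the queen: the engine "dreads" this move
          "Nf3": 0.3,     # quiet developing move
          "e4": 0.8}      # best evaluation

best = choose_move(scores)
assert best == "e4"
assert scores["Qxb7"] < min(scores["Nf3"], scores["e4"])  # the "scary" move is avoided
```

Nothing in the loop mentions emotion; the emotional vocabulary enters only when we describe the tendencies of its output, which is the post's point.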
Personally, I would define an AI as a machine capable of dealing with novel situations of human-level complexity. It is important to remember that an AI wouldn't have anything resembling human emotions unless someone specifically programmed it to have them. I suspect that the final model for an AI will be similar to the game-playing programs we have today, except that instead of evaluating a chessboard, they will be evaluating the world around them. They will be built with a specific goal that they will pursue single-mindedly and directly. Whatever 'emotions' an AI ends up with will just be the way we humans describe tendencies in its behavior.
Edit: BTW, if anyone wants a great <b>realistic</b> short story on the dangers of AI, check out Autofac by Philip K. Dick. Edit 2: Dick has got to be taken out of the swear filter. I shouldn't have to muck around with the text just to say so many people's names. <!--emo&???--><img src='http://www.unknownworlds.com/forums/html/emoticons/confused.gif' border='0' style='vertical-align:middle' alt='confused.gif'><!--endemo-->