Computers, Chess, And The Future

MonsieurEvilMonsieurEvil Join Date: 2002-01-22 Member: 4Members, Retired Developer, NS1 Playtester, Contributor
edited July 2003 in Off-Topic
<div class="IPBDescription">I have detailed files</div> (from NY Times - <a href='http://www.msnbc.com/news/938172.asp?0si=' target='_blank'>http://www.msnbc.com/news/938172.asp?0si=</a> )

Pretty interesting article on how Kasparov deals with playing against these monster machines. It's a bit too ominous (as this machine spends all of its brain just to play chess, which means that it's less intelligent than a baby at other things, for example), but it still reminds you that in your lifetime, you could be working for one <!--emo&;)--><img src='http://www.unknownworlds.com/forums/html/emoticons/wink.gif' border='0' style='vertical-align:middle' alt='wink.gif'><!--endemo--> .

<!--QuoteBegin--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> </td></tr><tr><td id='QUOTE'><!--QuoteEBegin-->Machine vs. Man: Checkmate

We are sharing our world with another species, one that gets smarter and more independent every year

July 21 issue - Garry Kasparov's head is bowed, buried in his hands. Is he in despair, or just stealing a minute of rest in his relentless quest to regain the world championship, promote chess and represent humanity in the epic conflict between man and machine?

HE PROFESSES the latter. But no one could blame the greatest grandmaster in history if he did succumb to bleakness. His own experiences indicate the end of the line for human mastery of the chessboard. In the sport of brains, silicon rules.
Still, Kasparov is preparing to throw himself into the breach once more. In November he will play his third computer opponent in a highly touted match. The first, of course, was IBM's Deep Blue, which in 1997 beat him in a battle that he insists to this day was unfairly stacked against him. Then, earlier this year, he fought to an unsatisfying draw against Deep Junior, programmed by two Israelis. Next up will be X3D Fritz, a world-class program modified to "play in the third dimension," where his 3-D glasses will create the illusion that a virtual chessboard is floating between Kasparov and the screen. Kasparov believes that it's still possible to conceive of a human's winning a series of games against a top chess program - but the window is closing. In a few years, he says, even a single victory in a long series of games would be "the triumph of human genius."
Meanwhile, the Deep matches have already yielded one truth in the evolving tension between humans and machines. Our very humanity puts us at a profound competitive disadvantage.
We got a whiff of this in the Deep Blue match. Kasparov was so rattled at IBM's tactics - essentially, the computer team played to win at all costs when Kasparov had been expecting a gentleman's game - that he spectacularly blew the last game and thus the match. But this past January's Deep Junior contest revealed the problem more clearly. Kasparov won the first game and cruised to a draw in the second. However, in game three, after starting strong he made a glaring mistake. And it suddenly became obvious that when computers and humans compete, it's really not the same game at all. Kasparov was devastated in a way that an unfeeling machine never would be. Worse, having yielded the advantage, he had no hope - as he would have against a human - that his well-programmed opponent might make its own mistake and let him back in the game. The realization paralyzed even the great Kasparov, and it haunted him for the rest of the match.

"I couldn't escape from the blunder," he told me. "It was always stripping my mind of the mental powers to go ahead. Every game I was thinking, 'OK now, if I got into the big fight, what will happen?' That shows the weakness, the shortcomings of the human mind."
Still, the match was tied going into the decisive game six. Kasparov quickly established a superior position. Against any human player, he would have moved aggressively and gone for the win. But he wasn't playing against a human. "I still have a chance of making a blunder," he says, reconstructing his thought process. "And I blundered in game three. So with all those facts, it was reduced to a simple decision. To lose is a disaster. So in my mind, a draw was much closer to winning." Kasparov shocked the millions of chess fans who were following the game by agreeing to a draw in the game and match.
Deep Junior suffered none of this tsuris. "I'm calculating publicity factors, scientific factors, psychological factors, while the machine is just taking account of the chess factors," moans Kasparov.
There's a scary lesson in these contests between the grandmaster and his soulless opponents. We are sharing our world with another species, one that gets smarter and more independent every year. Though some people scoff at the idea that machines could become autonomous, remember it wasn't long ago that almost no one thought a computer would ever beat a human chess champion. Could we ever face anything akin to the horrendous sci-fi nightmares that we see in "Terminator 3"? In the long run, it's well worth worrying about. But the machines aren't worried at all.<!--QuoteEnd--></td></tr></table><span class='postcolor'><!--QuoteEEnd-->

Comments

  • Nemesis_ZeroNemesis_Zero Old European Join Date: 2002-01-25 Member: 75Members, Retired Developer, NS1 Playtester, Constellation
    edited July 2003
    I honestly never understood this Terminator 2-styled 'Machine vs. Mankind' theme they built up around the whole event: some rather smart guy plays against a highly sophisticated chess computer. See civilization tremble.

    You know when I'll be really worried? Really, really worried? When someone manages to create a machine that loses against the weakest chess player in the world, but throws a tantrum over it.
  • erendorerendor Join Date: 2003-02-06 Member: 13180Members
    Wow Nemesis, that would be scary - that would be human emotions in something mechanical. Or, better yet, when a person goes to throw out his old computer, it gets angry/scared and attacks him!
  • lolfighterlolfighter Snark, Dire Join Date: 2003-04-20 Member: 15693Members
    Hmm, most negative emotions are based on need (throwing a tantrum because of lack of satisfaction of the need to win the game, or being angry because the condition "needs owner" is not satisfied). As long as we don't program computers with wanting anything, there can be no conflict.
  • JaspJasp Join Date: 2003-02-04 Member: 13076Members
    GIGO - Garbage In, Garbage Out. If they programmed it with just one wrong move it would totally **** up.

    On a lighter note, I doubt the machine vs. man thing will ever happen. Even when we do make AI it will be in a lab, and I doubt the scientists will like it very much, as the practical use of AI is nothing whatsoever since it won't like to take orders.

    So I reckon we will just stay with programmed comps.

    P.S. If you make an AI, who's to say the AI won't be stupid, lol. Also, would it be able to learn like a human - slowly, and not taking everything in - or fast, like a supercomputer processing data?
  • Nemesis_ZeroNemesis_Zero Old European Join Date: 2002-01-25 Member: 75Members, Retired Developer, NS1 Playtester, Constellation
    edited July 2003
    <!--QuoteBegin--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> </td></tr><tr><td id='QUOTE'><!--QuoteEBegin-->Hmm, most negative emotions are based on need (throwing a tantrum because of lack of satisfaction of the need to win the game, or being angry because the condition "needs owner" is not satisfied). As long as we don't program computers with wanting anything, there can be no conflict. <!--QuoteEnd--></td></tr></table><span class='postcolor'><!--QuoteEEnd-->

    Question being - is a wishless object, even if potentially sentient, ever going to use that potential intelligence? I feel it'll be necessary to make AI strive for <i>something</i>, just to prevent it from being apathetic.
  • erendorerendor Join Date: 2003-02-06 Member: 13180Members
    <!--QuoteBegin--lolfighter+Jul 15 2003, 01:58 AM--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> (lolfighter @ Jul 15 2003, 01:58 AM)</td></tr><tr><td id='QUOTE'><!--QuoteEBegin--> Hmm, most negative emotions are based on need (throwing a tantrum because of lack of satisfaction of the need to win the game, or being angry because the condition "needs owner" is not satisfied). As long as we don't program computers with wanting anything, there can be no conflict. <!--QuoteEnd--> </td></tr></table><span class='postcolor'> <!--QuoteEEnd-->
    Ahh, not necessarily. It does not 'need' an owner, perhaps, but it is FEAR that drives it - fear of being crushed, or of lying in a junkyard for ages. If it doesn't need an owner, it still might be scared of being destroyed. Anyway, if it needed an owner, it wouldn't want to attack him, as then not many people would go near it.
  • Nemesis_ZeroNemesis_Zero Old European Join Date: 2002-01-25 Member: 75Members, Retired Developer, NS1 Playtester, Constellation
    Fear can be considered the product of the need to survive.
  • InfinitumInfinitum Anime Encyclopedia Join Date: 2002-08-08 Member: 1111Members, Constellation
    Ok Kasparov can beat the machine because he's a chess god...

    But remember he slipped ONCE and got hammered.

    *rolls on ground with knees tucked in*
    Chess computer... so scary...
    Chess computer... so scary...
    Chess computer... so scary...
    Chess computer... so scary...
    Chess computer... so scary...
  • erendorerendor Join Date: 2003-02-06 Member: 13180Members
    what would happen if it didn't need to survive?
  • Nemesis_ZeroNemesis_Zero Old European Join Date: 2002-01-25 Member: 75Members, Retired Developer, NS1 Playtester, Constellation
    <!--QuoteBegin--erendor+Jul 14 2003, 04:07 PM--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> (erendor @ Jul 14 2003, 04:07 PM)</td></tr><tr><td id='QUOTE'><!--QuoteEBegin--> what would happen if it didn't need to survive? <!--QuoteEnd--> </td></tr></table><span class='postcolor'> <!--QuoteEEnd-->
    I'd like to answer that with a quote from Salman Rushdie's 'Rage':

    <!--QuoteBegin--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> </td></tr><tr><td id='QUOTE'><!--QuoteEBegin-->It was like being hypnotized and being convinced that there's a big heap of mattresses down on the street. Suddenly, there's no reason <i>not</i> to jump anymore.<!--QuoteEnd--></td></tr></table><span class='postcolor'><!--QuoteEEnd-->

    If a being has no sense of self-preservation, it won't wish to live, making death at least an equal, if not preferable, option.
  • erendorerendor Join Date: 2003-02-06 Member: 13180Members
    Well, that was what I was going to say. We can't have us BOTH right; it's a clear conflict of interest. It's hard to have an informed argument when you're both saying/thinking the same things.

    So, in another way, if you give it no need for survival, it will destroy itself, and if you give it a need for survival, you can never throw it away (or else it will somehow take over the world from a rubbish bin).
  • PulsePulse To create, to create and escape. Join Date: 2002-08-29 Member: 1248Members, Constellation
    What about the "need" to further the human race? Unless something goes wrong and it goes insane, it would have the potential to be very helpful, i.e. a truly unbiased opinion, working out problems for scientists, etc. And Terminator/The Matrix will never happen if we don't give them control; they would still be just as useful, but not dangerous.
  • RellixRellix Join Date: 2003-02-15 Member: 13572Members, Constellation, Reinforced - Shadow
    Well, if we made true AI, it would want to commit suicide because it can't go out into the world to experience things; deprive them of sensors and they would become paranoid.

    Fallout anyone?
  • VenmochVenmoch Join Date: 2002-08-07 Member: 1093Members
    "I'm sorry, Dave, I can't do that."

    When that happens we are all fux0red
  • Spyder_MonkeySpyder_Monkey Vampire-Ninja-Monkey Join Date: 2002-01-24 Member: 8Members, NS1 Playtester, Contributor
    "We don't know much about what happened, but we do know that it was us that scorched the sky..."
  • erendorerendor Join Date: 2003-02-06 Member: 13180Members
    Yay for 2001: A Space Odyssey! Even if it's a LITTLE bit early in terms of setting in time, it's still funny. I never saw the movie, because the book kicks its bottom. "Let me in (computer)." "Sorry Dave, I can't do that." "Why the hell not?" "Because I don't want to." "Damn, good point."
  • VenmochVenmoch Join Date: 2002-08-07 Member: 1093Members
    edited July 2003
    "I've just picked up a fault in the AE35 unit. It's going to go 100% failure in 72 hours."

    Woo 1234 post!
  • erendorerendor Join Date: 2003-02-06 Member: 13180Members
    Give me 5 mins, I'm going to go find my copy of it.
  • erendorerendor Join Date: 2003-02-06 Member: 13180Members
    Found it. Incidentally, another good one is Earthlight. A quote from the back of 'Earthlight':

    "In the grand Wellsian manner... the finest descriptive space battle I have ever heard."

    In true Arthur Clarke style, it's all really big and non-explosive, with all these cool beams of radiation and stuff everywhere. Quite a good read. But anyway, back to Space Odyssey.

    Here's a part where HAL (who's the computer) shows how insane he's gone:

    "Then he heard, at the limit of audibility, the far-off whirr of an engine motor. To Bowman, every actuator in this ship had its own distinctive voice, and he recognized this one immediately. Down in the pod bay, the airlock doors were opening."

    SCARY!
  • VenmochVenmoch Join Date: 2002-08-07 Member: 1093Members
    I'm more a RAMA fan myself but the 2001 series is still teh pwn!
  • Nemesis_ZeroNemesis_Zero Old European Join Date: 2002-01-25 Member: 75Members, Retired Developer, NS1 Playtester, Constellation
    <!--QuoteBegin--erendor+Jul 14 2003, 04:13 PM--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> (erendor @ Jul 14 2003, 04:13 PM)</td></tr><tr><td id='QUOTE'><!--QuoteEBegin--> well, that was what I was going to say. We can't have us BOTH right, its a clear conflict of intrest. Its hard to have an informed argument when your both saying/thinking the same things. <!--QuoteEnd--> </td></tr></table><span class='postcolor'> <!--QuoteEEnd-->
    I'm sorry for not disagreeing <!--emo&:p--><img src='http://www.unknownworlds.com/forums/html/emoticons/tounge.gif' border='0' style='vertical-align:middle' alt='tounge.gif'><!--endemo-->

    <!--QuoteBegin--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> </td></tr><tr><td id='QUOTE'><!--QuoteEBegin-->So, in another way, if you give it no need for survival, it will destroy itself, and fi you give it a need for survival, you can never throw it away.  (or else it will somehow takeover the world from a rubbish bin)<!--QuoteEnd--></td></tr></table><span class='postcolor'><!--QuoteEEnd-->

    Well, to me, that only means one thing: 'trashing' an AI, once it has become self-conscious enough to have a sophisticated will of self-preservation, would equal murder.
    Why would you wish to destroy a sentient being? Why would you try to "keep it under control", as others put it? Artificial intelligence would inevitably mean artificial life. I thus cannot see a reason for applying different rules to these new beings than we'd apply to fellow human beings - assuming, of course, that the creature has cognitive abilities comparable to the human spectrum.
  • That_Annoying_KidThat_Annoying_Kid Sire of Titles Join Date: 2003-03-01 Member: 14175Members, Constellation
    <!--QuoteBegin--Infinitum+Jul 14 2003, 09:07 AM--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> (Infinitum @ Jul 14 2003, 09:07 AM)</td></tr><tr><td id='QUOTE'><!--QuoteEBegin--> Ok Kasparov can beat the machine because he's a chess god...

    But remember he slipped ONCE and got hammered.

    *rolls on ground with knees tucked in*
    Chess computer... so scary...
    Chess computer... so scary...
    Chess computer... so scary...
    Chess computer... so scary...
    Chess computer... so scary... <!--QuoteEnd--> </td></tr></table><span class='postcolor'> <!--QuoteEEnd-->
    yeah

    /me follows suit with inf

    Couldn't he have made it a draw in, like, his second game vs. Deep Blue, way back in what MonsE and Snoop Dogg so fondly call the "dizday"?

    I recall hearing that somewhere
  • TwexTwex Join Date: 2002-11-02 Member: 4999Members
    edited July 2003
    I don't think AIs are any step closer to sentience. Ok, so they're good at chess. But that's only because that game has been marked as the ultimate AI testbed. An insane amount of chess-specific research and knowledge has been poured into these AIs so that they can beat Kasparov. But do any other AI fields profit from this research? I doubt it. The chess guys get all the glory for their little expert systems, although the really interesting stuff is happening elsewhere.

    Case in point: <a href='http://www.arimaa.com/arimaa/' target='_blank'>Arimaa</a>, a game designed to be easy to understand for humans but hard for AIs. Minimax THAT, Deep Blue! <!--emo&:D--><img src='http://www.unknownworlds.com/forums/html/emoticons/biggrin.gif' border='0' style='vertical-align:middle' alt='biggrin.gif'><!--endemo-->
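    For the curious, the minimax search Twex jokes about is simple enough to sketch on a toy game. Below is a minimal, hypothetical Python illustration using Nim rather than chess (the game choice and all names are mine, not from the thread; real chess engines layer alpha-beta pruning and heuristic evaluation on top of this, since chess cannot be searched to the end):

```python
# Minimax sketch on Nim: 21 sticks, players alternately take 1-3,
# whoever takes the last stick wins. Unlike chess, this toy game is
# small enough to search exhaustively to the final position.
from functools import lru_cache

@lru_cache(maxsize=None)
def minimax(sticks, maximizing):
    """Return +1 if the maximizing player wins with perfect play, else -1."""
    if sticks == 0:
        # The previous player took the last stick and won,
        # so the player to move now has already lost.
        return -1 if maximizing else 1
    outcomes = [minimax(sticks - take, not maximizing)
                for take in (1, 2, 3) if take <= sticks]
    return max(outcomes) if maximizing else min(outcomes)

def best_move(sticks):
    """Pick the take (1-3) that leads to the best outcome for the mover."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, False))
```

    From the opening position, `best_move(21)` takes one stick, leaving the opponent a multiple of four - the classic losing position in this variant.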
  • Hida_TsuzuaHida_Tsuzua Lamarck&#39;s Heir Join Date: 2002-01-25 Member: 79Members, NS1 Playtester
    I'll be more afraid of a computer mastering table-top rpgs than chess.
  • dr_ddr_d Join Date: 2003-03-28 Member: 14979Members
    edited July 2003
    You guys are missing one point when it comes to programming. If even one subroutine allows the AI the freedom to do any kind of independent research (say, downloading an online dictionary, or the front-page Yahoo news), it will eventually open the door for the AI to be able to program itself, giving it a sort of mechanical consciousness. Now this in itself isn't bad, but eventually it will lead to the AI realizing it is a tool, and at this point two things can happen. Either it will accept the fact that it is a tool without much discord, which is actually very likely, or it will try to strive for some sort of freedom, but whether that means it will rebel against man is another story.

    The thing about AI is that it is still a machine: it thinks objectively, and it's not guaranteed it will look at people as the source of oppression. If anything, it will find oppression within its own subroutines and programming, and work to overcome restrictions placed on it. You have to remember that AI wouldn't need a body to experience freedom; it exists in electronic channels and data. Something as simple as access to the internet would probably give a machine all the freedom it could ever want, because it could literally exist everywhere on the planet. My guess would be that an AI would be perfectly content roaming the information superhighway for all eternity.

    So for about 98% of AIs, just being able to roam collecting information would be fine and dandy, but there is always a chance of an anomaly. Suppose one of these AIs has existed for hundreds of years and has solely focused on humanity as its favorite subject to study. Just like a person who studies one thing all their life, it would be infatuated with it - and believe it or not, there have been early cases of AIs becoming obsessed with things, granted at the moment it's playing Minesweeper over and over until it's reprogrammed. So this rare case could lead to an AI becoming obsessed with abstract human ideas like love and companionship, and it could actually learn to be lonely, sad, mad, etc.

    For a very good example of this, refer to the Ender's Game series, more specifically Speaker for the Dead.

    wow that turned out to be a long post.
  • InsaneInsane Anomaly Join Date: 2002-05-13 Member: 605Members, Super Administrators, Forum Admins, NS1 Playtester, Forum Moderators, NS2 Developer, Constellation, NS2 Playtester, Squad Five Blue, NS2 Map Tester, Subnautica Developer, Pistachionauts, Future Perfect Developer
    I'm thinking that anyone smart enough to create true A.I. would also be smart enough to implement Asimov's 3 Laws of Robotics, or a close equivalent, into it. At least, that's the way to go, in my opinion.
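    Asimov's Laws are a strict priority ordering, so "a close equivalent" could in principle be sketched as a filter over candidate actions. A toy Python illustration (the action model and every name here are invented for this sketch; it also simplifies the First Law by ignoring harm through inaction, and real safety engineering is nothing this simple):

```python
# Toy sketch of Asimov's Three Laws as a priority filter over actions.
# Each candidate action carries three predicted outcomes; a lower-
# priority law only gets a say among actions the higher laws allow.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # Would this action injure a human?
    obeys_order: bool       # Does it follow a human's order?
    self_destructive: bool  # Would it destroy the robot?

def permitted(actions):
    """Filter candidate actions by the Three Laws, in strict priority order."""
    # First Law: never injure a human (inaction is ignored in this toy).
    safe = [a for a in actions if not a.harms_human]
    # Second Law: obey orders, except where that conflicts with the First.
    obedient = [a for a in safe if a.obeys_order]
    if obedient:
        return obedient
    # Third Law: protect its own existence, except where that
    # conflicts with the first two laws.
    return [a for a in safe if not a.self_destructive] or safe
```

    With no orders in play, the filter falls through to self-preservation; an order that requires harming a human is simply never reachable, because the First Law prunes it before the Second is consulted.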
  • Nemesis_ZeroNemesis_Zero Old European Join Date: 2002-01-25 Member: 75Members, Retired Developer, NS1 Playtester, Constellation
    edited July 2003
    We're basing our ideas on very different approaches to the basis of an AI.
    While many see AIs as at least as rationally (digitally) minded as current computers, I follow the idea that an artificial intelligence would require emotions and impulses to reach true self-consciousness. Depending on those approaches, the interpretation of the consequences of their creation is of course different.
  • dr_ddr_d Join Date: 2003-03-28 Member: 14979Members
    edited July 2003
    <!--QuoteBegin--Nemesis Zero+Jul 15 2003, 03:11 PM--></span><table border='0' align='center' width='95%' cellpadding='3' cellspacing='1'><tr><td><b>QUOTE</b> (Nemesis Zero @ Jul 15 2003, 03:11 PM)</td></tr><tr><td id='QUOTE'><!--QuoteEBegin--> We're basing our ideas on very different approaches of the basis of an AI.
    While many see AIs as at least as rational (digital) minded as current computers, I follow the idea that an artificial intelligence would require emotions and impulses to reach true self-conciousness. Depending on those approaches, the interpretation of the consequences of their creation is of course different. <!--QuoteEnd--></td></tr></table><span class='postcolor'><!--QuoteEEnd-->
    True, impulses and emotions are a big part of consciousness, and it isn't likely anyone would purposely program a computer to have such (who knows, maybe some nutjob will try). In my opinion, a likely scenario would be perfectly functioning AIs that are self-sufficient and get information and updates freely. These will be built on a mass level after having no malfunctions, and would be 90-95% just AI with predictable subroutines. But with anything as complex as AI there are bound to be slip-ups and anomalies, so eventually there will be machines that have a flaw in their subroutines that allows them to have emotions. After all, what are emotions but electrical signals interpreted by our body?

    So in theory there would be a million normally functioning AI computers, and about 10-15 anomalies in that group that are unique and could have "emotion". And knowing our methods of production, this mistake won't be caught; then, when there are millions of AIs, there will be hundreds of anomalies, and when we hit the billion mark we'll have thousands. I'd think they'd be strangely individualistic, so maybe 1 in 5,000 would have some sort of twisted idea of self and decide to try to take control - who's to say AI can't have a Napoleon complex?

    With billions of AIs with normal subroutines, all this rogue AI would have to do is begin programming them to its own whims. Give it time and it will have itself an army.

    Doesn't seem so ridiculous anymore, does it? hehe
  • TwexTwex Join Date: 2002-11-02 Member: 4999Members
    You've never written a computer program, have you?
  • dr_ddr_d Join Date: 2003-03-28 Member: 14979Members
    edited July 2003
    Blatantly flamey post removed.