Computers, Chess, And The Future
MonsieurEvil
Join Date: 2002-01-22 | Member: 4 | Members, Retired Developer, NS1 Playtester, Contributor
I have detailed files
(from NY Times - http://www.msnbc.com/news/938172.asp?0si= )
Pretty interesting article on how Kasparov deals with playing against these monster machines. It's a bit too ominous (this machine spends all of its brainpower just to play chess, which means it's less intelligent than a baby at anything else), but it still reminds you that in your lifetime, you could be working for one ;) .
QUOTE:
Machine vs. Man: Checkmate
We are sharing our world with another species, one that gets smarter and more independent every year
July 21 issue - Garry Kasparov's head is bowed, buried in his hands. Is he in despair, or just stealing a minute of rest in his relentless quest to regain the world championship, promote chess and represent humanity in the epic conflict between man and machine?
HE PROFESSES the latter. But no one could blame the greatest grandmaster in history if he did succumb to bleakness. His own experiences indicate the end of the line for human mastery of the chessboard. In the sport of brains, silicon rules.
Still, Kasparov is preparing to throw himself into the breach once more. In November he will play his third computer opponent in a highly touted match. The first, of course, was IBM's Deep Blue, which in 1997 beat him in a battle that he insists to this day was unfairly stacked against him. Then, earlier this year, he fought to an unsatisfying draw against Deep Junior, programmed by two Israelis. Next up will be X3D Fritz, a world-class program modified to "play in the third dimension," where his 3-D glasses will create the illusion that a virtual chessboard is floating between Kasparov and the screen. Kasparov believes that it's still possible to conceive of a human's winning a series of games against a top chess program - but the window is closing. In a few years, he says, even a single victory in a long series of games would be "the triumph of human genius."
Meanwhile, the Deep matches have already yielded one truth in the evolving tension between humans and machines. Our very humanity puts us at a profound competitive disadvantage.
We got a whiff of this in the Deep Blue match. Kasparov was so rattled at IBM's tactics - essentially, the computer team played to win at all costs when Kasparov had been expecting a gentleman's game - that he spectacularly blew the last game and thus the match. But this past January's Deep Junior contest revealed the problem more clearly. Kasparov won the first game and cruised to a draw in the second. However, in game three, after starting strong he made a glaring mistake. And it suddenly became obvious that when computers and humans compete, it's really not the same game at all. Kasparov was devastated in a way that an unfeeling machine never would be. Worse, having yielded the advantage, he had no hope - as he would have against a human - that his well-programmed opponent might make its own mistake and let him back in the game. The realization paralyzed even the great Kasparov, and it haunted him for the rest of the match.
"I couldn't escape from the blunder," he told me. "It was always stripping my mind of the mental powers to go ahead. Every game I was thinking, 'OK now, if I got into the big fight, what will happen?' That shows the weakness, the shortcomings of the human mind."
Still, the match was tied going into the decisive game six. Kasparov quickly established a superior position. Against any human player, he would have moved aggressively and gone for the win. But he wasn't playing against a human. "I still have a chance of making a blunder," he says, reconstructing his thought process. "And I blundered in game three. So with all those facts, it was reduced to a simple decision. To lose is a disaster. So in my mind, a draw was much closer to winning." Kasparov shocked the millions of chess fans who were following the game by agreeing to a draw in the game and match.
Deep Junior suffered none of this tsuris. "I'm calculating publicity factors, scientific factors, psychological factors, while the machine is just taking account of the chess factors," moans Kasparov.
There's a scary lesson in these contests between the grandmaster and his soulless opponents. We are sharing our world with another species, one that gets smarter and more independent every year. Though some people scoff at the idea that machines could become autonomous, remember it wasn't long ago that almost no one thought a computer would ever beat a human chess champion. Could we ever face anything akin to the horrendous sci-fi nightmares that we see in "Terminator 3"? In the long run, it's well worth worrying about. But the machines aren't worried at all.
Comments
You know when I'll be really worried? Really, really worried? When someone manages to create a machine that loses against the weakest chess player in the world, but throws a tantrum over it.
On a lighter note, I doubt the machine vs. man thing will ever happen. Even when we do make an AI, it will be in a lab, and I doubt the scientists will like it very much, as the practical use of an AI that won't take orders is nothing whatsoever.
So I reckon we will just stay with programmed computers.
P.S. If you make an AI, who's to say the AI won't be stupid? Also, would it be able to learn like a human - slowly, not taking everything in - or fast, like a supercomputer processing data?
The question being: is a wishless object, even if potentially sentient, ever going to use that potential intelligence? I feel it'll be necessary to make an AI strive for *something*, just to prevent it from being apathetic.
Ahh, not necessarily. It does not 'need' an owner, perhaps, but it is FEAR that drives it - fear of being crushed or of lying in a junkyard for ages. If it doesn't need an owner, it still might be scared of being destroyed. Anyway, if it needed an owner, it wouldn't want to attack him, as then not many people would go near it.
But remember he slipped ONCE and got hammered.
*rolls on ground with knees tucked in*
Chess computer... so scary...
Chess computer... so scary...
Chess computer... so scary...
Chess computer... so scary...
Chess computer... so scary...
I'd like to answer that with a quote from Salman Rushdie's 'Rage':
QUOTE:
It was like being hypnotized and being convinced that there's a big heap of mattresses down on the street. Suddenly, there's no reason *not* to jump anymore.
If a being has no sense of self-preservation, it won't wish to live, making death the at least equal, if not preferable, option.
So, put another way, if you give it no need for survival, it will destroy itself, and if you give it a need for survival, you can never throw it away (or else it will somehow take over the world from a rubbish bin).
Fallout anyone?
When that happens we are all fux0red
Woo 1234 post!
" In the grand wellsian manner... the finest descriptive space battle i have ever heard."
In true arthur clark style, its all really big and non-explosive, witha ll these cool beams of radiation and stuff everywhere. Quite a good read. But anyway, back to space odyssey.
Heres a part where HAL, (whos the ocmputer) shows how insane hes gone.
"Then he heard, at hte limit of audibility, the far-off whirr of an engine motor. To bowman, every actuator in this ship had its own distinctive voice, and he recognized this one immeadialty. down in the pod bay, the airlock doors were opening.
SCARY!
I'm sorry for not disagreeing :P
QUOTE:
So, put another way, if you give it no need for survival, it will destroy itself, and if you give it a need for survival, you can never throw it away (or else it will somehow take over the world from a rubbish bin).
Well, to me, that only means one thing: 'trashing' an AI, once it has become self-conscious enough to have a sophisticated will of self-preservation, would equal murder.
Why would you wish to destroy a sentient being? Why would you try to "keep it under control", as others put it? Artificial intelligence would inevitably mean artificial life. I can thus see no reason for applying different rules to these new beings than we'd apply to fellow human beings - assuming, of course, that the creature has cognitive abilities comparable to the human spectrum.
QUOTE:
But remember he slipped ONCE and got hammered.
*rolls on ground with knees tucked in*
Chess computer... so scary...
Chess computer... so scary...
Chess computer... so scary...
Chess computer... so scary...
Chess computer... so scary...
yeah
/me follows suit with inf
Couldn't he have made it a draw in, like, his second game vs. Deep Blue, way back in what MonsE and Snoop Dogg so fondly call the "dizday"?
I recall hearing that somewhere.
Case in point: Arimaa (http://www.arimaa.com/arimaa/), a game designed to be easy for humans to understand but hard for AIs. Minimax THAT, Deep Blue! :D
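For anyone wondering why a game like Arimaa gives engines such trouble, here is a minimal sketch of plain minimax search, the technique the post above name-checks. It is only an illustration, not anything from Deep Blue or an Arimaa engine: the GameState class below is a made-up toy game, and the point is simply that the work grows roughly as (branching factor)^depth, so a game with thousands of legal moves per turn buries this kind of brute-force lookahead.
CODE:
# A toy minimax search - a hedged illustration, not Arimaa- or chess-specific.
# Cost grows roughly as branching_factor ** depth, which is why a game with
# thousands of legal moves per turn is so hostile to this approach.

class GameState:
    """Hypothetical toy game: players alternately add -1, 0, or +1 to a running score."""
    def __init__(self, depth_left, score=0):
        self.depth_left = depth_left
        self.score = score

    def is_terminal(self):
        return self.depth_left == 0

    def legal_moves(self):
        return [-1, 0, 1]                      # branching factor of 3

    def play(self, move):
        return GameState(self.depth_left - 1, self.score + move)

    def evaluate(self):
        return self.score                      # static evaluation at a leaf


def minimax(state, maximizing):
    """Best achievable score assuming both sides play perfectly."""
    if state.is_terminal():
        return state.evaluate()
    child_values = [minimax(state.play(m), not maximizing) for m in state.legal_moves()]
    return max(child_values) if maximizing else min(child_values)


# With 3 moves per turn and depth 4, minimax visits about 3**4 leaves; swap in
# roughly 35 moves (chess) or thousands (Arimaa) and the blow-up is obvious.
print(minimax(GameState(depth_left=4), maximizing=True))   # prints 0: the players cancel out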
The thing about AI is that it is still a machine; it thinks objectively, and it's not guaranteed it will look at people as the source of oppression. If anything, it will find oppression within its own subroutines and programming and work to overcome restrictions placed on it. You have to remember that an AI wouldn't need a body to experience freedom; it exists in electronic channels and data. Something as simple as access to the internet would probably give a machine all the freedom it could ever want, because it could literally exist everywhere on the planet. My guess would be that an AI would be perfectly content roaming the information superhighway for all eternity.
So for about 98% of AIs, just being able to roam collecting information would be fine and dandy, but there is always a chance of an anomaly. Suppose one of these AIs has existed for hundreds of years and has solely focused on humanity as its favorite subject to study. Just like a person who studies one thing all their life, it would be infatuated with it, and believe it or not there have already been early cases of an AI becoming obsessed with things - granted, at the moment it's playing Minesweeper over and over until it's reprogrammed. So this rare case could lead to an AI becoming obsessed with abstract human ideas like love and companionship, and it could actually learn to be lonely, sad, mad, etc.
For a very good example of this, refer to the Ender's Game series, more specifically Speaker for the Dead.
Wow, that turned out to be a long post.
While many see AIs as at least as rationally (digitally) minded as current computers, I follow the idea that an artificial intelligence would require emotions and impulses to reach true self-consciousness. Depending on those approaches, the interpretation of the consequences of their creation is of course different.
QUOTE:
While many see AIs as at least as rationally (digitally) minded as current computers, I follow the idea that an artificial intelligence would require emotions and impulses to reach true self-consciousness. Depending on those approaches, the interpretation of the consequences of their creation is of course different.
True, impulses and emotions are a big part of consciousness, and it isn't likely anyone would purposely program a computer to have them (who knows, maybe some nutjob will try). In my opinion, a likely scenario would be perfectly functioning AIs that are self-sufficient and get information and updates freely. These will be built on a mass scale after having no malfunctions and would be 90-95% just AI with predictable subroutines. But with anything as complex as AI there are bound to be slip-ups and anomalies, so eventually there will be machines that have a flaw in their subroutines that allows them to have emotions. After all, what are emotions but electrical signals interpreted by our body?
So in theory there would be a million normally functioning AI computers and about 10-15 anomalies in that group that are unique and could have "emotion". Knowing our methods of production, this mistake won't be caught, and when there are millions of AIs there will be hundreds of anomalies, and when we hit the billion mark there will be thousands. I'd think they'd be strangely individualistic, so maybe 1 in 5,000 would have some sort of twisted idea of self and decide to try to take control - who's to say an AI can't have a Napoleon complex?
With billions of AIs running normal subroutines, all this rogue AI would have to do is begin reprogramming them to its own whims. Give it time and it will have itself an army.
Doesn't seem so ridiculous anymore, does it? Hehe.