The Human Brain.
Scythe
Join Date: 2002-01-25 Member: 46NS1 Playtester, Forum Moderators, Constellation, Reinforced - Silver
in Discussions
Can it be quantified?

My girlfriend and I are engaged in a debate as to whether or not the activity of the human brain can be described by a finite set of algorithms that take sensory information and produce predictable responses.
Discuss.
--Scythe--
Comments
Let me give you an example.
A monkey given a banana each day will probably eat the banana every day.
A human given a banana each day will probably sooner or later ask for something else.
As I understand it, the more you repeat a process, the more your brain adapts to that process by creating new neural networks? So everyone's brain ends up different, and will produce a different response to the next person's.
I think we may never be able to map these variations well enough to produce a list of stimuli and responses that works in every case.
Although, having said that, my opinion is based on little more than a layman's knowledge of the subject :D
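To picture that "repetition strengthens connections" idea, here's a loose Hebbian-style sketch in Python. It's purely a toy illustration, not anyone's model from this thread: the unit count, learning rate, and update rule are all made up, and real neurons are far messier.

import numpy as np

w = np.zeros((4, 4))                        # connection strengths between 4 "units"
stimulus = np.array([1.0, 0.0, 1.0, 0.0])   # the same experience, repeated

for _ in range(10):                              # each repetition of the experience...
    w += 0.1 * np.outer(stimulus, stimulus)      # ...strengthens links between co-active units

print(w)  # entries linking the co-active units keep growing with repetition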
Melatonin: Couldn't you write a computer algorithm that develops in the same way as a human brain? And couldn't this algorithm be fed the same information as the human brain being predicted in order to develop in the same manner?
--Scythe--
EDIT: Those advances would probably have to be about self-replicating and mutating algorithms based on sensory input.
I doubt, however, that we will ever be able to program a <i>specific</i> human brain into a computer. I think we are going to find that the state information of our brains is too quantum mechanical to measure.
We're talking about building a computer program to operate in the same manner as a human brain?
I think in that case all you need is a system capable of recognising and quantifying sensory information (the technology for sight, touch, smell, and 'taste' is already coming along nicely; I'm not sure how well computers can discern and make sense of sound at the moment).
And then the tricky part: a program which can learn about its reality from these senses.
I've no idea whatsoever how to go about the second part.
Wait... I'm confused now, time to stop talking.
The human will is truly free and not a mere function of sensory input.
And calling a particular kind of AI algorithm "neural net" doesn't mean it emulates brain operation in any meaningful way. It happens to work for Backgammon, but failed at Chess and Go.
Well, it pretty much simulates the firing of neurons, so I don't really know what you are getting at. It works for backgammon because backgammon relies on pattern recognition, in which slight differences in pattern don't change the ideal action all that much. It isn't as effective at chess and go because chess and go are much more complex games positionally. A neural network with the same number of neurons as a <b>Jellyfish</b> is able to beat ranking backgammon players. Eventually chess will be conquered as well, and sometime in the future we'll have one for go. It's only a matter of time. <a href='http://neural-chess.netfirms.com/' target='_blank'>Here's</a> a link to a project that is trying to train a neural network for chess.
And what makes you think a NN will ever outperform a minimax with static evaluation at chess? When the computing power increases, so will the strength of minimax, so I can't see why the relative strengths of the algorithms should shift.
Not even the guy at the site you linked to shares your optimism. He only sees an application for move tree ordering.
QUOTE: And what makes you think a NN will ever outperform a minimax with static evaluation at chess? When the computing power increases, so will the strength of minimax, so I can't see why the relative strengths of the algorithms should shift.
They already have evolutionary programs that create images that are unmistakably art (I'm not sure if they use neural nets or not).
I'm not sure about the asymptotic behavior of neural net generation, so I'm not entirely sure how it will scale, but the minimax algorithm has exponential behavior, meaning that every time computing power <b>doubles</b> (or triples or whatever, depending on the size of the gamespace), the same implementation can look <b>1</b> move deeper into the future. What this means is that for an effective minimaxing program, the most important feature is not the depth to which it can search but rather the effectiveness of the function it uses to evaluate the resulting positions. This explains why Garry Kasparov, with only the ability to search a saturated gamespace 5 moves ahead, can compete with a machine that can search the gamespace 11 moves ahead. Kasparov has a much better evaluation function, which allows him to compete even though he has much less computing power at his disposal.
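To make that exponential blow-up concrete, here's a toy minimax sketch in Python. It's my own illustration, not from any real engine; the tree and the evaluation function are placeholders. Every extra ply multiplies the work by roughly the branching factor, and everything that isn't brute depth lives in the evaluate() call at the leaves.

def minimax(node, depth, maximizing, evaluate):
    # Leaves (or the depth cutoff) fall back on the static evaluation.
    if depth == 0 or not isinstance(node, list):
        return evaluate(node)
    scores = [minimax(child, depth - 1, not maximizing, evaluate) for child in node]
    return max(scores) if maximizing else min(scores)

# Internal nodes are lists of children; leaves are pre-baked evaluations.
tree = [[3, 5], [2, 9]]
print(minimax(tree, depth=2, maximizing=True, evaluate=lambda leaf: leaf))  # -> 3

# With branching factor b, searching one ply deeper costs roughly b times more
# work, which is why doubling the hardware buys so little extra depth.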
What neural nets are very, very good at is creating evaluation functions. What I suspect (although I haven't done the research to prove it) is that doubling the processing that goes into creating a neural net benefits it much more than the single extra move of foresight that the minimax algorithm gets from the same jump in processing power.
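And for the other half of the argument, an equally rough sketch of a learned evaluation function: a tiny feed-forward net that maps a board encoding to a single score. The 64-feature encoding and the random weights are pure placeholders; a real one (TD-Gammon style) would be trained from self-play rather than hand-set, and it could simply be passed in as the evaluate() in the sketch above.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(64, 16))   # board features -> hidden "neurons"
W2 = rng.normal(scale=0.1, size=16)         # hidden layer -> scalar score

def nn_evaluate(position_features):
    hidden = np.tanh(position_features @ W1)    # nonlinear "firing" of the hidden layer
    return float(hidden @ W2)                   # single score for the position

print(nn_evaluate(np.zeros(64)))   # 0.0 for the all-zeros placeholder encoding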
Link anyone?
Especially emotion.
This is just common sense, right?
QUOTE: Link anyone?
<a href='http://www.leadtogold.com/software/genesaver/index2.html' target='_blank'>WHUPAH!!!!!!</a>
Re: Backgammon... *shudder* I had to write a backgammon AI for an interdisciplinary NTL (Neural Theory of Language) class. I didn't even know how to play backgammon, but I figured it out well enough to write an AI that could thoroughly clean my clock. Stupid backgammon.
Finite? Definitely. Constructible by human beings? Debatable. :D
It was believed the earth was flat, because it was from our perspective. It was believed the earth was the center of the universe, because it was from our perspective (we're too important not to be that special). It was believed the sun was the center of the universe (once again, if we can't be the center of everything then we must be close). I believe it's similar with this: the assumption is that we must have a non-physical element to us that makes our way of deciding better than a computer's. No, there is no difference, other than that we haven't yet worked out how to build a computer and write a program smart enough to reach the same intelligence as us.
Another question to concern yourself with: teleportation (which they can now do with very small particles) involves destroying the original and making an exact duplicate at the other end. Do you die while someone else who thinks they are you in every way is created? Do you and your consciousness get recreated fully? Or would there just be a body with no consciousness left? That asks the same fundamental question. Are we more than just what we can physically analyze? Do we exist beyond what modern science can prove exists (even if they cannot understand it fully)?
If the human brain were simple enough for us to understand, we would be too simple to understand it.
think about it