AI, Mankind, And All That Jazz
boooger
Join Date: 2003-11-03 Member: 22274Members
in Discussions
Topic description: tacos

While the topic description might be a bit, um, deceiving, I assure you that this is a true discussion topic (I hope).
Given the exponential advancement of computers (not to mention the exponential growth of that exponential growth, new technology, etc.), it would not be out of line to say that within 10-15 years, your run-of-the-mill computer will most likely be able to perform many trillions of calculations per second. And by "many", I mean that computers may (notice the word choice) have the capability of equaling and/or surpassing the human brain in calculations per second, theoretically speaking.
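The doubling arithmetic behind that claim is easy to check. A minimal sketch (my own back-of-the-envelope numbers, not anything from the thread: I'm assuming roughly 10^9 operations per second for a current desktop and the classic 18-month doubling rule of thumb):

```python
# Back-of-the-envelope check of the "exponential advancement" claim.
# Assumptions (mine): a circa-2003 desktop manages about 1e9 ops/sec,
# and performance doubles every 18 months.
def projected_ops_per_sec(start_ops: float, years: float,
                          doubling_period_years: float = 1.5) -> float:
    """Operations per second after `years` of steady doubling."""
    return start_ops * 2 ** (years / doubling_period_years)

# 15 years / 1.5 years per doubling = 10 doublings, i.e. a 1024x gain,
# so ~1e12 ops/sec: "many trillions" is at least the right order of magnitude.
print(f"{projected_ops_per_sec(1e9, 15):.3e}")  # -> 1.024e+12
```

Whether raw calculations per second is even the right yardstick for comparing against a brain is, of course, exactly what the rest of the thread argues about.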
And, as I'm sure most can deduce from the title, I wonder how this will play into the future of humans, if indeed it does at all.
Questions to consider for discussion:
1) Do we have the capability to develop AI so that it becomes self-aware, either in the near future or at any point in time?
I personally do not doubt that computers will eventually be fast enough to match the number of calculations a human brain can do; however, the development of AI leaves many questions - how do we define "intelligence", and for that matter, will these machines ever be "intelligent" enough to become self-aware? I see no reason why a computer could not be considered intelligent by most, if not all, standards (though I think it would take a long while). But the task of self-awareness would require a great deal more. As to whether it's possible or not... well, that's where we start discussing. And if you do think AI can/will become self-aware, what would happen?
Side note - I think this was a topic way back in the day, but when I tried searching for the rather general lookup of "AI computer", the search thingy failed to respond the 3 times I tried to search it.
Comments
I do not know the exact number, but I know for a fact that our human brains can process information at alarming rates; rates which exceed those of present computers by unimaginable amounts. So nowhere in the near future will computers be able to overwhelm the processing speed of human brains.
Whether an artificially intelligent machine can become intelligent lies solely in your definition of the word "intelligent". If your definition of intelligent is "smarter than a human", then no. Computers are based on a set of axioms, and they determine all solutions from those axioms. According to Gödel's theorem (great theorem, look it up), a sufficiently powerful system cannot prove every true statement expressible within that system without going outside of it for new rules and axioms. In short, computer knowledge is limited by a fixed set of axioms, or truths, which the designers (humans) determine, and they will never uncover new concepts or truths without us.
Computers are inferior to humans.
In some respects, yes. But AIs will arise eventually. The human brain is nothing more than an organic computer; the silicon equivalents will be built.
As a side note, consider one of the fears associated with artificial intelligence: that they will take over humanity. We see sci-fi like The Matrix, where machines have spread throughout humanity and taken it over. What proof is there that this hasn't already happened?
Who's to say that machines don't already control society? Think about how many machines you interact with every day. Think about how many products we use that are produced or handled by machines. Not only have we become dependent on machines, but for some reason we are compelled to force them to evolve, developing newer and better machines all the time.
From a certain standpoint, it would seem that the machines have already conquered us.
We may continue developing new machines to adapt to our needs, which could diminish our role to that of mere observers. But this does not necessarily mean the end of jobs for humans. More jobs will surface as society gets more complex. Think of this as a symbiotic relationship: computers need us, and we need them. But in the end, we are the creators and we have the final say.
1. It will never work, because we value freedom above all else.
2. Humans are incredibly useful.
3. Humans are capable of leaps of logic that machines simply aren't capable of.
E.g. a machine can analyse a data set and apply statistical formulas etc., but only a human can form a hypothesis or conclusion based on what the data trends are. (Also: see Einstein, Stephen Hawking, Newton, Galileo, etc.)
4. Humans are a valuable source of new information. Feed a computer a data set (or a list of arguments), and it will come up with a definite answer. Feed a human a data set (or a list of arguments) and the human can come up with one answer specific to that human, or several answers that are plausible.
5. Humans can multi-task. Machines can only switch between tasks at high speed to give the illusion of multi-tasking (or have custom-designed hardware that can handle a limited number).
6. Humans are not susceptible to a major magnetic storm. The survival odds of any machine intelligence increase enormously when it is in a symbiotic relationship with mankind.
7. Any machine intelligence capable of pondering the domination of mankind would be smart enough to do a quick internet search and realise that we have conceived of the possibility of a takeover and would probably have emplaced countermeasures that it is unaware of.
8. If we don't place countermeasures against possible domination, we deserve every last thing we get.
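Point 5 above, the illusion of multitasking via fast switching, can be sketched as a round-robin scheduler: one "CPU" runs short slices of each task in turn, so progress merely looks simultaneous. (A toy illustration of mine, not any real OS scheduler.)

```python
from collections import deque

def round_robin(tasks, quantum=1):
    """Interleave tasks by running each for one short time slice at a time.

    `tasks` maps a task name to its remaining work units. The single
    'CPU' only ever runs one task per step; rapid switching is what
    passes for multitasking.
    """
    queue = deque(tasks.items())
    trace = []                               # which task ran in each slice
    while queue:
        name, remaining = queue.popleft()
        trace.append(name)                   # run one slice of this task
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back of the line
    return trace

print(round_robin({"email": 2, "music": 3, "game": 1}))
# -> ['email', 'music', 'game', 'email', 'music', 'music']
```

The trace shows only one task executing at any instant, which is the poster's point: the concurrency is an artifact of speed, not genuine parallelism.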
The human brain itself is so complicated that the only thing really close to our neural network is, oddly enough, the internet.
We have barely scratched the surface of understanding where our own sentience comes from and how our own minds interact with reality, let alone creating another form of intelligence.
With that said, if it did occur, I bet it would be by accident and we'd have a tough time replicating it.
Personally, I'm more interested in the prospect of "uploading" human minds into machines... eternal life as a program. Sounds good to me.
But computers seem more or less incapable of pattern recognition at anything approaching human capability. For whatever reason (I'm a history and philosophy major, so I don't really understand why, sorry), computer languages are like human ones in that there are grey areas in meaning and syntax, and this makes creating a compiler program a pain in the ****, because you need to find a way to tell the computer how to deal with things that frankly aren't a 0 or a 1. This is also why natural-language translator programs still don't really work at all, despite people having worked on them since the seventies, at least.
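The ambiguity point can be made concrete even in arithmetic: a tiny expression has more than one structurally plausible reading, and it is the language definition, not the machine, that has to pick one. A toy example of my own:

```python
# "1 + 2 * 3" has two plausible parse trees; a grammar must choose one,
# which is exactly the kind of "grey area" a compiler writer has to
# legislate away with precedence rules.
left_to_right = (1 + 2) * 3   # strict left-to-right reading
by_precedence = 1 + (2 * 3)   # the reading most languages define

print(left_to_right, by_precedence)  # -> 9 7
```

Programming languages survive because their designers resolve every such grey area up front; natural languages never got that memo, which is one way to see why machine translation is so much harder.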
So I think the true question here is: which is more important to being self-aware, or intelligent for that matter: raw computing power, or the ability to navigate the grey areas that are everywhere in the real world?
In my personal opinion, it's the second one. There will always be a divide between humans and computers because of that faculty, which allows us to live in society, have tastes in things like music or art, what foods we enjoy, what we find attractive in other people, what clothes we like to wear, etc. In short, everything that we define our individuality with seems to be based in that grey area, and therefore computers will never be fully equal or superior to humans.
They're useful tools, and having faster, better ones will surely be useful, but to me, they will never be more than a tool.