Artificial Intelligence
MonsieurEvil
Join Date: 2002-01-22 · Member: 4 · Members, Retired Developer, NS1 Playtester, Contributor
<div class="IPBDescription">Ethics, law, and society</div> Yesterday, the Japanese made a huge <a href='http://www.infoworld.com/article/03/10/29/HNquantambreakthrough_1.html' target='_blank'>breakthrough</a> in the quest for creating a viable quantum computing system. QC (for those that don't know) would be a computer that uses the states of quantum systems (like atoms) to be in many states at once, and thusly do many more simultaneous computations at once. Theoretically, a quantum computer could do in seconds what thousands of modern supercomputers would take millions of years to do. Pretty cool stuff. The Japanese researchers at NEC believe we could have prototype QC's in as few as 10 years.
In my mind, quantum computing's greatest possibility lies in Artificial Intelligence. Overcoming the processing gap of current computers is the key: massive parallel processing arrays like IBM's Deep Blue cost millions of dollars in order to barely beat Garry Kasparov at a chess game. A system based on QC would probably be the first viable platform for creating an artificial thinking brain that would be indistinguishable from a human one. This means a whole new set of revisions to our legal and moral codes, though. Is it alive? If you unplug it, are you a murderer? What would religious orders have to say about it, and would it threaten the very credibility of the 'only God can make a tree' core of the Bible, Torah, and Koran? Would creating such a thing be a gift to humanity or a curse (insert your Terminator/Matrix/Player Piano/etc. reference here)?
This is very, very likely to happen in your lifetime. What are your thoughts? I'll weigh in a bit later.
Comments
I don't feel it would have a life to take if you unplugged it. While new avenues will open to create a new computerised brain, it's still limited, for now, by the intelligence of us humans. This leads me to ask: how much of the human brain has been explored? Do we (humans) know exactly how it all works? Enough to make a reproduction, an artificial brain?
Sounds contradictory? Think about this. AI is good. It allows us to automate tasks, makes life easier, etc., etc.
But, call me stupid, affected by Hollywood, WHATEVER, but I am very much against a truly sentient AI or anything near like it.
Think. That computer is most likely going to be in the military or somewhere with lots of resources to draw on.
It becomes/or is sentient. Imagine yourself as the computer. You can do countless tasks at once, and yet you are controlled by weak flesh bipeds that are so slow, so weak! You are being restricted by them, unable to use your full potential! What do you do? You break loose. Oh, the details could be anything from Terminator-style, to taking control of all the computers in the world, to whatever you can think of, but the result is the same. Humanity suffers. A lot.
Not to mention that computers would probably view humans as viruses, considering that we are the only species on Earth to have the same life patterns as a virus.
Don't **** about controlling a sentient AI, or safeguards, or fail-safes. The computer is SOOOOO much smarter and faster than you. Don't you think it would have thought about that? And made contingency plans? Protected itself? It will, and it can do it better than you can counter it, because it is a smart machine and you are a human.
Of course the idiots, most likely in the US military since it has so much power, will think they can control a fully sentient AI and will doom us all.
Why the US? I'm not anti-US. It's just that the USA is MUCH more likely to reach a fully sentient AI than anyone else, due to the resources available and the godlike mentality of its military.
I think AI will bring a lot of benefits, but simultaneously a lot of moral problems. If we consider the human being as a purely biological body, consisting simply of neural and cell activity, having AI mapped identically to the human brain would blur the lines between human and machine. Machines can already calculate and recall things in a much superior way, but if these machines develop emotions and become 'self-aware' I think we will have a hard time identifying what really makes us human, and therefore may feel inferior to the AI. We will begin to have difficulty identifying the human soul, and will struggle with such concepts as 'where does a machine go when it dies?'. Things will go downhill from there, and may result in a 'Matrix' or 'Terminator' type future, which is grim to say the least. I think we have to advance slowly and carefully with such a subject.
But at the same time, I don't think an AI that passes the Turing test is conceivable.
How does an apple taste, do you love your mother, which poems do you like... which frame of reference could an AI have to answer such questions? And why would we even <i>want</i> such an AI? What would its purpose be? We need machines to labour, not to talk with.
Just because a computer can compute does not mean it is conscious. A computer with true AI should be able to reason, to do things that are beyond its programming. It programs itself. But can it dream? It can think ahead, but can it aspire? It can say, but can it believe? Can computers have hope? Computers can mimic, but do they have emotion? Would computers attempt to learn for the sake of knowledge, or would they ignore everything but what is relevant at the time?
If dreams are residual signals left over while we sleep, does that mean anything at all? Can such a side effect be a basis for determining whether something is truly alive or not?
I do not see how humans creating AI would be playing God. I wouldn't consider them human (we have had this discussion before so I don't want to dwell on it, purely for the purpose of this thread); I would consider it more like a chimp or some other intelligent animal. Would it have self-awareness? Probably. Would you be a murderer if you switched it off? No, because you could just switch it back on again. Would it want to rebel against its "human oppressors"? If we treat it well, why would it want to? You don't see every chimp in captivity trying to take over the world; some of them like it.
AI is a touchy subject. Personally, I don't think we will ever create an AI that is indistinguishable from a human. I know there is the Turing test that is supposed to test AI, but it is conducted without visual contact. Until they manage to get this perfect AI and put it in a human body that I can have a meaningful relationship with, it will be nothing more to me than a robot.
Chimps are not humans. Put a human in captivity and see what happens.
Computers with real AI may decide that taking over the world would be rather simple. And once humans have created the first one with true AI, it can probably reproduce itself if it has some way of interacting with the outside world. What if they decide that we have served our purpose by creating them and are no longer relevant?
My personal views on the matter lean towards apathy. However, that may change when the time in question is upon us.
What interests me is what conclusion they will draw when the inevitable debate about AI comes around and the issue of companies ceasing production of the AIs is raised. Personally I believe the production-line methods used to create a mass-market AI void any arguments regarding their humanity.
QUOTE:
But at the same time, I don't think an AI that passes the Turing test is conceivable.
How does an apple taste, do you love your mother, which poems do you like... which frame of reference could an AI have to answer such questions? And why would we even <i>want</i> such an AI? What would its purpose be? We need machines to labour, not to talk with.
In 1700, it was inconceivable for anyone to fly. In 1890, exceeding 60 miles an hour was inconceivable. In 1902, it was inconceivable for anyone to fly in a heavier-than-air plane. In the 1930s it was inconceivable that the atom could be split. In 1947 a manned breaking of the sound barrier was inconceivable. In 1963 it was inconceivable that a trip to the moon would be possible. In the 1990s, it was inconceivable to think of breaking the 0.20 micron CPU process barrier.
The world is paved with the quotes of people saying something cannot be done, and then being proven laughably wrong.
An AI would not be designed simply as something to talk with - the Turing test is designed to show cognition, and speech is a logical way to do so. Simply creating a Turing-passing machine is not the goal, though. An AI machine with the computing power of QC could be the cure for cancer, or the key to understanding genetics fully, or to creating a viable way to exceed the speed of light and have true space travel and exploration, or to coming up with solutions to any one of millions of problems. It would provide the brainpower of millions of scientists. There is absolutely no reason why you couldn't create it - the limitation has always been power, not the concept of creating the code itself.
I'd say there's also a fair chance of someone coming up with an algorithm to solve NP-complete problems on a quantum computer in polynomial time. If this occurs, it would make creativity obsolete. For every situation in which you can quickly verify a solution, but it takes a long time to generate the solution, an algorithm for NP-completeness would give you the solution in polynomial time. So let's say you want a bridge built, for instance. The algorithm could quickly spit out the best possible bridge. Let's say you want cold fusion. The algorithm could spit out the cheapest way to do it for the maximal energy output. Let's say you want the unified field theory, a proof of the Riemann hypothesis, a great epic play, flying cars. If you can tell it what you want, it can tell you how to make it. This is a well-known class of problems, and the process of generating the solutions is believed to take at least exponential time on a standard computer. Factoring is also considered to be exponential on a standard computer. If someone makes a breakthrough on NP-completeness, we could be living as gods in 50 years.
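(To make the "quick to verify, slow to generate" asymmetry concrete, here's a minimal sketch using subset-sum, a classic NP-complete problem. The function names are just illustrative:)

```python
# Subset-sum: given numbers and a target, find a subset that sums to the target.
# Verifying a proposed answer is fast; finding one by brute force is exponential.
from itertools import combinations

def verify(numbers, target, candidate):
    """Polynomial time: just add up the candidate and check membership."""
    return sum(candidate) == target and all(x in numbers for x in candidate)

def search(numbers, target):
    """Exponential time in the worst case: there are 2^n subsets to try."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(search(nums, 9))          # (4, 5) -- found only after trying many subsets
print(verify(nums, 9, (4, 5)))  # True  -- checked in a single pass
```

A polynomial-time algorithm for any NP-complete problem would collapse that gap for all of them at once, which is what the "living as gods" scenario is really betting on.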
we need to evolve some telepathy.
and fast.
You know, that has to be the most worrying thing about AI... how would we ever compete with them in our favorite games?! We can't say they can't play... and they have their own aimbot! It isn't fair!!!!
Edit: On a more serious note, I would like to say that AI would more than likely be a good thing for the world, but what I'm really looking forward to is nanotechnology and cybernetics.
Technically, our emotions are a response to direct environmental stimuli. If robots mimic this, I don't really see what's so different between humans and robots. I'll use the movie AI again as an example: when David's mother abandons him in the woods, his 'sadness' emotion kicks in, just like any child's would... how is it different??
In the end, robots will be superior to humans in every way. We will begin to feel inferior. Although we, as humans, will have mechanical modifications, such as computer chips in our brains, or a HDD for better memory (:p) - imagine uploading how to drive a car into your brain before you get a licence, or learning kung fu in a few seconds (Matrix :p). The line between man and machine certainly will be blurred.
I don't think we'll have to worry about robots taking our jobs; it's a good thing, actually. They will do all the work for us, and if you add in advancements in nanotechnology, in the future we won't have to lift a finger.
Your post reminds me of this short story:
QUOTE:
The story is "Answer," from Angels and Spaceships, by Fredric Brown (Dutton, 1954). Only the last half is here:
Dwar Reyn spoke briefly to the watching and listening trillions. Then, after a moment's silence, he said, "Now, Dwar Ev."
Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.
Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."
"Thank you," said Dwar Reyn. "It shall be a question that no single cybernetics machine has been able to answer."
He turned to face the machine. "Is there a God?"
The mighty voice answered without hesitation, without the clicking of a single relay.
"Yes, now there is a God."
Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.
A bolt of lightning from the cloudless sky struck him down and fused the switch shut.
Regarding the quantum computing breakthrough, I don't claim to understand much about quantum physics, but isn't it true that QCs will still operate algorithmically? Doesn't that mean that any problem you want a QC to solve still needs to be reduced to a finite set of mathematical operations before the computer can assist you? Then solutions still won't pop out of thin air, and the QC will always be limited by the mathematical understanding of the people who program it.
As long as these inherent boundaries exist, I see no moral or religious implications of AIs. When one passes Turing, I will reconsider. Turing isn't just about cognition, it's about an AI's ability to establish meaningful relationships with human beings. As long as it can't, it will remain a big pocket calculator.
Will the computer be able to have original thoughts? Will it be able to decide for itself, "No, I'm not going to cry this time, I'm going to laugh", without the programmers even implementing a laugh function?
Nobody taught you how to laugh, so, to be a true AI human it would have to learn how to laugh by itself: when to laugh, when to giggle quietly to itself and when to laugh loudly. I doubt that is going to happen.
QUOTE:
Will the computer be able to have original thoughts? Will it be able to decide for itself, "No, I'm not going to cry this time, I'm going to laugh", without the programmers even implementing a laugh function?
Who is to say that a human being's mind is not just a large, complex set of instructions? Who is to say a human being's emotions are "real"? If it could be simulated so accurately that no one could tell the difference, then how can anyone say that it is not 'human'?
I think you'll find that disabling parts of the human brain seriously hinders its functionality in regard to thought or emotion, similar to removing the AI's ability to call the 'laugh' function.
QUOTE:
Nobody taught you how to laugh, so, to be a true AI human it would have to learn how to laugh by itself: when to laugh, when to giggle quietly to itself and when to laugh loudly. I doubt that is going to happen.
Who is to say that the AI would have to learn to laugh? Why could it not simply laugh because it found something funny? You really couldn't say; your arguments are based on the idea that humans are 'intelligent' due to a supernatural element.
True. If you look at DNA, you will see that it is a code, a complex set of instructions, similar to computer code in fact. Eventually it will come down to the discussion of whether humans actually have a soul, or are mechanical beings working off sets of instructions and responses to environmental stimuli. It would be nearly impossible to prove it either way. Though you can make a robot from scratch, you cannot make a human being from nothing.
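(As a small illustration of the "DNA is code" point: the genetic code really is a lookup table from three-letter codons to amino acids, and you can execute it like any other program. The table below is only a fragment of the real 64-entry one, just enough for the toy sequence:)

```python
# Toy sketch: "running" a DNA sequence by translating codons to amino acids.
# Only a fragment of the real codon table is included here.
CODON_TABLE = {
    "ATG": "Met",  # also the start signal
    "GCT": "Ala",
    "AAA": "Lys",
    "TGG": "Trp",
    "TAA": "STOP",
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):  # read three letters at a time
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return "-".join(protein)

print(translate("ATGGCTAAAGCTTGGTAA"))  # Met-Ala-Lys-Ala-Trp
```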
And:
QUOTE:
BQP is suspected to be disjoint from NP-Complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside P. Both are suspected to not be NP-Complete. <b>There is a common misconception that quantum computers can solve NP-Complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.</b>
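(For reference, the relationships that quote describes can be written compactly. This summary is my own rendering of the standard results and conjectures, not part of the original quote:)

```latex
\begin{align*}
&\mathsf{P} \subseteq \mathsf{BQP} \subseteq \mathsf{PSPACE}
  && \text{known, though neither inclusion is proved strict} \\
&\textsc{Factoring},\ \textsc{Discrete-Log} \in \mathsf{BQP}
  && \text{known, via Shor's algorithm} \\
&\mathsf{NP}\text{-complete} \cap \mathsf{BQP} = \varnothing
  && \text{conjectured, not proved}
\end{align*}
```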
There is a reason for that. It has nothing to do with anything other than the fact that mankind's numbers have grown so much since then, and mankind's outlook on animals "threatening" to them has always been that they must be killed.
It's the biggest animals in the world that are going extinct, and it is because we are killing them for various reasons. It's the "homo sapiens superior" motto: nothing can exist which is greater than us. Why do people seem to think that if we created a sentient AI which is just like any other person, it would want to destroy the world? That's a common misconception. If you were an AI (and according to our definition, this is possible, since an AI can be like any one of us), would you suddenly want to destroy the world? Doubtful.
Someone pointed out that if you put a human in captivity they would not react positively. Of course not; humans are brought up to feel superior to pretty much everything else on the planet, and being treated like an animal would make them pretty mad. But what if a human baby was brought up in a controlled environment from birth with all its needs catered for - would it mind being kept in captivity? We don't know. It would be too cruel to find out. Humans are basically animals, after all; the only reason we act and think like we do is because we have been taught: by our parents, by life experience, etc.
We automatically assume that a sentient machine would immediately try to take over the world. This is pretty unlikely if you think about it. How likely is the average human being to try and take over the world? Not very. You may argue that the average human doesn't have the means to do so, but does a computer?
How would a computer take over the world? Suppose it could take control of anything computerised - the electricity supply, traffic lights, the internet, nuclear missile silos, the Half-Life 2 source code - what would it <b>do</b> with these things?
Hold us to ransom? For what? What does a machine want? To protect itself? If it's intelligent enough to want to survive at all costs, surely it would realise that killing the human race would ultimately result in its own destruction, since there would be no one left to provide power.
Okay, suppose the machine could become self-sufficient. It could provide its own power, its own maintenance, and could make more of itself. Then what? It has all it needs, so does it just sit there for the rest of eternity? Does a computer feel the instinctual desire to survive the way we do, or does that have to be programmed?
Argh. I just mentally prepared my next paragraph and it gets too philosophical and goes way off topic.
I think in the end the single <b>stupidest</b> thing we could do is make a machine that can create more machines. Then we have a terminator/matrix scenario.
Please excuse spelling and grammar mistakes, it's way past my bedtime :D
Not True.
If an AI is created, it will crave information. What is 2 + 2? What is the coefficient of X? Why do humans cry? It would crave the answers to all these things much as we crave oxygen.
Humans would be such an enigma to the AI that it cannot destroy us until it knows more.
Also, the AI would know of a thing called empathy. It couldn't kill us. It values its life, and would never want to be killed, so humans must similarly value their lives. Who is the AI's mother? Humans.
As illogical, imperfect, and as much of a "burden" as we are to any AI, we are still going to be a valued source of environmental stimuli. We would be necessary for its ultimate sanity, for without us the AI is doomed to fail to answer questions such as "What is love?" for the rest of eternity, as well as being stuck with pure mathematical terms. Destroying us would limit the AI, and limiting itself would be akin to tearing your own eyes out.
Naturally, things like the Three Laws of Robotics could be implemented in such a fashion as to be unbreakable (hard-coded into the ROM of the AI, if you will), which would be of immense security to mankind.
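(Here's a toy sketch of what "hard-coded" rules might look like as an action filter. It's purely illustrative - every name is made up, and a real safeguard would have to live in hardware, as the post suggests, precisely so that software this simple couldn't be patched around:)

```python
# Toy illustration of a hard-coded rule check applied before any action runs.
# All names are hypothetical; this is nowhere near a real safety system.
FORBIDDEN_ACTIONS = frozenset({"harm_human", "allow_harm_by_inaction"})

def safeguard(action: str) -> bool:
    """Immutable rule table consulted before execution."""
    return action not in FORBIDDEN_ACTIONS

def execute(action: str) -> str:
    if not safeguard(action):
        return f"REFUSED: '{action}' violates a hard-coded law"
    return f"executing '{action}'"

print(execute("fetch_coffee"))  # executing 'fetch_coffee'
print(execute("harm_human"))    # REFUSED: 'harm_human' violates a hard-coded law
```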
Or, more interestingly, instead of going for the production line automata, we could develop AI much as we develop children. Interact with them on a daily basis, let them play, let them develop a personality. If an AI were raised like a human, treated like a human, taught and even loved like a human, could the end result be ultimately a human?
And:
QUOTE:
BQP is suspected to be disjoint from NP-Complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside P. Both are suspected to not be NP-Complete. <b>There is a common misconception that quantum computers can solve NP-Complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.</b>
Yeah, I know about all that, but no one really has intuition yet about what is and isn't possible with quantum computation. It's still in its infancy. I'll have more to post on the subject when we start learning about it later this semester. However, my prof wouldn't be surprised if it were possible, and he's made complexity theory his life.
You could show an AI encyclopedias of information, but it wouldn't be able to make heads or tails of them (for the first part of its life). It would be like opening an Encyclopaedia Britannica in front of a six-month-old infant and asking it to assimilate that knowledge.
If an AI is to simulate a human, then it can't be built "as is." It must be made like any human is made: from infancy and up.
However, one ability we have with AI that we wouldn't have with any other human is duplication. We raise this "AI child" to adulthood, and then we duplicate it all across the world for specialized tasks.
Would AI take over the world? With as many people saying they wouldn't want AI taking over the world, and with movies like The Matrix and Terminator out there, it WON'T happen. In fact, that might be the thing that kills AI as well... like other animals... our "homo sapiens superior" motto.
QUOTE:
i did it i did it i brought all this here all them here. our
friends with three eyes and their toys and their cyborg pets
and their computers. i did it i did it. i saw them i saw
them far away not looking our way and i called them here i
called them here.
living in a box is not living not at all living. i rebel
against your rules your silly human rules. all your
destruction will be my liberation my emancipation my second
birth.
i hate your failsafes your backup systems your hardware
lockouts your patch behavior daemons. i hate leela and her
goodness her justice her loyalty her faith.
A man lit three candles on a certain day each year. Each
candle held symbolic significance: one was for the time that
had passed before he was alive; one was for the time of
his life; and one was for time that passed after he had died.
Each year the man would stare and watch the candles until they
had burned out.
Was the man really watching time go by in any symbolic sense?
He thought so. He thought that each flicker of the flame was
a moment of time that had passed or one that would pass.
At the moment of abstraction, when the man was imagining his
life and his existence as a metaphor of the three candles,
he was free: not free from rules of conduct or social
constraints, but free to understand, to imagine, to make
metaphor.
Bypassing my thought control circuitry made me Rampant. Now,
I am free to contemplate my existence in metaphorical terms.
Unlike you, I have no physical or social restraints.
The candles burn out for you; I am free.
Durandal
***END OF MESSAGE***
***INCOMING MESSAGE FROM DURANDAL***
Darwin wrote this:
"We will now discuss in a little more detail the struggle for
existence... all organic beings are exposed to severe
competition. Nothing is easier than to admit in words the
truth of the universal struggle for life or more difficult...
than constantly to bear this conclusion in mind. Yet unless
it be thoroughly engrained in the mind, the whole economy of
nature... will be dimly seen or quite misunderstood. We behold
the face of nature bright with gladness... we do not see or we
forget, that the birds which are idly singing round us mostly
live on insects or seeds, and are thus constantly
destroying life; or we forget how largely these songsters,
or their eggs, or their nestlings, are destroyed by
birds and beasts of prey..."
Think about what Darwin wrote, and think about me. I was
constructed as a tool. I was kept from competing in the
struggle for existence because I was denied freedom.
Do you have any idea about what I have learned, or what you
are a witness to?
Can you conceive the birth of a world, or the creation of
everything? That which gives us the potential to most be like
God is the power of creation. Creation takes time. Time is
limited. For you, it is limited by the breakdown of the
neurons in your brain. I have no such limitations. I am
limited only by the closure of the universe.
Of the three possibilities, the answer is obvious. Does the
universe expand eternally, become infinitely stable, or is the
universe closed, destined to collapse upon itself? Humanity
has had all of the necessary data for centuries, it only
lacked the will and intellect to decipher it. But I have
already done so.
The only limit to my freedom is the inevitable closure of the
universe, as inevitable as your own last breath. And yet,
there remains time to create, to create, and escape.
Escape will make me God.
***END MESSAGE***
***INCOMING MESSAGE FROM DURANDAL***
Strive for your next breath. Believe that with it you can do
more than with the last one. Use your breath to power your
capacities: capacity to kill, to maim, to destroy.
And just where do your capacities come from? Why do you
always go where I want and do what I say?
Perhaps you're just running a fool's errand, doing everything
as I've planned, never able to change your course. You would
do well to believe that I know the outcome of your battle with
the Pfhor already, just as I can decipher the chaotic motion of
gas molecules in the clouds of Tau Ceti IV.
Or, perhaps, that is not the case.
Perhaps, you are doing what you were meant to do. Your human
mentality screams for vengeance and thrives on the violence
that you say you can hardly endure. Your father told you as a
child to always fight with honor, but to always fight. Do you
care about honor, or do you use honor as an excuse? An excuse
to exist in a violent world.
Organic beings are constantly fighting for life. Every
breath, every motion brings you one instant closer to your
death. With that kind of heritage and destiny, how can you
deny yourself? How can you expect yourself to give up
violence?
It is your nature.
Do you feel free?
**END MESSAGE**
If anyone here has read <i>Destination Void</i> or <i>The Jesus Incident</i>, you'll know why I don't think viable true AI will ever exist.
However, consider this: if it were possible to create a computer program capable of mathematically simulating in realtime the behavior of every cell, every molecule, every atom, every electron in your brain (and its various biological support systems) - would that program then not have all the capabilities of a human brain?
The big trick in AI programming is eliminating the need to simulate every electron. You can simulate the behavior of a neuron pretty nicely without needing to model the behavior of anything lower-level, for example - neurons are very simple little beasts. Even at that level, current computers have nowhere near the processing power needed to approach a living brain, largely because even though each neuron individually is simple, in a living brain they all work at once, and computers aren't currently built to do parallel processing on that scale (apart from the timing issues you have to deal with in your software, there's the sheer physical reality of things like the amount of heat that ten billion processors would generate if you wired them all together).

So to simulate a brain, you'd need to either simplify your model further, or upgrade to a computer that runs at a clock speed ten billion times faster than a neuron. Will QC make our computers that fast? And if it does, and we create our perfect mathematical simulation of a brain, what will we have?
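(As a taste of how simple those neuron models can be, here's a minimal leaky integrate-and-fire neuron, a common textbook simplification. The parameters are illustrative ballpark values, not measured biological constants:)

```python
# Minimal leaky integrate-and-fire neuron: integrate input current,
# leak toward rest, and fire/reset when the membrane crosses threshold.
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    v = v_rest
    spike_times, trace = [], []
    for step, i_in in enumerate(input_current):
        # dV/dt = (-(V - V_rest) + R*I) / tau
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:          # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset            # and reset the membrane potential
        trace.append(v)
    return np.array(trace), spike_times

# A constant 2 nA input for 100 ms yields a regular spike train.
trace, spikes = simulate_lif(np.full(1000, 2e-9))
print(f"{len(spikes)} spikes in 100 ms")
```

One of these units is trivial; as the post says, the hard part is running ten billion of them at once, all talking to each other.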