
That's the whole point: it would only 'feel' it because someone programmed it to do so. As humans, we know when we are happy or angry; we know what it feels like. We don't rely on a computer programme to tell us what we should be feeling in a given situation.

I disagree with this in that our genes and evolutionary history are just that. Codes and programming which make us behave in a certain way to increase the chance of those genes being successfully reproduced. (Can you tell I have read a few Dawkins books?? :D)

BUT I agree with your general argument that the difference between AI and I is that with AI there is a 'maker'; with I there's only evolution.

EDIT - should have read further down the thread first!

Edited by Moonshane

Quote:

There's no evolutionary requirement for intelligence

Surely you don't mean that? An ability to solve problems is usually seen as quite a good evolutionary advantage.

I agree, but in a primitive world with creatures that are larger, faster and scarier, intelligence isn't terribly important in the here and now.

i.e. if you're faced with large, powerful creatures that will eat you, intelligence (which is a longer-term game) isn't nearly so important as the ability either to run faster (to get away) or to have more bone and muscle (so you can fight on equal terms).

I know humans developed intelligence (though watching TV makes me doubt that :D ) but it's always seemed to me to be freaky because, as I said, when you're in a world of the primitive (nature red in tooth and claw), intelligence is not the obvious way to evolve. If it were, we would be down the pub with other species like large cats, because we wouldn't be the sole species with it if it were a requirement.


Yup, I agree that human intelligence has meant no need for bigger claws (we have atomic bombs) but I would want to come to Astro Baby's defence, none the less. There does seem to be a surplus capacity in human thinking. We think about things that, on the face of it, we don't need to think about. But, then again, I live with a dog, a very fine dog, and I sometimes have the strong impression that she, too, is 'off on one.' I might be quite wrong and anthropomorphising but I do watch her closely. I think she's subtle. She has a brain. The thing I'm tapping at the moment does not. It's a thing.

Olly


Very enjoyable thread. I'm surprised that no-one has mentioned the recent creation of life designed by man that was on the news a day or two ago.

Could it be that we will eventually become the artificial intelligence, by creating a lifeform that can bridge the human mind with our electronic world? Gone will be the cell phone and iPod when we can instantly communicate or download web pages to our visual cortex from the existing satellite system.

To quote a phrase "The possibilities are endless".


Olly,

I agree with you. I believe that my computer is not conscious. However, like you I believe that my dog and two cats are definitely conscious....well I 'believe' they are based on observation. I say 'believe' because I can't prove it because I don't have a hard definition of it.....well not one that actually relates to intelligence anyway.

I think if we accept that something (a dog) is intelligent then it has to be conscious. For that reason I don't think a computer is intelligent, because it is not conscious. It is a machine performing preprogrammed operations....even where genetic algorithms are concerned....and even when it does things the programmer didn't expect. I can say that the program is intelligent....but here I am only using the word very loosely and actually mean that the programmer was clever....which is different.
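To make the genetic-algorithm point concrete, here is a minimal, hypothetical sketch (all names are my own, not from any particular library) of a GA that "solves" a toy problem - maximising the number of 1-bits in a string - by nothing more than mutation and selection. The programmer never writes down the answer, yet every step is a preprogrammed operation:

```python
import random

def fitness(bits):
    # Count of 1-bits: the "problem" the GA solves without being told the answer.
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def evolve(pop_size=30, length=20, generations=100):
    # Random starting population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                      # selection
        pop = survivors + [mutate(random.choice(survivors))   # reproduction
                           for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # near-optimal after a few dozen generations
```

The output can surprise the programmer in its particulars, but nothing here steps outside the operations that were written down - which is exactly the distinction being drawn above.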

In the same way I don't believe a plant is intelligent or conscious. It reacts to light, gravity, etc....but it is not conscious, and therefore cannot be described as intelligent.

The rub is where do you draw the line (maybe at some emergent level of complexity) or is there a spectrum?

Actually I doubt computers are even at the stage of an amoeba - they are constructs. They can't replicate by themselves.

But computer viruses can. There are even computer viruses "in the wild" that deliberately modify themselves to avoid detection - some variants are of course too damaged to continue to function, but some are "successful" and have evolved anti-detection strategies that their authors did not envisage.

That does not make them intelligent, or alive. But it's a step along the way.
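The evolution of such viruses can be illustrated without writing anything remotely like real malware. Below is a toy, hypothetical model (entirely my own construction): short strings "replicate" with occasional random mutation, and a "scanner" deletes any copy still matching a fixed signature. Variants that escape the signature were never designed in; they simply survive:

```python
import random

SIGNATURE = "AAAA"  # the pattern a hypothetical scanner looks for

def replicate(code, rate=0.2):
    # Copying with an occasional random "mutation" of one character.
    chars = list(code)
    if random.random() < rate:
        chars[random.randrange(len(chars))] = random.choice("ABCD")
    return "".join(chars)

population = ["AAAA"] * 50
for generation in range(200):
    # Each copy replicates twice, then "detection" removes exact matches.
    population = [replicate(c) for c in population for _ in (0, 1)]
    population = [c for c in population if SIGNATURE not in c][:50]

# Whatever remains has, by blind variation and selection, evaded the signature.
print(len(population))
```

No string in this model "decides" anything, which is rather the point made a few posts further down: the appearance of deliberate evasion emerges from mutation plus a selection pressure.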

There does seem to be a surplus capacity in human thinking.

Olly, I agree: I don't think we evolved intelligence so that we could write music or make atom bombs. But there are many things in evolution that occur as a by-product of something else that actually did have an evolutionary reason.

So I guess we evolved larger brains because that enabled us to solve problems that kept us alive. Language is a huge advantage and requires a larger brain. This enables us to pass on knowledge from one individual to others, and down the generations, to a massively greater extent than any other mechanism. This is a real example of how a bigger brain will enable you to stay alive against the tooth and claw. The evidence is all around. Weedy people survived on the plains of Africa amongst lions....and even killed lions by working in groups and using spears etc, which requires language to teach each generation how to make them.

There are apes that are much stronger than us....but they are not able to go out and kill a lion.

So I wouldn't dismiss intelligence as not particularly important in our evolution when compared to big teeth or claws.

And this larger-capacity brain then gets used for other things....things that were not necessarily selected for in an evolutionary sense. And out comes music, art, etc.


Intelligence and consciousness might not be properties of the brain but properties of brains-as-a-group. I suggest that the crucial evolutionary innovation is (was) empathy: this means setting aside some neurons for simulating other beings/things. That also needs a mechanism for keeping track of which neurons are supposed to be simulating others and which are properly "me"; this is what we perceive as consciousness.

In the same way I don't believe a plant is intelligent or conscious. It reacts to light, gravity, etc....but it is not conscious, and therefore cannot be described as intelligent.

How would we know? Just because we can't relate to the plant on our level of understanding doesn't mean that a plant hasn't some consciousness of sorts. Perhaps I should say awareness instead. Ask a doctor where the bit is in us that makes us self-aware.

The nearest explanation I have heard is of a small part at the base of the brain (I'm not going to bother googling for the right name of the bit) that acts like a feedback loop, providing constant updates from the senses; in fact consciousness is just an illusion created by the constant feedback.

Language is a huge advantage

There is lots of evidence that apes have and use language. When provided with suitable aids (to overcome their inability to articulate human sounds) they can even learn to use human language pretty effectively. We find it hard to understand theirs - are we more stupid than they are?

Whales and dolphins - which are very different to primates in evolutionary terms - also appear to have language and use it for collaborative hunting as well as for purposes which we can only assume to be "social".

Contrary to the belief of 50+ years ago, modern evidence suggests very strongly that human ancestors developed bipedalism first, then improved tool-making skills, with "sophisticated" technologies such as control of fire, large brains and the fine control of breathing required for talking being late arrivals. Chimpanzees living in forests are fine; to survive in grasslands, standing upright would allow better visibility of predators. This would free the hands, allowing them to develop fine control rather than brute strength.... With basic technologies, the prolonged upbringing of children required by large brains would be possible rather than an impossible handicap, allowing "intelligence" to develop. Finally, speech allows expertise to be passed on to the next generation, accelerating the further development of social and technological skills.

But computer viruses can. There are even computer viruses "in the wild" that deliberately modify themselves to avoid detection - some variants are of course too damaged to continue to function, but some are "successful" and have evolved anti-detection strategies that their authors did not envisage.

That does not make them intelligent, or alive. But it's a step along the way.

Language again; the computer viruses 'deliberately modify themselves.'

No they don't. The 'they' in this sentence is chimeric. There is no 'they' to do the modifying. The modifying was designed by the programmer using a system to introduce delays and reactions with other programmes encountered. If I take a large Toby Jug and roll it down a mountain will it modify itself along the way? That would be a strange way of putting it, yet it is common to put it that way in speaking of computers.

It has been pointed out by linguists that in human language a verb requires a subject, giving rise to odd expressions like 'it is raining.' But there is no 'it.' When we say 'the sun is shining' we subtly predispose ourselves to anthropomorphism. Language pushes us that way because it evolved, I suppose, around our own actions and those of the people with whom we were trying to co-operate. This last point, perhaps, should be treated with suspicion in the absence of evidence. We often assume that the primary role of language is communication but that may not be the case. It may be more important to us as a medium in which to think.

Olly

the computer viruses 'deliberately modify themselves.'

Fact of the matter is, evolution happens. The DNA in living cells doesn't "deliberately modify itself" either, yet it gives rise to mutations, some of which produce organisms better able to survive when refracted through the prism of natural selection in whatever environment the organism happens to find itself.

We often assume that the primary role of language is communication but that may not be the case. It may be more important to us as a medium in which to think.

Advanced mathematics, or modern physics, tends to find human language a barrier rather than a help to forming or communicating ideas. Perhaps there is a link to the "number blindness" that some people suffer from here: it's important to realise that e.g. "three" is not just a modifier applied to an object ("three apples") but an abstract object in its own right. Some people never seem to grasp "threeness", a concept which is terribly hard to explain in anything approaching normal language.

Advanced mathematics, or modern physics, tends to find human language a barrier rather than a help to forming or communicating ideas.

I think when talking about language we need to think of it at a lower level. Language is a system for representing real and conceptual entities. Mathematics is just another representation system, and it's useful as it's very strictly defined, and does not leave much room for ambiguity like "normal" language does.

Touching on the brain size issue that others have mentioned, my understanding is that that is perhaps one of the side effects of other evolutionary changes. Standing upright, using tools and harnessing the power of fire allowed our ancestors to gather much greater quantities of food, and to release greater food value from it (by cooking). This surplus of available energy gave rise to the possibility of feeding a bigger brain. Our brains have a voracious appetite, so until there was a suitable food supply big brains were not possible.

Edited by samtheeagle
Mathematics is just another representation system

No it isn't. It's an abstract universe existing in parallel with but completely independent of the physical universe(s). "Threeness" exists as a mathematical construct whether or not there is a life form capable of understanding it.

We often assume that the primary role of language is communication but that may not be the case. It may be more important to us as a medium in which to think.

Good point. I have always 'talked to myself' out loud and thought nothing of it. I was watching Genius of Britain last night and it was mentioned that Hooke did the same, as though it was unusual.

So I asked the other half if she talked to herself, which she acknowledged. Hopefully we're not mad and we all do it. But is it a more recent thing, because each of us has more to think about now than in Hooke's time?


There is some really good stuff on this by Daniel Dennett ("Darwin's Dangerous Idea") and John Searle (not the rower, a different one). Dennett is the arch strong-AI proponent - his view of the Chinese Room (which relies on distinguishing between syntactic and semantic "understanding") is that it proves nothing other than that there is no difference between the way a machine "understands" (I use the word deliberately loosely) and the way a human understands. He also argues that a thermostat is no less "conscious" than a person and that what we think of as thoughts, imaginations etc are merely illusory - weird stuff, but it's hard to fault his philosophy, which is extremely rigorous. Searle is his arch enemy and there has been some pretty nasty/funny banter between the two of them. IIRC a New Zealand philosopher by the name of Kim Sterelny has written a whole book summarising the pros and cons of each line of argument. For those who are interested, Sterelny has also written a short book summarising the rival arguments of Richard Dawkins and Stephen J. Gould (so if you want to know what they're all saying but can't be a**ed to read 400 pages of small writing, Sterelny's books are a good place to start).

I think Dennett, Searle, Sterelny, Dawkins and Gould all have degrees (Dennett may even have more than one as he is a professor at MIT) but I could be wrong....:D


How about this then as a definition for intelligence/consciousness to split computers and their software from living forms.

The primary difference is predictability (or lack of it). My computer is predictable (even 'self'-modifying software viruses are predictable to their programmer); my cat isn't. I don't know what my cat will do under any given circumstances.

How's that for a definition?

Intelligence - it's interesting. Did you know the humble beaver split the scientific world in the 20s because of the then definition of sentient life? The definition was that any sentient life should use tools and communication purposefully. The beaver can meet that as a requirement, and it caused a split in thinking. Did that imply beavers were sentient/intelligent, or merely that the test itself was at fault? The polemics of the argument back then were something frightful, and perhaps an indication of things to come.


Well, I must admit, I hadn't heard of the "Chinese Room" before. But the argument tended to convince me that, for specific tasks, with sufficient rules, and considering only "I/O" etc., such might render machine and "being" indistinguishable. :D

How about this then as a definition for intelligence/consciousness to split computers and their software from living forms.

The primary difference is predictability (or lack of it). My computer is predictable (even 'self'-modifying software viruses are predictable to their programmer); my cat isn't. I don't know what my cat will do under any given circumstances.

How's that for a definition?

Works for me. I had been thinking along much the same lines while watching my own cat's "decision process" - whether to sit down on a chair, or not! I feel it would be a perverse machine that interrupted the task with an excursion upstairs... only to return unhesitatingly to the (original?) objective some minutes later. :(

I suppose one might argue Cat was randomly diverted from its original task, or that an external circumstance had changed? But the level of complexity needed to simulate interruption and RETURN to a (projected?) task must render sentience the most likely option? AI machines are (have been!) capable of many things but, for now, seem very... task-specific? ;)

:D

P.S. Perhaps (in slight disagreement with Mr. Data?) sapience is the unique human trait?

Edited by Macavity

Hi Astro-Baby,

That's a good stab at a definition...but I can see problems with it (as I think there will be with any definitions we come up with....unfortunately).

The problem is 'predictability'. Someone much cleverer than me could write a program that does one of two things depending upon the outcome of a quantum process that it measures....like which way a single photon goes through a beamsplitter. The outcome cannot be predicted....except in a statistical sense.....but the next move of your cat can be predicted in the same statistical sense as well.....though there are millions (of millions of millions etc) of things that your cat could do next. It's just a very complex equation with an unimaginably large number of input variables. And the output would be a set of likelihoods of a number of responses....not one definite answer.
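The beamsplitter idea can be sketched in a few lines. This is a toy model of my own (the function name is invented, and an ordinary pseudorandom generator stands in for genuine quantum randomness): each individual outcome is unpredictable, yet the statistics are entirely predictable:

```python
import random

def beamsplitter():
    # Stand-in for a 50/50 beamsplitter: no one can say which way
    # the next "photon" goes, only the long-run proportions.
    return random.choice(("transmitted", "reflected"))

outcomes = [beamsplitter() for _ in range(10_000)]
fraction = outcomes.count("transmitted") / len(outcomes)
print(round(fraction, 2))  # individual shots unpredictable; the fraction is close to 0.5
```

Which is the point above: "unpredictable in detail but predictable statistically" describes the program and, arguably, the cat alike.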

There's also a question of predictability versus whether a problem is calculable. Many chaotic systems are very simple but so sensitive to initial conditions that they become very difficult to predict and give the illusion of being unpredictable.
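The classic illustration of this sensitivity is the logistic map, x' = r·x·(1 - x): a one-line, fully deterministic rule that, at r = 4, amplifies a difference in the ninth decimal place until the two trajectories bear no resemblance to each other. A minimal sketch:

```python
def orbit(x, r=4.0, steps=60):
    # Iterate the logistic map, recording every value.
    xs = []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = orbit(0.400000000)
b = orbit(0.400000001)   # starting point differs by one part in a billion

# Early on the orbits are indistinguishable; a few dozen steps later
# they have completely diverged.
print(abs(a[0] - b[0]), max(abs(p - q) for p, q in zip(a[40:], b[40:])))
```

Nothing here is random: rerunning the program gives identical output. The unpredictability is purely practical, because no measurement of the starting point is ever exact.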

This gets at another question as well...does your cat (and do we) have free will? If so, then we are 'unpredictable'. If not, then we are predictable (albeit unimaginably hard to solve).

And this last question has an impact on how we tackle the original question of computers and intelligence and consciousness.

I don't profess to know the answer, but it's a wonderfully fascinating subject.

Edited by simon hicks
