Let's Make Robots!

Artificial Intelligence Questionnaire

Hi Guys,

I'm new to robots etc. I am currently doing a school project on artificial intelligence and wanted to do some research. I thought, why not go talk to the experts, so here I am. I have already done some background research and know the general definition, but part of the project requires me to gain information from a range of sources. This is a kind of questionnaire/interview to gain new sources from experts.

Could you please explain in your own way "What is Artificial Intelligence?" and "How can you prove when a robot has AI?". Please make the answers as detailed as possible.

My chosen project is to design and create a robot (e.g. a line follower) and discuss whether or not it possesses AI.

Also, could you suggest a robot (line follower or maze solver) which would be closest to AI, in order for me to raise a good discussion.

Thanks for the help!
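For readers following the discussion below, here is a minimal sketch, in Python and simulated rather than running on real hardware, of what the entire decision logic of a typical two-sensor line follower looks like. The function name, sensor readings and speed values are made up for illustration; the point is that the robot's whole behaviour is a fixed table mapping sensor readings to motor commands, which is worth keeping in mind when asking whether such a robot possesses AI.

```python
# A minimal, simulated sketch of a two-sensor line follower's control loop.
# All names and values here are hypothetical placeholders for whatever
# hardware API a real build would use; the robot's entire "behaviour"
# is this fixed table of stimulus -> response.

def decide(left_on_line: bool, right_on_line: bool) -> tuple[int, int]:
    """Map the two sensor readings to (left_speed, right_speed)."""
    if left_on_line and right_on_line:
        return (100, 100)   # line under both sensors: go straight
    if left_on_line:
        return (40, 100)    # drifting right: steer left
    if right_on_line:
        return (100, 40)    # drifting left: steer right
    return (60, -60)        # line lost: spin in place and search

# Quick check of the rule table without any hardware:
for reading in [(True, True), (True, False), (False, True), (False, False)]:
    print(reading, "->", decide(*reading))
```

A maze solver built on the classic left-hand wall-following rule is only slightly more elaborate, which is part of what makes either robot a good talking point for the project.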


hi " am not sure if i want to give my bots AI ,i'll give ai on the friday and it leaves me on the  saturday"  EDx is ran by uc berkeley i did the electronic coarse and had a brill time doingit,It's free you https://www.edx.org/courses/BerkeleyX/CS188.1x/2012_Fall/about and they are offering another chance to the circuit and electronics and you get a chance to read through "Foundations of Analog and Digital Electronic Circuits" https://www.edx.org/courses/MITx/6.002x/2012_Fall/about

Maybe it's worth deciding what makes a thing 'intelligent'. Most people assume humans are intelligent, but are non-human animals? Descartes believed in (mind-body) dualism, so some sources suggest that he viewed everything that wasn't human as a sort of mechanical construct that only showed things like pain because it was wired into them, and that it was nothing but an illusion; only humans felt the real thing because they had a 'soul'. From my understanding, philosophy and ethics have moved on from that view. So if, say, a mouse is intelligent because it seeks to survive long enough to reproduce - escaping from predators, seeking sustenance, showing signs of pain/happiness - could a human-constructed device that does those things through programming be as 'intelligent' as a mouse? And on a lower level, BEAM photovores seem to mimic many of the behaviours of insects, so could they be as 'intelligent' as a woodlouse? If the answer is yes, then could there ever be such a thing as cruelty to robots? What if we ever reverse-engineer the human brain, as this guy thinks will be done if he gets the funding, and put that into an electro-mechanical device - is it really intelligence, or just the illusion of it?

When you said, "Most people assume humans are intelligent..." I am glad you said, "most".  I for one do not assume humans are intelligent. I have spoken to many of them, so I know better.  (ha ha)

If a robot that is assembling parts on an assembly line in a car factory suddenly says to you, "Boss? My work is a little off today because I have been preoccupied. I had a really troubling dream last night, and can't get it out of my mind. Perhaps you can explain it to me. May I tell you about it?" then I would say you have an excellent candidate for humanlike intelligence.

Or perhaps, as in a case I used in one of my books, where the robots in a spaceship assembly plant play card games during their time off and can occasionally be observed cheating. In that case, of course, I wrote them to have humanlike intelligence. Can it happen in "real life"? That is an excellent question for which we do not yet have an answer.

While the simple Arduino, PICAXE, Propeller, BASIC Stamp, etc. controllers we use in everyday robot-building are not too "smart", the latest processors contain billions, or even trillions, of components on a single chip smaller than your thumbnail and very thin. Picture using a chip that was, say, an 11 cm x 16 cm oval and then stacking a couple dozen of these atop each other. We could have a "solid" brain with perhaps one quadrillion components that could still fit inside a reasonably human-sized skull on a robot, and that is with today's technology. It is not too far in the future to have technology that will allow a thousandfold increase on that. I have heard estimates that by 2030 to 2035 (given the same rate of improvement in computer technology) we will have computers that will be equal to the human brain's ability, and, again given the same rate of miniaturization and improvement, by 2050 we will have computers as smart as all humans who ever lived combined into one package. Will the increase in the ability to compute automatically mean intelligence? Absolutely not. The robot has to be granted the ability to rethink and rewrite its own programming. THEN you will have something REALLY dangerous.

Here is a news note on work on intelligent robots: http://www.guardian.co.uk/science/2003/aug/25/science.research

————————————————————————————————————————————————

 

————————————————————————————————————————————————

There are a plethora of related videos on YouTube, such as: http://www.youtube.com/watch?v=1HmZKd4Siic&feature=related

 

 and even fellow LMRian, RobotGrrl: http://www.youtube.com/watch?v=u-mjTGgCZiE&feature=related

Doesn't this consciousness allow the robot to effectively program itself to perform an action, though?

I never thought this post would get so many diverse responses. I am very grateful for the responses I've had despite being a newcomer. Wish me luck as I begin to dive into the controversial topic.

Like fourth-joint deep. It occurs to me, as we discuss the criteria for intelligence, that we may not have the right to say that we as a race possess it ourselves. (In other words, if "it takes one to know one", we're screwed.) After all, we as a race have created such questionable wonders as the neutron bomb, real estate market derivatives and the double bacon corn dog (or, equally strange, something called "Zoomba").


We have invented great things too, such as a communications system that allows anything we can express as information to be duplicated and shared at little or no cost, and satellites to support it which let anyone with access to the system see nearly any point on the planet from space. But we also occasionally decide to annihilate massive numbers of each other because they live across lines that are not visible to those satellites. If (as some have suggested here) one of the criteria for intelligence is learning from experience, then humanity has a way to go as a group.

Perhaps when and if we finally do (or have) create(ed) artificial intelligence, we won't (didn't) know it because we never discovered what real intelligence is anyway.

This does suggest a Douglas Adams-esque moment in our future when we finally receive a message from an extraterrestrial life form, which I now imagine going something like this: PEOPLE OF EARTH: WE ARE THE SUCHANDSUCH FROM THE PLANET WHATEVER IN THE YADAYADA GALAXY. WE HAVE BEEN WATCHING YOU FOR SOME TIME. OUR RACE'S ONLY GOAL IS TO SEEK OUT AND CONTACT OTHER INTELLIGENT LIFE FORMS, SO IF YOU ENCOUNTER ANY, PLEASE LET US KNOW. OTHERWISE, DON'T CALL US AT THIS NUMBER.

There is some truth hidden beyond your cynicism. You're right, we are great inventors. We invented the atomic bomb, hot dogs, the theory of relativity, Bohr's theory, petrol motors, music, etc., but one result of quantum theory is that we are not only outside observers. While we are observing an experiment (and only through an experiment can we gain knowledge), we are changing the conditions of the experiment - we suddenly become part of the experiment, and as we are part of the experiment, we are changing the result with our observation. Therefore we can never know the absolute truth. This is one deeply philosophical and painful result of quantum theory. OK, it's just another invention of the human race...

But I would say that... I'm a biologist.

Take DNA for example - it's essentially a four-part code (ATGC) [rather than digital 0/1], leading to three-part codes (amino acids) [letters], which when combined in the right order (proteins) [commands] can create interacting structures (cells) [programs] - these small structures can then be combined to create larger structures which can interact (organs) [large programs].
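As a toy illustration of that analogy (and only an illustration - just a handful of the 64 real codons are listed here, so this is nowhere near a full genetic code), a few lines of Python can "decode" a short coding-strand DNA string three letters at a time, much like bytes being decoded into instructions:

```python
# A toy sketch of the DNA-as-code analogy: a four-letter alphabet read
# three letters (one codon) at a time, each codon standing for one amino
# acid. Only a small subset of real codons is included for illustration.

CODON_TABLE = {
    "ATG": "Met",  # also the "start" signal
    "TTT": "Phe", "TTC": "Phe",
    "GGC": "Gly", "GGA": "Gly",
    "GCT": "Ala", "GCC": "Ala",
    "AAA": "Lys", "AAG": "Lys",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the coding strand three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCAAGTAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```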

So why mention DNA? Where do you think the human brain comes from?

Human intelligence comes from a twofold evolution - first genetic, then life experience - we call it nature and nurture.

The reason why DNA works is because it has evolved to integrate mistakes into its development - from the grassroots amino acid coding up to muscle development and brain tissue organisation. And it doesn't stop there: the brain is constantly reorganising itself based around a central pattern.

We like to think of ourselves as non-pattern-following entities, but frankly we couldn't be further from the truth - we're just so damn complicated that it can be difficult to see the patterns without focusing on the tiniest detail.

Even our celebrated consciousness is simply a REALLY big program doing exactly what it's supposed to.

The Turing test is the perfect example of our arrogance in this field - if it can fool us, it MUST be intelligent.

So - how to define artificial intelligence?

It's the insertion of a random element within an organised structure which allows evolution of the structure not based upon its original design.

 

i think... ;)

 

dom
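One way to read that definition is as a description of evolution by random variation plus selection, so here is a minimal sketch of it in Python. The bit string, the target pattern and the scoring rule are arbitrary toy choices; the point is only that the final structure is shaped by random mutation and selection rather than by its original design:

```python
import random

# An "organised structure" (here, a bit string) is repeatedly copied with
# one small random change, and the copy replaces the original whenever it
# scores at least as well. The target and fitness rule are toy choices.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

random.seed(1)
structure = [random.randint(0, 1) for _ in TARGET]

for generation in range(200):
    mutant = structure[:]
    i = random.randrange(len(mutant))
    mutant[i] ^= 1                      # the random element: flip one bit
    if fitness(mutant) >= fitness(structure):
        structure = mutant              # keep the change only if it helps

print(structure, "fitness:", fitness(structure), "/", len(TARGET))
```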

I agree with Lumi.

 

As far as Blue's idea that "Human intelligence comes from a twofold evolution - first genetic, then life experience - we call it nature and nurture", I have to say you are balancing on the fence around intelligence. True intelligence is not simply responses based upon observed phenomena, otherwise we would not have any theoretical scientific papers at all. We could not consider the possible existence of parallel worlds, for example, since this is not something we were able to observe and quantify. Intelligence is going beyond your programming. Anything less is merely following your learned responses - your program, as it were.

 Perhaps I can relate the two a wee bit differently. Biological DNA is more like a robot's schematic diagram. A schematic is a plan (layout) of how the robot will be built, while DNA is the plan for how organic creatures will be built.

After the creature has been "built" and "switched on" (is alive) it gets information from its sensors (ears, eyes, smell, taste or feel) just as the robot does from its sensors (IR sensors, UV, sonic, ultrasonic, pressure, radio, etc.)  This information is fed to the brain where it is interpreted and acted upon based on prior learning (i.e.  programming).

Where we move from pre-programmed logic to intelligent behaviour is in interpreting inputs or events that do not fit the prewritten program.

To me, in order to have artificial intelligence, the robot must be able to rewrite its own programming in order to adapt to new situations. A certain response may have worked in the past, but is not working in this case, so the robot tries doing something different, which it was never programmed to do (at least not in that way). It has the capacity to ignore its old programming and try something it never tried before, based on its knowledge of actions it can perform, even if it was never programmed to do that action for this sort of problem or situation.

If the program tells it "If you come to a wall or a drop-off, then back up," following that instruction is not intelligence (no matter how complex the mechanics of determining what is a wall or what is a drop-off).

If, however, it uses its knowledge of how big it is (and how big its wheels or legs are), and its sensors tell it there is a drop-off that is too big for it to cross, and it decides on its own not to try moving forward, then it is exhibiting intelligence. Or if it were at the top of a set of stairs and it tries to reach down to the next step but cannot without losing its balance, so it decides not to try it and goes elsewhere, without that being an option already preset into its programming. It changes its programming to include backing away from, or going around, any drop-offs that are too deep for it to reach, even though there was nothing in its original programming to tell it to do that.
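A very rough sketch of that idea in Python - with the situations, the actions and the simulated "world" all made up purely for illustration - might look like this: the robot starts with a fixed rule for each situation, but when that rule stops working it tries the other actions it knows and rewrites the rule with whichever one succeeds.

```python
import random

# Toy sketch: a rule table maps situations to actions. When the programmed
# response fails, the robot falls back to its other known actions and
# rewrites the rule with the one that works. Everything here is invented
# for illustration only.

ACTIONS = ["back_up", "turn_left", "turn_right", "go_around"]

rules = {"wall_ahead": "back_up", "drop_off": "back_up"}   # original program

def attempt(situation, action):
    # Stand-in for the real world: backing up no longer works at a drop-off.
    if situation == "drop_off" and action == "back_up":
        return False
    return True

def act(situation):
    action = rules[situation]
    if attempt(situation, action):
        return action
    # Programmed response failed: try the other known actions...
    for alternative in ACTIONS:
        if alternative != action and attempt(situation, alternative):
            rules[situation] = alternative   # ...and rewrite the rule
            return alternative
    return None

print(act("drop_off"))   # falls back to another action and updates the rule
print(rules)             # {'wall_ahead': 'back_up', 'drop_off': 'turn_left'}
```

Whether updating a lookup table like this really counts as "rewriting its own programming" in the sense described above is, of course, part of the debate.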

__________________________________________________________________

Some may argue this view by noting that we used the term "artificial". To me artificial in this context simply means real intelligence in an artificial entity, –a created electro-mechanical device. Some may think of it only as artificial meaning "not real" intelligence and therefore simulated (or virtual), an imitation of intelligence.  I do not deem that to be the meaning we should be considering as AI. AI has come to mean a machine that can actually think and reason beyond its programming, (just as we do).

 You can prove a robot has AI if it does something it was never told to do, but figured out all by itself.

__________________________________________________________________

  

On a related topic, there was mention of quantum physics, specifically the part of the theory known as the Copenhagen interpretation. In this there is the belief that things do not occur unless witnessed by an outside observer. This leads to some startling conclusions I am not ready to accept. For instance, if something cannot occur unless observed, then questions spring forth like, "Can a bear relieve himself in the woods if there is no one there to see it?", and the Copenhagen interpretation says, "No, it cannot happen." This may be amusing, but it leads to other more drastic revelations. Beyond asking where an electron is at any specific moment and stating that it is located at all possible locations until an observer observes it, the other implication is that it does not exist at all unless observed. What is true of one electron would be true for any electron, proton, positron, neutron, quark, hadron, and all other particles. The conclusion of the Copenhagen interpretation is ultimately that nothing exists unless observed. If nothing exists unless observed, then the observers do not exist either. If we go back to Philosophy 101, we remember the statement by René Descartes, "Cogito ergo sum" [Je pense donc je suis (French); I think, therefore I am (English)], and I must observe the rest of you in order for you to exist.

Luckily for you, I do not agree with the Copenhagen interpretation and so you are permitted to exist even if you have not been observed by me. (grins)

For those who wish to listen to it, here is Prof. Basil Hiley discussing some views of the Copenhagen theory. http://www.youtube.com/watch?v=9gFCj5PPEyw

And another one at: http://www.youtube.com/watch?v=wayQn0uVIvE&feature=related

 

 

I agree with this! I also believe AI only exists if the robot performs an intelligent/useful action not present in its programming. This therefore raises the question of whether AI exists or will ever exist, and whether this consciousness goes against the laws of physics.