Week 2

Human-Robot Interactions


Summaries

  • Week 2 > Video Lecture > Video 1
  • Week 2 > Video Lecture > Video 2
  • Week 2 > Video Lecture > Video 3
  • Week 2 > Video Lecture > Video 4
  • Week 2 > Video Lecture > Video 5
  • Week 2 > Video Lecture > Video 6
  • Week 2 > Video Lecture > Video 7
  • Week 2 > Video Lecture > Video 8
  • Week 2 > Video Lecture > Video 9
  • Week 2 > Video Lecture > Video 10

Week 2 > Video Lecture > Video 1

  • In this lecture I’m going to talk about human-robot interaction, and I’ll also explain how we can create a very human-like robot, an android, and how we can use it for studying human-robot interaction.
  • In the future, robots will help people in various ways in daily situations.
  • One robot is guiding people, and another robot is carrying baggage.
  • So the question is, why do we want to use a very human-like robot? In order to realize these robot societies, I have created various humanoids and androids, which are very human-like robots.
  • The upper-left robot is Robovie, developed at ATR, and the robot in the center is its commercial version.
  • The upper-right robots are androids, which are very human-like robots.
  • For these reasons, we are going to use more human-like robots once we have enough technology for mimicking humans.
  • We’re going to have robot societies in the near future.
  • So the question is, when are we going to have those robot societies? I think there is a possibility of having them within a couple of years, because of platform robots like this one.
  • Pepper is a personal robot, and it has almost all the functions required of an interactive robot.
  • These small robots are other personal robots, which can be used on a desktop.
  • So with these personal robots, there is a possibility that we could create a robot society within a couple of years, just as we have changed this world with personal computers and smartphones.
  • So then we’re going to use the robots for many purposes.
  • My guess is that we’re going to use robots for gaming first.
  • Then we’re going to use robots for educational purposes, like language training.
  • A robot is especially good for English or other language training for Japanese speakers.
  • If we have robots, a robot would be a good partner for conversation.
  • I believe personal robots will be good partners for language training.
  • Then we’re going to use those robots for many purposes, like in the public places, in stations, in department stores, and, of course, for elderly care.
  • The elderly want conversation partners, and many of the elderly hesitate to talk to people, but they don’t hesitate to talk to robots.
  • So personal robots are good for elderly care, too.
  • The bottleneck for personal robots was pattern recognition, such as visual recognition and auditory recognition.
  • With deep learning on cloud computers, robots can now have reliable pattern-recognition functions.

Week 2 > Video Lecture > Video 2

  • I have created various robots, as shown in these figures, from a very mechanical-looking robot to the very human-like robot.
  • My purpose in this robot study is to understand humans by creating robots.
  • Actually, there were no studies about human-like appearance, movement, perception, and conversation.
  • Rather, in order to develop this human-like robot, we need methodologies, or some more meta-level hypotheses about humans: what is a human-like appearance? What is the meaning of a human-like appearance? What is the meaning of human-like movement? So what we are doing here is trying to understand humans by creating these humanoids and very human-like robots.
  • Around 2000, I created interactive robots for studying human-robot interaction, as shown in this video.
  • So these robots have 300 behaviors and 700 transition rules among those behaviors.
  • I did a field test of the robots in a shopping mall.
  • The role of the robot is to guide people to destinations and to answer questions from visitors.
  • Actually, the navigation of this robot is autonomous.
  • So this was a quite practical application of the robot.
  • We had questions about the human likeness of the robots, especially the appearance.
  • Is this appearance OK for everybody? Is this mechanical look good for everybody or not? Should the robot have a more human-like appearance? In order to answer that question, I created this first child android around 2001.
  • Actually, the movement was jerky, and therefore the android was quite uncanny.
  • I have developed a more human-like mechanism for the android.
  • As you can see, this mechanical design is totally different from other robots, like ASIMO, developed by Honda.
  • So subconscious movement is very important for giving a human likeness to the robot, to the android.
  • What we did here is carefully measure human subconscious movement with a very precise motion-capture system and implement it on the robot.
  • Neuroscientists have hypotheses about how humans control their bodies, and based on those hypotheses we can develop much better controllers for this subconscious movement.
  • The horizontal axis runs from a very simple robot to a human-like robot.
  • As the robot gets more human-like, the impression gets better and better.
  • But when the robot gets very close to a human, suddenly we have a negative impression: the uncanny valley.
  • The first condition is the human itself, and the third condition is the robot on the right.
  • The robot on the right has a mechanical-looking appearance and robot-like movement.
  • Of course, the human has human-like movement and a human-like appearance.
  • When we see normal humans and robots, the SDS area in the brain is activated.
  • Now consider the uncanny robot: the middle picture shows the uncanny android.
  • That android has a very human-like appearance, but its movement is quite jerky, like a robot.
  • As I have explained here, androids and robots can be used as tools for cognitive science and neuroscience.
  • On the other hand, if cognitive science, psychology, and neuroscience provide hypotheses to robotics, we can improve robot performance further.
  • In order to develop a more human-like robot, we need to work with psychologists, neuroscientists, and cognitive scientists.
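The lecture's interactive robots are described as having 300 behaviors and 700 transition rules among them. A minimal sketch of such a behavior network, with entirely hypothetical behavior names, sensor events, and rules (not the actual ATR implementation), might look like this:

```python
import random

# Hypothetical miniature of the behavior network: each transition rule says
# which behaviors may follow the current one when a sensor event occurs.
BEHAVIORS = {
    "idle": "look around slowly",
    "greet": "say hello and bow",
    "guide": "lead the visitor to a destination",
    "answer": "reply to a visitor's question",
}

# Transition rules: (current behavior, sensor event) -> possible next behaviors.
RULES = {
    ("idle", "person_detected"): ["greet"],
    ("greet", "question_heard"): ["answer"],
    ("greet", "guidance_requested"): ["guide"],
    ("answer", "question_heard"): ["answer"],
    ("answer", "none"): ["idle"],
    ("guide", "arrived"): ["idle"],
}

def step(current: str, event: str) -> str:
    """Pick the next behavior allowed by the transition rules."""
    candidates = RULES.get((current, event), [current])  # stay put if no rule fires
    return random.choice(candidates)

state = "idle"
for event in ["person_detected", "guidance_requested", "arrived"]:
    state = step(state, event)
    print(state, "->", BEHAVIORS[state])
```

The real system's 700 rules would simply be a much larger `RULES` table, possibly with probabilities attached to each candidate behavior.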

Week 2 > Video Lecture > Video 3

  • The purpose of the development of an android is to study both the humans and the robots.
  • Of course, the developed android can be used for many purposes.
  • His family and I decided to create an android of a famous comic storyteller.
  • People can enjoy his professional storytelling through the android.
  • So the android can have more human likeness and a human-like presence.
  • The android is a bit younger than he is now, because he was very popular ten years ago, and then he became a living national treasure.
  • So people can recognize the android as him, but hardly anyone recognizes the storyteller himself as he looks now.
  • So which has the stronger presence, the android or the real person? If we create a very human-like android, we can touch on these philosophical questions.
  • So if we have a very human-like android, we can go back to the original idea.
  • Actually, for this android, we have implemented complicated human behaviors.
  • The android has sensors for detecting a human face, facial expressions, and gestures.
  • The android also has more than 60 behaviors for waiting for somebody.
  • This was an exhibition for Valentine’s Day, and the android was actually waiting for somebody.
  • The system chooses a point in this two-dimensional space and uses it to decide the android’s emotion.
  • If the android is aroused, it opens its eyes widely.
  • So this is a very practical application for the android also.
  • Every day, many people record the android with video cameras and upload the videos to YouTube.
  • Probably, the android is much better than humans as an idol singer.
  • Of course, the android is quite beautiful, always smiling, and always sitting with the right posture.
  • We may have more idol singers, well, android idol singers, in our future.
  • The difficulty for the android is voice recognition in a noisy environment.
  • We cannot use voice-recognition functions with the android there.
  • In department stores, people hesitate to talk to the android, but nobody hesitated to press a button on the tablet computers.
  • As a result, the android could sell many clothes, like a human shopkeeper.
  • The first reason is that people never hesitate to talk to the android.
  • Visitors feel it is easy to reject the android’s offer, because it is an android.
  • The visitors never hesitate to talk to the android.
  • Once they start a conversation, the android will say many positive things, for example, “you are so beautiful,” or “these clothes match you perfectly.”
  • Then the visitor trusts the android’s opinions easily, because androids are computers.
  • We think computers, androids, and robots never tell a lie.
  • The visitor accepts the android’s opinions and becomes very happy.
  • So if we choose the situation and purpose more carefully, probably the android will be much better than the human shopkeepers.
  • In the near future, we want to use more androids in department stores and many other places.
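The lecture says the system picks a point in a two-dimensional emotion space and decides the android's expression from it (for example, wide-open eyes when aroused). A small sketch of such a mapping, where the axis names (valence, arousal), thresholds, and face parameters are all illustrative assumptions rather than the actual implementation:

```python
def expression_from_emotion(valence: float, arousal: float) -> dict:
    """Map a point in a 2-D emotion space to face parameters.

    valence: -1 (negative) .. +1 (positive)
    arousal: -1 (calm)     .. +1 (aroused)
    All parameter names and constants here are hypothetical.
    """
    return {
        # High arousal -> eyes open widely (the lecture's example).
        "eye_opening": 0.5 + 0.5 * max(0.0, arousal),
        # Positive valence -> mouth corners up (smile).
        "mouth_corners": valence,
        # Calm states -> slower blinking.
        "blink_rate_hz": 0.3 if arousal < 0 else 0.6,
    }

print(expression_from_emotion(valence=0.8, arousal=0.9))
```

Animating the android would then mean moving the chosen point smoothly through the space and re-deriving the face parameters each frame.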

Week 2 > Video Lecture > Video 4

  • The android can answer questions, but it cannot understand human intentions or desires.
  • The conversation is a one-question, one-answer style; therefore, the android cannot have a logical conversation.
  • But if the person is a bit drunk, I think they can enjoy the conversation easily.
  • We can install the same computer program, artificial intelligence program, to both android and the robot.
  • Actually, Cleverbot gathers conversations through the Internet and develops dictionaries for answering questions.
  • The intelligence comes from the Internet; the android just memorizes the conversation partners’ utterances and answers questions with them.
  • Probably, this is one of the ways to have human-like conversations.
  • One way is to give emotional expression to the android, and the other is, of course, to give it more knowledge.
  • Emotion is one way to make a conversation very human-like.
  • For example, consider when the robot doesn’t understand a human utterance.
  • If the robot just says, “I don’t understand,” it is not so good for the human.
  • So emotion is the easiest way to establish a relationship between a human and a robot.
  • For example, Google search provides similar topics, so the robot can easily change the topic.
  • If a robot can access dictionaries, it can understand what kind of concept people are talking about.
  • We humans are doing the same things as these robots and androids.
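The Cleverbot-style approach described above, memorizing past exchanges and answering a new utterance with the reply whose recorded utterance overlaps it most, can be sketched in a few lines. The memory contents and matching rule are toy illustrations, not Cleverbot's actual algorithm:

```python
def normalize(text: str) -> set:
    """Lowercase, strip punctuation, and return the set of words."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in text.lower())
    return set(cleaned.split())

# Remembered (utterance, reply) pairs; in a real system these would be
# harvested from many conversations over the Internet.
MEMORY = [
    ("hello", "hello, nice to meet you"),
    ("what is your name", "my name is Android"),
    ("do you like music", "yes, I like music very much"),
]

def respond(utterance: str) -> str:
    """Answer with the reply whose stored utterance shares the most words."""
    words = normalize(utterance)
    def overlap(pair):
        return len(words & normalize(pair[0]))
    best = max(MEMORY, key=overlap)
    return best[1] if overlap(best) > 0 else "I see. Tell me more."

print(respond("What is your name?"))
```

Real retrieval chatbots use far richer similarity measures, but the principle, answering from remembered human conversations rather than from understanding, is the same.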

Week 2 > Video Lecture > Video 5

  • We have developed this teleoperated android.
  • The computer sends the operator’s voice to the android.
  • By analyzing the voice, the android moves its lips.
  • Using a motion-capture system, or a camera, the system tracks the operator’s head movement and controls the android’s head movement.
  • If we use this system for, say, five minutes of conversation through the android, both the visitor and the operator can adapt to the android.
  • If visiting operators use this android system, they can naturally adapt to the android.
  • Adaptation means that visitors naturally accept the android as the operator himself.
  • On the other hand, operators, myself included, can adapt to the android and accept the android body as their own body.
  • So we can use this android for giving lectures in a distant place.
  • Right? I am tele-operating this android and giving lectures in a distant place.
  • What is identity? Actually, I can transmit my presence to a distant place by using the android.
  • People can accept the android as myself.
  • The more complicated situation is that I am the creator of the android, and I am a researcher.
  • The android represents my identity as a professional researcher and as a teacher, right? So, in some sense, the android has a much stronger identity, my identity.
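The voice-to-lips part of the teleoperation pipeline, analyze the operator's voice and move the android's lips, can be approximated by tracking loudness per audio frame. The frame length, sample rate, and scaling constant below are assumptions for illustration, not the actual system:

```python
import math

def lip_openings(samples, frame_len=160, max_rms=0.3):
    """Map speech loudness to a lip-opening value per frame (0..1).

    A crude stand-in for 'analyze the voice, move the lips':
    louder frames -> wider mouth. frame_len and max_rms are assumptions.
    """
    openings = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)  # loudness
        openings.append(min(1.0, rms / max_rms))
    return openings

# Synthetic 'voice': a quiet stretch followed by a louder vowel-like burst
# (220 Hz tone at an assumed 8 kHz sample rate).
quiet = [0.01 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(160)]
loud = [0.30 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(160)]
print(lip_openings(quiet + loud))
```

A production system would also extract phoneme-like features to shape the mouth, but loudness alone already gives convincing synchronized lip motion.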

Week 2 > Video Lecture > Video 6

  • As I said, the operator can adapt to the android body.
  • The operator, myself included, can accept the android body as my own body.
  • Actually, the operator is not wearing any device on the cheek, but is just watching the monitors.
  • Still, the operator can have a very strong virtual tactile sensation.
  • If the efference copy matches the proprioceptive afference and the visual afference, we accept the arm as our own body.
  • Let us see what happens with the geminoid, the tele-operated android.
  • Actually, the operator is controlling the geminoid and adapting to it.
  • The geminoid doesn’t have any tactile sensors and is not sending any tactile information to the operator.
  • What we want to prove is that even without proprioceptive afference, we can accept the geminoid body as our own body.
  • In other words, in order to accept the android body as our own body, we don’t need proprioceptive afference.
  • If we can prove this hypothesis, it means we can adapt to the geminoid system and accept the geminoid body as our own body.
  • See, the operator is not moving the body at all, but just thinking.
  • The operator just thinks, and a computer controls the robot based on the brain signal.
  • Actually, the operator imagines moving the left hand and the right hand.
  • If the operator can adapt to the geminoid body by controlling it with brain signals, just by thinking, the operator can probably accept the robot hand as their own hand and react to the injection.
  • The operator could adapt to the geminoid body and accept the geminoid hand as his or her own hand.
  • Even if the operator doesn’t move the body at all, and even without any proprioceptive afference, the operator could adapt to the geminoid body.
  • We can use the geminoid as our own body in a distant place.
  • Even if a person cannot move their body at all, by using the tele-operated android, the person can still take part in society.
  • Anyway, the geminoid system can transfer my presence to a distant place.
  • Then I can use the geminoid for giving lectures in distant places, in different countries.
  • I don’t need to go to foreign countries anymore; I can just use a geminoid.
  • I can just send a geminoid, and the geminoid gives lectures autonomously.
  • I can then tele-operate the geminoid just for answering questions.
  • What is experience? If I give many lectures in foreign countries with the geminoid, I will receive many emails from many people.
  • Somehow, people do not clearly distinguish between myself and the geminoid.
  • The geminoid and I can share the experience.
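The body-ownership argument above hinges on a match between the efference copy (what the operator commanded) and the visual afference (what the operator sees the geminoid do), with no proprioceptive channel at all. A toy version of that matching condition, with an arbitrary threshold and synthetic motion traces (not the actual experimental setup), could be:

```python
def correlation(xs, ys):
    """Pearson correlation between two equal-length motion traces."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def feels_owned(commands, observed, threshold=0.8):
    """True if commanded and visually observed motion agree strongly.

    The threshold is an arbitrary assumption for illustration.
    """
    return correlation(commands, observed) >= threshold

commanded = [0.0, 0.2, 0.5, 0.9, 0.6, 0.1]     # efference copy
tracked = [0.0, 0.25, 0.45, 0.85, 0.65, 0.1]   # visual afference (noisy)
print(feels_owned(commanded, tracked))
```

The point of the geminoid experiments is that when this command-vision agreement holds over time, operators report ownership even though no proprioceptive signal is ever fed back.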

Week 2 > Video Lecture > Video 7

  • The next topic is humanlike emotion and heart or mind.
  • So how can we represent humanlike emotion, heart, or a humanlike mind with robots and androids?
  • As you can see here, androids can have very natural, humanlike facial expressions, and almost all people can feel very humanlike emotions from this android.
  • If an android has humanlike facial expressions and a humanlike voice, people feel that the android also has emotions.
  • He is a very famous theater director in Japan, France, and Korea.
  • This theater work became very popular around the world, in France, the United States, everywhere.
  • The human actors and actresses adapt to the robot’s utterances.
  • People could feel a humanlike heart and mind from the robots by watching this theater.
  • So the question is, what are the heart and the mind? My answer is that the heart and the mind are probably a subjective phenomenon, appearing in interactions among people in a society.
  • Probably, we do not have an exact function for the mind in our brains.
  • Of course, there are many studies about the heart and the mind, but neuroscience cannot perfectly explain the humanlike heart and mind that we feel by watching the theater.
  • So if we can develop a perfectly autonomous robot and android, and if we can feel a heart or mind through interactions with them, we may prove the hypothesis that the human heart or mind is a subjective phenomenon.

Week 2 > Video Lecture > Video 8

  • As I explained under the previous topics, the difficulty for current interactive robots is conversation, in particular voice recognition.
  • If a robot could have intentions and desires of its own, it could probably have some function for understanding human intentions and desires.
  • Actually, we have implemented a very complicated behavior network for the interactive robot and android.
  • In the next step, we want to implement an intention network for controlling the behavior network, and desires for activating intentions.
  • So if the android or robot has this hierarchical model, consisting of behaviors, intentions, and desires, then by using that hierarchical model, the android, geminoid, or robot can understand human intentions and desires.
  • Then humans and robots could have more human-like relationships.
  • Then humans may feel a more human-like heart and mind through conversation with androids and robots.
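The hierarchical model sketched above, desires activating intentions, and intentions selecting behaviors, can be illustrated with a tiny selection loop. The desire names, intention names, and weights are hypothetical placeholders, not the implemented network:

```python
# Desires with activation strengths (hypothetical values).
DESIRES = {"be_helpful": 0.9, "rest": 0.2}

# Each intention is driven by one desire and proposes concrete behaviors.
INTENTIONS = {
    "assist_visitor": {"desire": "be_helpful", "behaviors": ["greet", "guide"]},
    "stand_down": {"desire": "rest", "behaviors": ["stand_by"]},
}

def select_behavior():
    """Pick the intention whose driving desire is strongest,
    then that intention's first behavior."""
    name, spec = max(INTENTIONS.items(), key=lambda kv: DESIRES[kv[1]["desire"]])
    return name, spec["behaviors"][0]

print(select_behavior())  # the strongest desire wins
```

In the full model described in the lecture, the selected intention would then gate an entire behavior network (like the 300-behavior network of the field-trial robots) rather than a single action, and desires would shift over time with the situation.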

Week 2 > Video Lecture > Video 9

  • If we give elderly people a telenoid, they suddenly start to speak.
  • They feel strong pressure from a real person, but they don’t feel any pressure from the telenoid, because they can use their own imagination for the conversation.
  • In addition, through conversations with a telenoid, they can suppress the disruptive behaviors called BPSD (behavioral and psychological symptoms of dementia).
  • So telenoids are quite useful in elderly care homes.
  • The Danish government is strongly encouraging field tests of the telenoid.
  • They are seriously exploring the possibility of using telenoids in elderly care homes.
  • So we have had good success with the telenoid, especially in elderly care homes.
  • So I think that if we develop a portable device with a minimal human design, we may have a much better device than the smartphone.
  • A smartphone is just a black rectangle, and it is not so natural to talk to.
  • We should give some meaning to the mobile phone’s appearance, the smartphone’s appearance.
  • Actually, I have already developed a 3G mobile phone with a minimal human design.
  • People can talk with somebody while feeling that person’s presence in their hand.
  • This is not a perfectly minimal human design, but with it we can study the minimal conditions for feeling a human presence.
  • So we can put a smartphone into the Hugvie, and then we can just talk to somebody.
  • Subjects used a phone or a Hugvie for 15 minutes, and then we checked the hormone level, the cortisol level, in their blood.
  • Actually, we could reduce the cortisol level significantly with the Hugvie.
  • So this experiment showed that the Hugvie can reduce people’s stress significantly.
  • Normally, the children cannot focus on the teacher’s talk.
  • Only the children right around the teacher listen carefully to the teacher’s talk.
  • The children in distant seats are not listening carefully.
  • With the Hugvies, the children could listen to the teacher’s voice from the Hugvie itself.
  • As you can see here, everybody could concentrate on the teacher’s talk.
  • Now we are using these Hugvies for education and learning.
  • Kids can concentrate on the teacher’s talk and memorize more.
  • The Hugvie will be a pretty good tool for education.
  • The Hugvie could reduce cortisol, and so reduce stress.
  • The kids can relax and concentrate on the teacher’s talk more.
  • What do we need in order to feel a human presence? My hypothesis is that we need two modalities.
  • If we have two modalities, we can feel a human presence.
  • Usually, in order to recognize something, we bridge two modalities.
  • Shape and function, shape and smell, shape and sound, something like that.
  • When we can bridge two modalities, for example when we can guess the function from the shape, then we say that we know the thing.
  • For recognizing a human presence, we also need two modalities.
  • Actually, the Hugvie has two modalities: the voice and the tactile sensation.
  • The tactile sensation supports the imagination.
  • We are bridging two modalities, voice and tactile sensation.
  • Of course, we can consider different combinations: smell and tactile sensation, or voice and smell.
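The two-modality hypothesis above, that presence is felt when two sensory channels can be bridged, can be stated as a trivial predicate. The modality names and the set of recognized pairings are illustrative assumptions drawn from the lecture's examples:

```python
# Pairs of modalities the lecture mentions as bridgeable.
VALID_PAIRS = {
    frozenset(["voice", "touch"]),     # Hugvie: voice + tactile sensation
    frozenset(["shape", "function"]),
    frozenset(["shape", "smell"]),
    frozenset(["shape", "sound"]),
    frozenset(["voice", "smell"]),
    frozenset(["smell", "touch"]),
}

def feels_presence(modalities) -> bool:
    """True if some recognized pair of modalities can be bridged."""
    mods = set(modalities)
    return any(pair <= mods for pair in VALID_PAIRS)

print(feels_presence(["voice", "touch"]))  # the Hugvie case
print(feels_presence(["voice"]))           # a bare phone call: one modality only
```

On this reading, a plain phone call fails the test (voice alone), while hugging a Hugvie during the same call passes it, which is exactly the comparison the cortisol experiment made.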

Week 2 > Video Lecture > Video 10

  • So far I have discussed how we can develop humanlike facial expressions and humanlike movements, but I haven’t carefully talked about the control principles of humans.
  • If a biological system faces an unknown situation, it suppresses its knowledge-based activity and behaves based on noise, the yuragi.
  • If a biological system is in a known environment, it behaves based on knowledge, the function f. So by using noise, biological fluctuation, the system can be very adaptive and robust.
  • If we create very humanlike arms, with artificial muscles, humanlike bone structures, and humanlike muscle arrangements, we are going to have a very complicated robot.
  • We have tested this in a very simple bacteria-like robot and in a very complicated robot arm.
  • If we want, we can simulate the human developmental process.
  • One of the bottlenecks of android studies and human studies is actuators.
  • This actuator is quite strong, stronger than the previous linear actuators.
  • With it, the android can be autonomous and powered by batteries.
  • I have explained android technologies and humanlike technologies.
  • I have also explained how we understand humans: through the development of androids and robots.
  • The question is, what is the difference between a human and a robot?
  • On the other hand, humans are increasingly accepting of artificial parts.
  • Many people are using prosthetic arms and legs.
  • Even with prosthetic arms and legs, they are, of course, human.
  • Humans are accepting robot arms, legs, and bodies more and more.
  • The difference between a human and an animal is that the human can use tools, machines, and technologies.
  • We cannot separate humans and machines, because machine technology is part of our way of evolution.
  • Through that kind of development, we are looking for the definition of humans.
  • We are updating, we are changing, the definition of humans.
  • 200 or 300 years ago, if a person did not have arms or legs, society could not accept that person in the same way.
  • Of course, now we can accept people even if they have prosthetic arms and legs.
  • So a robot is a mirror that reflects humanity.
  • By watching robots, we probably have more chances to think about humans themselves.
  • The robot society is a society in which we think about ourselves deeply, through interaction with robots.
  • That is the possibility of robots and robot societies.
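The yuragi scheme described above, a knowledge term f whose influence is scaled by how well things are going, plus biological noise, is known in the literature as attractor selection. A minimal simulation under assumed dynamics (a double-well f with attractors at +1 and -1, and hypothetical constants) might look like this:

```python
import random

def yuragi_step(x, activity, dt=0.01, noise=0.5):
    """One Euler step of dx/dt = activity * f(x) + noise (attractor selection).

    f(x) = x - x**3 has attractors at x = +1 and x = -1; this double-well
    choice and all constants are illustrative assumptions, not the
    lecture's exact system. High activity = known environment (knowledge
    dominates); activity near 0 = unknown environment (noise dominates).
    """
    f = x - x ** 3
    eta = random.gauss(0.0, noise)
    return x + dt * (activity * f + eta)

random.seed(0)
x = 0.2
for _ in range(5000):          # known environment: the knowledge term f dominates
    x = yuragi_step(x, activity=10.0)
print(round(x, 1))             # settles into one of the attractors (here near +1)
```

With `activity` near zero the same equation degenerates into a random walk, so the system wanders until it stumbles into a state where activity rises again, which is exactly the adaptive, robust behavior the lecture attributes to biological fluctuation.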
