Smart Home Technologies: Automation and Robotics

Motivation
- Intelligent environments are aimed at improving the inhabitants' experience and task performance
  - Automate functions in the home
  - Provide services to the inhabitants
- Decisions coming from the decision maker(s) in the environment have to be executed
  - Decisions require actions to be performed on devices
  - Decisions are frequently not elementary device interactions but rather relatively complex commands
  - Decisions define set points or results that have to be achieved
  - Decisions can require entire tasks to be performed

Automation and Robotics in Intelligent Environments
- Control of the physical environment: automated blinds, thermostats and heating ducts, automatic doors, automatic room partitioning
- Personal service robots: house cleaning, lawn mowing, assistance to the elderly and handicapped, office assistants, security services

Robots
- Robota (Czech) = a worker of forced labor; from Czech playwright Karel Capek's 1921 play "R.U.R." ("Rossum's Universal Robots")
- Japanese Industrial Robot Association (JIRA): "A device with degrees of freedom that can be controlled."
  - Class 1: manual handling device
  - Class 2: fixed sequence robot
  - Class 3: variable sequence robot
  - Class 4: playback robot
  - Class 5: numerical control robot
  - Class 6: intelligent robot

A Brief History of Robotics
- Mechanical automata
  - Ancient Greece & Egypt: water powered, for ceremonies
  - 14th to 19th century Europe: clockwork driven, for entertainment (e.g. Maillardet's automaton)
- Motor-driven robots
  - 1928: first motor-driven automata
  - 1961: Unimate, the first industrial robot
  - 1967: Shakey, an autonomous mobile research robot
  - 1969: Stanford Arm, a dextrous, electric-motor-driven robot arm

Robots
- Robot manipulators
- Mobile robots
- Walking robots
- Humanoid robots

Autonomous Robots
- The control of autonomous robots involves a number of subtasks
  - Understanding and modeling of the mechanism: kinematics, dynamics, and odometry
  - Reliable control of the actuators: closed-loop control
  - Generation of task-specific motions: path planning
  - Integration of sensors: selection and interfacing of various types of sensors
  - Coping with noise and uncertainty: filtering of sensor noise and actuator uncertainty
  - Creation of flexible control policies: control has to deal with new situations

Traditional Industrial Robots
- Traditional industrial robot control uses robot arms and largely pre-computed motions:
  1. Programming using a "teach box"
  2. Repetitive tasks
  3. High speed
  4. Few sensing operations
  5. High-precision movements
  6. Pre-planned trajectories and task policies
  7. No interaction with humans

Problems
- Traditional programming techniques for industrial robots lack key capabilities necessary in intelligent environments:
  1. Only limited on-line sensing
  2. No incorporation of uncertainty
  3. No interaction with humans
  4. Reliance on perfect task information
  5. Complete re-programming for new tasks

Requirements for Robots in Intelligent Environments
- Autonomy
  - Robots have to be capable of achieving task objectives without human input
  - Robots have to be able to make and execute their own decisions based on sensor information
- Intuitive human-robot interfaces
  - Use of robots in smart homes cannot require extensive user training
  - Commands to robots should be natural for inhabitants
- Adaptation
  - Robots have to be able to adjust to changes in the environment

Robots for Intelligent Environments
- Service robots: security guard, delivery, cleaning, mowing
- Assistance robots: mobility, services for the elderly and people with disabilities

Autonomous Robot Control
- To control robots to perform tasks autonomously, a number of tasks have to be addressed:
  - Modeling of robot mechanisms: kinematics, dynamics
  - Robot sensor selection: active and passive proximity sensors
  - Low-level control of actuators: closed-loop control
  - Control architectures: traditional planning architectures, behavior-based control architectures, hybrid architectures

Modeling the Robot Mechanism
- Forward kinematics describes how the robot's joint angle configurations translate to locations in the world
- Inverse kinematics computes the joint angle configuration necessary to reach a particular point in space
- Jacobians calculate how the speed and configuration of the actuators translate into the velocity of the robot
  (Figure: an arm configuration (x, y, z) with joint angles θ1, θ2, and a mobile robot pose (x, y, θ))

Mobile Robot Odometry
- In mobile robots the same configuration in terms of joint angles does not identify a unique location
- To keep track of the robot it is necessary to incrementally update the location (this process is called odometry or dead reckoning)
- Example: a differential drive robot with pose (x, y, θ) and right/left wheel speeds vR, vL

Actuator Control
- To get a particular robot actuator to a particular location it is important to apply the correct amount of force or torque to it
- This requires knowledge of the dynamics of the robot: mass, inertia, friction
- For a simplistic mobile robot: F = m a + B v
- Frequently actuators are treated as if they were independent (i.e. as if moving one joint would not affect any of the other joints)
- The most common control approach is PD control (proportional-derivative control)
- For the simplistic mobile robot moving in the x direction, the PD law takes the form F = Kp (xd - x) + Kd (vd - v), where xd and vd are the desired position and velocity

Robot Navigation
- Path planning addresses the task of computing a trajectory for the robot such that it reaches the desired goal without colliding with obstacles
- Optimal paths are hard to compute, in particular for robots that cannot move in arbitrary directions
  (i.e. nonholonomic robots)
- Shortest-distance paths can be dangerous since they always graze obstacles
- Paths for robot arms have to take into account the entire robot (not only the end-effector)

Sensor-Driven Robot Control
- To accurately achieve a task in an intelligent environment, a robot has to be able to react dynamically to changes in its surroundings
- Robots need sensors to perceive the environment
- Most robots use a set of different sensors; different sensors serve different purposes
- Information from sensors has to be integrated into the control of the robot

Robot Sensors
- Internal sensors measure the robot configuration
  - Encoders measure the rotation angle of a joint
  - Limit switches detect when the joint has reached its limit
- Proximity sensors are used to measure the distance or location of objects in the environment; this can then be used to determine the location of the robot
  - Infrared sensors determine the distance to an object by measuring the amount of infrared light the object reflects back to the robot
  - Ultrasonic sensors (sonars) measure the time that an ultrasonic signal takes until it returns to the robot
  - Laser range finders determine distance by measuring either the time it takes for a laser beam to be reflected back to the robot or by measuring where the laser hits the object
- Computer vision provides robots with the capability to passively observe the environment
  - Stereo vision systems provide complete location information using triangulation
  - However, computer vision is very complex, and the correspondence problem makes stereo vision even more difficult

Uncertainty in Robot Systems
- Robot systems in intelligent environments have to deal with sensor noise and uncertainty
- Sensor uncertainty: sensor readings are imprecise and unreliable
- Non-observability: various aspects of the environment cannot be observed, and the environment is initially unknown
- Action uncertainty: actions can fail and can have nondeterministic outcomes
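The dead-reckoning update for the differential drive robot described in the odometry discussion can be sketched as follows. The function name and parameters are illustrative; the update integrates the standard differential-drive model, where forward speed is the mean of the two wheel speeds and rotation rate is their difference divided by the wheel base. Because each step integrates imperfect wheel measurements, small errors accumulate over time, which is exactly the kind of uncertainty described above.

```python
import math

def odometry_step(x, y, theta, v_left, v_right, wheel_base, dt):
    """One dead-reckoning update for a differential-drive robot.

    v_left, v_right: wheel ground speeds (m/s); wheel_base: distance
    between the wheels (m). Returns the new pose estimate (x, y, theta).
    """
    v = (v_right + v_left) / 2.0             # forward speed of the robot center
    omega = (v_right - v_left) / wheel_base  # rotation rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Driving straight for 1 s at 0.5 m/s moves the pose estimate 0.5 m along x.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = odometry_step(*pose, v_left=0.5, v_right=0.5, wheel_base=0.3, dt=0.01)
```

In practice the time step dt should be small relative to how quickly the wheel speeds change, since the update assumes the speeds are constant over each interval.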
Probabilistic Robot Localization
- Explicit reasoning about uncertainty using Bayes filters
- Used for: localization, mapping, model building

Deliberative Robot Control Architectures
- In a deliberative control architecture the robot first plans a solution for the task by reasoning about the outcome of its actions, and then executes it
- The control process goes through a sequence of sensing, model update, and planning steps
- Advantages
  - Reasons about contingencies
  - Computes solutions to the given task
  - Goal-directed strategies
- Problems
  - Solutions tend to be fragile in the presence of uncertainty
  - Requires frequent replanning
  - Reacts relatively slowly to changes and unexpected occurrences

Behavior-Based Robot Control Architectures
- In a behavior-based control architecture the robot's actions are determined by a set of parallel, reactive behaviors which map sensory input and state to actions
- Reactive, behavior-based control combines relatively simple behaviors, each of which achieves a particular subtask, to achieve the overall task
  - The robot can react fast to changes
  - The system does not depend on complete knowledge of the environment
  - Emergent behavior (resulting from combining the initial behaviors) can make it difficult to predict the exact behavior
  - It is difficult to assure that the overall task is achieved

Complex Behavior from Simple Elements: Braitenberg Vehicles
- Complex behavior can be achieved using very simple control mechanisms
- Braitenberg vehicles: differential drive mobile robots with two light sensors
- Complex external behavior does not necessarily require a complex reasoning mechanism
  (Figure: sensor-motor wirings producing the "Coward", "Aggressive", "Love", and "Explore" vehicles)

Behavior-Based Architectures: Subsumption Example
- The subsumption architecture is one of the earliest behavior-based architectures
- Behaviors are arranged in a strict priority order where higher-priority behaviors subsume lower-priority ones as long as they are not inhibited
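The strict-priority arbitration just described can be sketched in a few lines. The behaviors, sensor fields, and thresholds below are invented for illustration; the point is only the arbitration scheme, in which the first triggered behavior in priority order subsumes everything below it.

```python
# Minimal sketch of subsumption-style arbitration (behaviors are illustrative):
# behaviors are checked in strict priority order, and the first one whose
# trigger condition fires subsumes all lower-priority behaviors.

def avoid_obstacle(sensors):
    if sensors["front_distance"] < 0.2:   # obstacle close ahead
        return "turn_away"
    return None                           # not triggered

def recharge(sensors):
    if sensors["battery"] < 0.1:          # battery nearly empty
        return "go_to_charger"
    return None

def wander(sensors):
    return "drive_forward"                # default behavior, always triggered

BEHAVIORS = [avoid_obstacle, recharge, wander]  # highest priority first

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"front_distance": 0.1, "battery": 0.5}))  # prints turn_away
```

Note how the lowest-priority behavior always triggers, so the robot is never without an action; safety-critical behaviors sit at the top of the list.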
- A variety of tasks can be robustly performed from a small number of behavioral elements (MIT AI Lab; http://www-robotics.usc.edu/maja)

Reactive, Behavior-Based Control Architectures
- Advantages
  - Reacts fast to changes
  - Does not rely on accurate models: "the world is its own best model"
  - No need for replanning
- Problems
  - Difficult to anticipate what effect combinations of behaviors will have
  - Difficult to construct strategies that will achieve complex, novel tasks
  - Requires redesign of the control system for new tasks

Hybrid Control Architectures
- Hybrid architectures combine reactive control with abstract task planning
- Abstract task planning layer: deliberative decisions; plans goal-directed policies
- Reactive behavior layer: provides reactive actions; handles sensors and actuators
  (Figure: a task plan decomposed into a behavioral strategy; example task: changing a light bulb)
- Advantages
  - Permits goal-based strategies
  - Ensures fast reactions to unexpected changes
  - Reduces the complexity of planning
- Problems
  - The choice of behaviors limits the range of possible tasks
  - Behavior interactions have to be well modeled in order to form plans

Traditional Human-Robot Interface: Teleoperation
- Remote teleoperation: direct operation of the robot by the user
- The user uses a 3-D joystick or an exoskeleton to drive the robot
- Simple to install; removes the user from dangerous areas
- Problems: requires insight into the mechanism; can be exhausting; easily leads to operation errors

Human-Robot Interaction in Intelligent Environments
- Personal service robots are controlled and used by untrained users
  - Intuitive, easy-to-use interface
  - The interface has to "filter" user input: eliminate dangerous instructions, find the closest possible action
- Robots receive only intermittent commands
  - The robot requires autonomous capabilities
  - User commands can be at various levels of complexity
  - The control system merges instructions and autonomous operation
- Robots interact with a variety of humans
  - Humans have to feel "comfortable" around robots
  - Robots have to communicate intentions in a natural way
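The "filter user input" idea above can be sketched as a small gatekeeper between the user and the robot: reject unsafe instructions and map free-form commands to the closest known action. The action list, safety rule, and fuzzy-matching threshold below are invented for illustration.

```python
# Hypothetical sketch of command filtering: eliminate dangerous instructions
# and find the closest possible action for an imprecise user command.
import difflib

KNOWN_ACTIONS = ["open blinds", "close blinds", "vacuum floor", "stop"]
UNSAFE_ACTIONS = {"vacuum floor"}  # e.g. unsafe while someone is on the floor

def filter_command(command, context_unsafe=False):
    # Map the user's phrasing to the closest known action (fuzzy match).
    match = difflib.get_close_matches(command.lower(), KNOWN_ACTIONS,
                                      n=1, cutoff=0.5)
    if not match:
        return None                 # nothing close enough: ask the user again
    action = match[0]
    if context_unsafe and action in UNSAFE_ACTIONS:
        return None                 # eliminate dangerous instructions
    return action

print(filter_command("opne the blinds"))  # fuzzy match -> "open blinds"
```

Returning None for both "unknown" and "unsafe" keeps the robot in its autonomous mode until the user clarifies, which matches the intermittent-command interaction style described above.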
Example: Minerva, the Tour Guide Robot (CMU/Bonn)
- CMU Robotics Institute; http://www.cs.cmu.edu/thrun/movies/minerva.mpg

Intuitive Robot Interfaces: Command Input
- Graphical programming interfaces
  - Users construct policies from elemental blocks
  - Problem: requires substantial understanding of the robot
- Deictic (pointing) interfaces
  - Humans point at desired targets in the world, or targets are specified on a computer screen
  - Problem: how to interpret human gestures?
- Voice recognition
  - Humans instruct the robot verbally
  - Problems: speech recognition is very difficult, and the robot actions corresponding to words have to be defined

Intuitive Robot Interfaces: Robot-Human Interaction
- The robot has to be able to communicate its intentions to the human
  - The output has to be easy for humans to understand
  - The robot has to be able to encode its intention
  - The interface has to keep the human's attention without annoying them
- Robot communication devices: easy-to-understand computer screens, speech synthesis, robot "gestures"

Example: The Nursebot Project
- CMU Robotics Institute; http://www.cs.cmu.edu/thrun

Human-Robot Interfaces
- Existing technologies
  - Simple voice recognition and speech synthesis
  - Gesture recognition systems
  - On-screen, text-based interaction
- Research challenges
  - How to convey robot intentions?
  - How to infer user intent from visual observation (how can a robot imitate a human)?
  - How to keep the attention of a human on the robot?
  - How to integrate human input with autonomous operation?

Integration of Commands and Autonomous Operation
- Adjustable autonomy: the robot can operate at varying levels of autonomy
- Operational modes: autonomous operation, user operation / teleoperation, behavioral programming, following user instructions, imitation
- Types of user commands: continuous low-level instructions (teleoperation), goal specifications, task demonstrations

Social Robot Interactions
- To make robots acceptable to average users, they should appear and behave "naturally"
- Attentional robots
  - The robot focuses on the user or the
    task
  - Attention forms the first step to imitation
- Emotional robots
  - The robot exhibits "emotional" responses
  - The robot follows human social norms for behavior
  - Better acceptance by the user (users are more forgiving)
  - Human-machine interaction appears more "natural"
  - The robot can influence how the human reacts

Social Robot Example: Kismet
- MIT AI Lab; http://www.ai.mit.edu

Social Robot Interactions
- Advantages
  - Robots that look human and that show "emotions" can make interactions more "natural"
  - Humans tend to focus more attention on people than on objects
  - Humans tend to be more forgiving of a mistake if the robot looks "human"
  - Robots showing "emotions" can modify the way in which humans interact with them
- Problems
  - How can robots determine the right emotion?
  - How can "emotions" be expressed by a robot?

Human-Robot Interfaces for Intelligent Environments
- Robot interfaces have to be easy to use
  - Robots have to be controllable by untrained users
  - Robots have to be able to interact not only with their owner but also with other people
- Robot interfaces have to be usable at the human's discretion
  - Human-robot interaction occurs on an irregular basis
  - Frequently the robot has to operate autonomously
  - Whenever user input is provided, the robot has to react to it
- Interfaces have to be designed human-centric
  - The role of the robot is to make the human's life easier and more comfortable (it is not just a tech toy)

Adaptation and Learning for Robots in Smart Homes
- Intelligent environments are non-stationary and change frequently, requiring robots to adapt
  - Adaptation to changes in the environment
  - Learning to address changes in inhabitant preferences
- Robots in intelligent environments can frequently not be pre-programmed
  - The environment is unknown
  - The list of tasks that the robot should perform might not be known beforehand
  - There is no proliferation of robots in the home yet
  - Different users have different preferences

Adaptation and Learning in Autonomous Robots
- Learning to interpret sensor information
  - Recognizing objects in the environment is difficult
  - Sensors provide prohibitively large amounts of data
  - Programming of all required objects is generally not possible
- Learning new strategies and tasks
  - New tasks have to be learned on-line in the home
  - Different inhabitants require new strategies even for existing tasks
- Adaptation of existing control policies
  - User preferences can change dynamically
  - Changes in the environment have to be reflected

Learning Approaches for Robot Systems
- Supervised learning by teaching
  - Robots can learn from direct feedback from the user that indicates the correct strategy
  - The robot learns the exact strategy provided by the user
- Learning from demonstration (imitation)
  - Robots learn by observing a human or a robot performing the required task
  - The robot has to be able to "understand" what it observes and map it onto its own capabilities
- Learning by exploration
  - Robots can learn autonomously by trying different actions and observing their results
  - The robot learns a strategy that optimizes reward

Learning Sensory Patterns
- Learning to identify objects (e.g. a chair): how can a particular object be recognized?
- Programming recognition strategies is difficult because we do not fully understand how we perform recognition
- Learning techniques permit the robot system to form its own recognition strategy
- Supervised learning can be used by giving the robot a set of pictures and the corresponding classifications
  - Neural networks
  - Decision trees

Learning Task Strategies by Experimentation
- Autonomous robots have to be able to learn new tasks even without input from the user
- Learning to perform a task in order to optimize the reward the robot obtains (reinforcement learning)
- The reward has to be provided either by the user or the environment
  - Intermittent user feedback
  - Generic rewards indicating unsafe or inconvenient actions or occurrences
- The robot has to explore its actions to determine what their effects are
  - Actions change the state of the environment
  - Actions achieve different amounts of reward
- During learning the robot has to maintain a level of safety

Example: Reinforcement Learning in a Hybrid Architecture
- Policy acquisition layer: learning tasks without supervision
- Abstract plan layer: learning a system model; basic state space compression
- Reactive behavior layer: initial competence and reactivity
- Example task: learning to walk

Scaling Up: Learning Complex Tasks from Simpler Tasks
- Complex tasks are hard to learn since they involve long sequences of actions that have to be correct in order for reward to be obtained
- Complex tasks can be learned as shorter sequences of simpler tasks
  - Control strategies that are expressed in terms of subgoals are more compact and simpler
  - Fewer conditions have to be considered if the simpler tasks are already solved
  - New tasks can be learned faster
- Hierarchical reinforcement learning
  - Learning with abstract actions
  - Acquisition of abstract task knowledge
- Example: learning to walk

Conclusions
- Robots are an important component in intelligent environments
  - They automate devices and provide physical services
- Robot systems in these environments need particular capabilities
  - Autonomous control systems
  - Simple and natural human-robot interfaces
  - Adaptive and learning capabilities
  - Robots have to maintain safety during operation
- While a number of techniques to address these requirements exist, no functional, satisfactory solutions have yet been developed
- Only very simple robots for single tasks in intelligent environments exist today
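As a closing illustration of the learning-by-exploration idea discussed above, here is a minimal tabular Q-learning sketch. The corridor world, rewards, and parameters are invented for illustration; the update rule is the standard one-step Q-learning rule, with epsilon-greedy exploration standing in for the safe-exploration strategies a real home robot would need.

```python
import random

# Tabular Q-learning on a tiny 1-D corridor: states 0..4, reward at state 4.
# World, rewards, and parameters are invented for illustration.
random.seed(0)

N_STATES = 5
ACTIONS = [-1, +1]            # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit, occasionally explore other actions.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # One-step Q-learning update.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The learned greedy policy for each interior state (should move right).
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Note how the reward propagates backwards from the goal state one update at a time, which is exactly why long action sequences are slow to learn and why the hierarchical decomposition above helps.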