Process Planning for the Outer-Cylinder Bushing and Design of a Lathe Mandrel Fixture and a Milling Profiling Fixture [CAD drawings included]
Graduation Project (Thesis): Translation of Foreign Literature

Department: Mechatronic Information Department
Major: Mechanical Design, Manufacturing and Automation
Class:
Name:
Student ID:
Source: Robotics and Computer-Integrated Manufacturing 21 (2005) 368-378
Attachments: 1. Original text; 2. Translation
March 2013

Locating Completeness Evaluation and Revision in Fixture Planning

CAM Laboratory, Department of Mechanical Engineering, Worcester Polytechnic Institute, 100 Institute Road, Worcester, MA 01609, USA
Received 14 September 2004; revised 9 November 2004; accepted 10 November 2004

Abstract

Geometric constraint is one of the most important considerations in fixture design. Analytical formulations for deterministic location have been well developed. However, how to analyze and revise a non-deterministic locating scheme during actual fixture design practice has not been studied thoroughly. In this paper, a method is proposed to characterize the geometric constraint status of a fixturing system, with emphasis on the under-constrained case. If an under-constrained status exists in a given locating scheme, it can be identified, and all the under-constrained motions of the workpiece can be recognized automatically. This helps the designer improve a deficient locating scheme and provides guidelines for revision so that deterministic location is finally achieved.

Keywords: Fixture design; Geometric constraint; Deterministic location; Under-constrained; Over-constrained

1. Introduction

A fixture is a mechanism used in the manufacturing industry to hold a workpiece firmly in position. As a crucial first step in process planning for machining parts, fixture design must ensure the positional accuracy and dimensional precision of the three-dimensional workpiece. The 3-2-1 principle is, in general, the most widely used guideline in developing a locating scheme; V-block and pin-hole locating principles are also commonly applied.

A locating scheme for a machining fixture must satisfy a number of requirements. The most basic requirement is that it must provide deterministic location of the workpiece. This notion states that a locating scheme produces deterministic location when the workpiece cannot move without losing contact with at least one locator. It has been one of the most fundamental guidelines of fixture design and has been studied by many researchers. The geometric constraint status of a workpiece under any locating scheme falls into one of the following three categories:

1. Well-constrained (deterministic): the workpiece is mated at a unique position when six locators are in contact with the workpiece surface.
2. Under-constrained: the six degrees of freedom of the workpiece are not fully constrained.
3. Over-constrained: the six degrees of freedom of the workpiece are constrained by more than six locators.

In 1985, Asada and By [1] proposed a full-rank Jacobian matrix of the constraint equations as the criterion, forming the basis of the analytical investigations of deterministic location that followed. Chou et al. [2] formulated the deterministic location problem in 1989 using screw theory. The result shows that the locating wrench matrix must have full rank to achieve deterministic location. This method has been adopted by numerous studies. Wang et al. [3] considered the influence of the locator-workpiece contact area instead of treating each contact as a point. They introduced a contact matrix and pointed out that two contacting bodies should not have equal but opposite curvature at the contact point. Carlson [4] argued that a first-order approximation may not be sufficient for some applications, such as non-prismatic surfaces or non-small relative errors. He proposed a second-order Taylor expansion that also takes locator error interaction into account. Marin and Ferreira [5] applied Chou's formulation to 3-2-1 locating schemes and developed several planning rules.

Despite the numerous studies of deterministic location, little attention has been paid to the analysis of non-deterministic location. In Asada and By's formulation, frictionless contact between the fixture elements and the workpiece is assumed. The ideal location is $\mathbf{q}^*$, at which the workpiece surface is expressed by a set of piecewise differentiable functions $g_i$ (see Fig. 1). The surface functions are defined such that $g_i(\mathbf{q}^*) = 0$. For deterministic location there should be a unique solution of the following system of locating equations:

$g_i(\mathbf{q}) = 0, \quad i = 1, 2, \ldots, n,$  (1)

where $n$ is the number of locators and $\mathbf{q}$ represents the position and orientation of the workpiece. Considering only the vicinity of the desired location $\mathbf{q}^* $, with $\mathbf{q} = \mathbf{q}^* + \Delta\mathbf{q}$, Asada and By showed that the linearized system

$h_i \, \Delta\mathbf{q} = 0$  (2)

holds, where $h_i = \partial g_i / \partial \mathbf{q}$ is the gradient of the geometric function; stacking these rows yields the Jacobian matrix

$H = [\, \partial g_i / \partial \mathbf{q} \,]_{n \times 6}.$  (3)

Deterministic location is achieved if the Jacobian matrix has full rank, so that Eq. (2) has only the trivial solution, i.e. $\mathbf{q} = \mathbf{q}^*$.

For a 3-2-1 locating scheme, the rank of the Jacobian of the constraint equations determines the constraint status as shown in Table 1. If the rank is less than six, the workpiece is under-constrained, i.e. there exists at least one free motion of the workpiece that is not constrained by the locators. If the matrix has full rank but there are more than six locators, the workpiece is over-constrained, which indicates that there exists at least one locator that can be removed without changing the geometric constraint status of the workpiece.

For a locating scheme other than 3-2-1, a datum frame can be established to extract equivalent locating points. Hu [6] has developed a systematic approach for this purpose. Hence the criterion can be applied to all locating schemes.

Fig. 1. A fixturing system model.

Table 1. Constraint status determined by the rank and the number of locators
  Rank < 6: under-constrained
  Rank = 6, number of locators = 6: well-constrained (deterministic)
  Rank = 6, number of locators > 6: over-constrained

Kang et al. [7] followed these methods and implemented them in the geometric constraint analysis module of their automated computer-aided fixture design verification system. Their CAFDV system can calculate the Jacobian matrix and its rank to determine locating completeness. It can also analyze the workpiece displacement and the sensitivity to locating errors.

Xiong et al. [8] proposed checking the rank of the locating matrix $W_L$ (see the Appendix). They also introduced left/right generalized inverses of the locating matrix to analyze the geometric errors of the workpiece. It was shown that the position and orientation errors $\Delta\mathbf{X}$ of the workpiece and the locator errors $\Delta\mathbf{r}$ are related as follows:

Well-constrained: $\Delta\mathbf{X} = W_L^{-1} \, \Delta\mathbf{r},$  (4)

Over-constrained: $\Delta\mathbf{X} = (W_L^T W_L)^{-1} W_L^T \, \Delta\mathbf{r},$  (5)

Under-constrained: $\Delta\mathbf{X} = W_L^T (W_L W_L^T)^{-1} \, \Delta\mathbf{r} + [\, I_{6\times6} - W_L^T (W_L W_L^T)^{-1} W_L \,] \, \boldsymbol{\lambda},$  (6)

where $\boldsymbol{\lambda}$ is an arbitrary vector. They also derived several indices from these matrices to evaluate the locating configuration, followed by optimization through constrained nonlinear programming. However, their research does not concern the analysis and revision of non-deterministic locating schemes. To date, there has been no study on how a fixture design system should deal with a locating scheme that does not provide deterministic location.

2. Locating completeness evaluation

If a locating scheme cannot provide deterministic location, it is important for the designer to know what the constraint status is and how the design can be improved. If the fixturing system is over-constrained, information about the unnecessary locators is desired, whereas when it is under-constrained, knowledge of all the unconstrained motions of the workpiece can guide the designer to select additional locators or to modify the locating scheme more efficiently. The overall strategy for characterizing the geometric constraint status of a locating scheme is described in Fig. 2.

In this paper, the rank of the locating matrix is used to evaluate the geometric constraint status (see the Appendix for the derivation of the locating matrix). Deterministic location requires six locators that provide a locating matrix $W_L$ of full rank. As shown in Fig. 3, given the number of locators $n$, the unit normal vector $[a_i, b_i, c_i]$ and the position $[x_i, y_i, z_i]$ of each locator, $i = 1, 2, \ldots, n$, the $n \times 6$ locating matrix can be determined as follows:

$W_L = \begin{bmatrix} a_1 & b_1 & c_1 & c_1 y_1 - b_1 z_1 & a_1 z_1 - c_1 x_1 & b_1 x_1 - a_1 y_1 \\ \vdots & & & & & \vdots \\ a_n & b_n & c_n & c_n y_n - b_n z_n & a_n z_n - c_n x_n & b_n x_n - a_n y_n \end{bmatrix}.$  (7)

When rank($W_L$) = 6 and $n = 6$, the workpiece is well-constrained.

When rank($W_L$) = 6 and $n > 6$, the workpiece is over-constrained. This implies that there are $(n - 6)$ unnecessary locators in the locating scheme; the workpiece would still be well-constrained without those $(n - 6)$ locators. The mathematical representation of this status is that $(n - 6)$ row vectors of the locating matrix can be expressed as linear combinations of the other six row vectors.

Fig. 2. Characterization of the geometric constraint status.
Fig. 3. A simplified locating scheme; the scheme provides deterministic location.

An algorithm was developed to identify the unnecessary locators using the following approach:

1. Find all combinations of $(n - 6)$ locators.
2. For each combination, remove the $(n - 6)$ locators from the locating scheme.
3. Recalculate the rank of the locating matrix for the remaining six locators.
4. If the rank does not change, the removed $(n - 6)$ locators are responsible for the over-constrained status.

This method may yield multiple solutions, and it is up to the designer to decide which set of unnecessary locators should be removed for the best locating performance.

When rank($W_L$) < 6, the workpiece is under-constrained.

3. Algorithm development and implementation

The algorithm developed here is dedicated to providing information about the unconstrained motions of a workpiece in an under-constrained status. Suppose there are $n$ locators; the relationship between the position/orientation errors of the workpiece and the locator errors can then be expressed as $\Delta\mathbf{X} = W_r \, \Delta\mathbf{r}$, where $\Delta\mathbf{X} = [\delta x, \delta y, \delta z, \delta\alpha_x, \delta\alpha_y, \delta\alpha_z]^T$, with $\delta x, \delta y, \delta z$ the displacements along the x-, y- and z-axes and $\delta\alpha_x, \delta\alpha_y, \delta\alpha_z$ the rotations about them; $\delta r_i$ is the geometric error of the $i$th locator; and $w_{ij}$ are the elements of $W_r = W_L^T (W_L W_L^T)^{-1}$, the right generalized inverse of the locating matrix.

To find all the unconstrained motions of the workpiece, a motion vector $V = [\delta x, \delta y, \delta z, \delta\alpha_x, \delta\alpha_y, \delta\alpha_z]^T$ satisfying $W_L V = 0$ is introduced. Since rank($W_L$) < 6, nonzero solutions $V$ must exist, and each nonzero solution represents an unconstrained motion; each term of $V$ represents one component of that motion. For example, $V = [0, 0, 0, 3, 0, 0]^T$ says that the rotation about the x-axis is not constrained, while $V = [0, 1, 1, 0, 0, 0]^T$ says that the workpiece can move along the direction given by the vector $[0, 1, 1]$. There may be infinitely many solutions; the solution space, however, can be constructed from $6 - \mathrm{rank}(W_L)$ basis solutions. The following analysis is dedicated to finding the basis solutions, exploiting the dependency among the row vectors of $W_r$.

In a special case, for example, when all $w_{1j}$ equal zero, $V$ has an obvious solution $[1, 0, 0, 0, 0, 0]^T$, indicating that the displacement along the x-axis is not constrained. This is easy to understand, because $\delta x = \sum_j w_{1j} \, \delta r_j = 0$ in this case, which means that this component of the workpiece position error does not depend on any locator error; hence the related motion is not constrained by the locators. Furthermore, a combined motion is not constrained if one of the elements of $\Delta\mathbf{X}$ can be expressed as a linear combination of the other elements; the workpiece might then move, for instance, along a diagonal direction between the x- and y-axes.

To find the solutions in the general case, the following strategy is used:

1. Eliminate the dependent row(s) in the locating matrix.
2. Calculate the right generalized inverse of the modified locating matrix.
3. Identify the dependency among the rows of the inverse matrix.
4. Normalize the free-motion space.
5. Solve for the undetermined components of $V$.
6. Output all basis solutions of the unconstrained motions.
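Before the implementation details below, a compact numerical sketch of this evaluation may help. It is written in Python with NumPy as our own illustration (the authors' implementation, described next, is a C++ program), and the locator data are placeholders, not the data of the paper's examples:

```python
import numpy as np

def locating_matrix(normals, positions):
    """Build the n x 6 locating matrix W_L of Eq. (7).

    Row i is [n_i, r_i x n_i]: the wrench of the unit contact normal n_i
    acting at the locator position r_i.
    """
    rows = []
    for n_vec, r_vec in zip(normals, positions):
        n_vec, r_vec = np.asarray(n_vec, float), np.asarray(r_vec, float)
        rows.append(np.hstack([n_vec, np.cross(r_vec, n_vec)]))
    return np.vstack(rows)

def constraint_status(W):
    """Classify the locating scheme (Table 1); return unconstrained motions."""
    n, rank = W.shape[0], np.linalg.matrix_rank(W)
    if rank < 6:
        # Basis of the unconstrained twists V with W_L V = 0 (the null space).
        _, _, vt = np.linalg.svd(W)
        return "under-constrained", vt[rank:]
    return ("well-constrained", None) if n == 6 else ("over-constrained", None)

# Illustrative five-locator scheme: three locators on the bottom plane and
# two on one side face (placeholder values, not the paper's Example 1 data).
normals = [(0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 1, 0), (0, 1, 0)]
positions = [(1, 1, 0), (3, 1, 0), (2, 3, 0), (1, 0, 1), (3, 0, 1)]
W = locating_matrix(normals, positions)
status, motions = constraint_status(W)
print(status)    # -> under-constrained
print(motions)   # one basis vector, +/-[1, 0, 0, 0, 0, 0]: x-translation is free

# Workpiece position/orientation error from locator errors, cf. Eqs. (4)-(6):
# np.linalg.pinv selects the matching generalized inverse in all three cases.
dr = np.full(W.shape[0], 0.01)   # 0.01 units of error on every locator
dX = np.linalg.pinv(W) @ dr
```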
Based on this algorithm, a C++ program was developed to identify the under-constrained status and the unconstrained motions.

Example 1. In a surface grinding operation, a workpiece is located in a fixturing system as shown in Fig. 4. The normal vector and position of each locator are given (the data are elided in this copy), and the locating matrix is determined accordingly. This locating system provides under-constrained location, because rank($W_L$) = 5 < 6. The program then computes the right generalized inverse of the locating matrix. The first row is recognized as a dependent row, because removing it does not affect the rank of the matrix; the other five rows are independent. Step 5 of the program finds the linear combination of the independent rows that specifies the under-constrained status. The solution in this special case is obvious: all coefficients are zero. Hence the unconstrained motion of the workpiece is determined as $V = [1, 0, 0, 0, 0, 0]^T$, which indicates that the workpiece can move along the x direction. Based on this result, an additional locator should be employed to constrain the displacement of the workpiece along the x-axis.

Example 2. Fig. 5 shows a 3-2-1 locating system. The normal vector and position of each locator in the initial design are given (the data are elided in this copy), and the locating matrix of this configuration follows. After the design revision, the modified locating matrix and its right generalized inverse are computed. The program checks for dependent rows; here each row is dependent on the other five rows. Without loss of generality, the first row is treated as the dependent row, giving a 5x5 modified inverse matrix. Following Step 5, the five undetermined components of $V$ are computed (values elided in this copy). The resulting vector represents a free motion that combines a displacement along the direction $[1, 0, 1.713]$ with a rotation $[0.0432, 0.0706, 0.04]$. To revise this locating configuration, another locator is added to constrain this free motion of the workpiece, assuming locator L1 is removed in Step 1. The program can also compute the free motion of the workpiece if a locator other than L1 is removed in Step 1; this provides the designer with multiple revision options.

4. Summary

Deterministic location is an important requirement in the design of fixture locating schemes. Analytical criteria for determining this status have been well established. To further examine non-deterministic statuses, an algorithm for characterizing the geometric constraint status has been developed. The algorithm can identify the under-constrained status and indicate the unconstrained motions of the workpiece. It also recognizes the over-constrained status and the unnecessary locators. The output information can help the designer analyze and improve an existing locating scheme.

References
[1] Asada H, By AB. Kinematic analysis of workpart fixturing for flexible assembly with automatically reconfigurable fixtures. IEEE J Robot Autom 1985;RA-1:86-93.
[2] Chou YC, Chandru V, Barash MM. A mathematical approach to automatic configuration of machining fixtures: analysis and synthesis. Trans ASME J Eng Ind 1989;111:299-306.
[3] Wang MY, Liu T, Pelinescu DM. Fixture kinematic analysis based on the full contact model of rigid bodies. J Manuf Sci Eng 2003;125:316-24.
[4] Carlson JS. Quadratic sensitivity analysis of fixtures and locating schemes for rigid parts. ASME J Manuf Sci Eng 2001;123(3):462-72.
[5] Marin R, Ferreira P. Kinematic analysis and synthesis of deterministic 3-2-1 locating schemes for machining fixtures. ASME J Manuf Sci Eng 2001;123:708-19.
[6] Hu W. Setup planning and tolerance analysis. PhD dissertation, Worcester Polytechnic Institute; 2001.
[7] Kang Y, Rong Y, Yang J, Ma W. Computer-aided fixture design verification. Assembly Autom 2002;22:350-9.
[8] Rong KY, Huang SH, Hou Z. Advanced computer-aided fixture design. Boston: Elsevier; 2005.

Undergraduate Graduation Design (Thesis)
Title: Process Planning and Fixture Design for the Outer-Cylinder Bushing Part
Major:
Student name:
Supervisor:
Date of graduation:

Graduation Project (Thesis) Task Book

I. Title
Process planning and fixture design for the outer-cylinder bushing part.

II. Guiding principles and objectives
The graduation project (thesis) is an important practical teaching stage for cultivating students' self-study ability, comprehensive application ability, and independent working ability. In the graduation project, students should independently undertake a relatively complete engineering design task. Students are required to bring their initiative, enthusiasm and creativity into play, to focus on developing the ability to work independently and to analyze and solve problems, to maintain a rigorous and down-to-earth work style, to integrate theory with practice, and to carry out creative work with a rigorous and conscientious scientific attitude, completing the assigned tasks carefully and on time.

III. Main deliverables
1. One part drawing;
2. One blank (rough stock) drawing;
3. One process specification;
4. Process tooling (1-2 fixture sets);
5. One design report.

IV. Schedule and requirements
1. Analyze and draw the part drawing: 2 weeks
2. Draw the blank drawing: 1 week
3. Design the process route and compile the process specification: 5 weeks
4. Design the process tooling: 4 weeks
5. Write the design report (thesis): 2 weeks

V. Main references
1. Yan Guangming (ed.). Fundamentals of Modern Manufacturing Processes. Northwestern Polytechnical University Press, 2007.
2. Li Yimin (Harbin Institute of Technology, ed.). Concise Handbook of Machining Process Design. China Machine Press, 1994.

Operation List (Process Catalog)
Product model: QAI 9-4. Part/assembly no.: YB458-71. One process card per operation.

Op. no. | Operation        | Equipment
5       | Stock preparation | Sawing machine
10      | Turning           | C620
15      | Turning           | C620
20      | Rough milling     | 6H11
25      | Boring            | C620
30      | Turning (OD)      | C620
35      | Profile milling   | 6H11
40      | Turning (OD)      | C620
45      | Tapping           | CH12A
50      | Arc milling       | 6H11
55      | Filing            | Fitter's bench
60      | Hole lapping      | Lapping head
65      | Inspection        | Inspection bench
70      | Grinding          | 3153
75      | Turning           | C620
80      | Deburring         | Fitter's bench
85      | Hole lapping      | Lapping head
90      | Inspection        | Inspection bench
95      | Passivation       | -
100     | Inspection        | Inspection bench

Process cards (part: outer-cylinder bushing; material: QAI 9-4; translated from the filled cards; the remaining cards in the original copy are empty templates and are omitted):

Op. 5, Stock preparation. Equipment: sawing machine. Requirement: cut blank to 50 x 31 (0.5).
Op. 10, Turning. Equipment: C620. Fixture: three-jaw chuck. Tool: drill. Gauges: plug gauge, snap gauge.
Op. 15, Turning. Equipment: C620. Fixture: three-jaw chuck. Gauge: snap gauge.
Op. 20, Rough milling. Equipment: 6H11. Fixture: rotary table B-4x6. Tool: profiling (copy) milling cutter.
Op. 25, Boring. Equipment: C620.
Op. 30, Turning (OD). Equipment: C620.
Op. 35, Profile milling. Equipment: 6H11.
Op. 40, Turning (OD). Equipment: C620.
Graduation (Design) Thesis Opening Report

Department: Mechatronic Engineering Department
Major: Mechanical Design, Manufacturing and Automation
Class / Student name / Student ID / Supervisor / Report date:

Thesis title: Process planning for the outer-cylinder bushing and design of a lathe mandrel fixture and a milling profiling fixture
Topic source (check one): research / production / laboratory / special study
Thesis type (check one): design / thesis / other

1. Significance of the topic
In the product, one end of the outer-cylinder bushing connects to the outer-cylinder assembly of the hydraulic booster, and its center hole mates with the outer diameter of the piston, providing support at one end. The part therefore plays an important role in the machine and is produced in large quantities. Completing this topic is significant in the following respects:
1. It familiarizes me with the characteristics and working principle of the outer-cylinder bushing.
2. It helps me master the bushing's process flow, so that the designed part meets the requirements for accuracy, surface roughness and other process aspects.
3. Through reasonable design of the fixtures used, the process requirements are satisfied, combining theoretical knowledge with practice.
4. The topic improves the student's independent design ability and further meets employers' expectations of graduates.

2. Basic content and focus
1. Analysis of the part: (1) function of the part; (2) process analysis of the part.
2. Process specification design: (1) determining the blank manufacturing form; (2) selection of locating datums; (3) drafting the process route; (4) selection of machine tools, cutting tools, fixtures and gauges; (5) determination of machining allowances, operation dimensions and blank dimensions; (6) determination of cutting parameters and basic times.
3. Special fixture design: (1) overview of machine-tool fixtures; (2) selection of locating datums; (3) calculation of cutting force and clamping force; (4) statement of the problem; (5) fixture design; (6) features of the fixture design; (7) fixture classification; (8) development of fixture design technology; (9) fixture base components; (10) design method and procedure.
4. Design report.

3. Expected outcomes
The designed process for the outer-cylinder bushing is reasonable; the resulting process specification meets the drawing requirements for actual dimensions, surface roughness and machining accuracy; the fixture design is reasonable and meets the process requirements.

4. Existing problems and proposed solutions
Problems: 1. Lack of proficiency in drafting with CAD software. 2. Insufficient understanding of the part drawing. 3. Insufficient understanding of the part's machining process.
Solutions: 1. Carefully review the use of CAD software. 2. Consult teachers and classmates about unclear aspects of the part drawing. 3. Study machining process knowledge carefully to make up for my deficiencies.

5. Schedule
1. Analyze and draw the part drawing: 1 week
2. Draw the blank drawing: 1 week
3. Design the process route and compile the process specification: 4 weeks
4. Design the process tooling: 3 weeks
5. Write the design report (thesis): 2 weeks

6. References
1. Wang Xiankui. Machinery Manufacturing Technology [M]. Beijing: Tsinghua University Press, 1989.
2. Deng Wenying, Song Lihong. Metal Technology (5th ed.). Beijing: Higher Education Press, 2008.
3. Zhao Zhixiu (ed.). Machinery Manufacturing Technology [M]. Beijing: China Machine Press, 1985.
4. Xiao Jide. Machine Tool Fixture Design [M]. Beijing: China Machine Press, 2005.
5. Guan Huizhen, Feng Xin'an. Design of Machinery Manufacturing Equipment (3rd ed.). Beijing: China Machine Press, 2009.
6. Sun Liyuan. Guide to Machining Processes and Special Fixture Design [M]. Beijing: Metallurgical Industry Press, 2007.
7. Yang Shuzi. Machining Technologist Handbook [M]. Beijing: China Machine Press, 2001.
8. Zhu Yaoxiang, Pu Linxiang. Modern Fixture Handbook [M]. Beijing: China Machine Press, 2010.
9. Wu Zongze, Gao Zhi. Machine Design (2nd ed.). Beijing: Higher Education Press, 2009.

Supervisor's comments; supervisor's signature: (date)
Department's comments; department head's signature: (date)

Robot companion localization at home and in the office
Arnoud Visser, Jurgen Sturm, Frans Groen
Intelligent Autonomous Systems, Universiteit van Amsterdam
http://www.science.uva.nl/research/ias/

Abstract

The abilities of mobile robots depend greatly on the performance of basic skills such as vision and localization. Although great progress has been made in exploring and mapping extensive public areas with large holonomic robots on wheels, less attention has been paid to the localization of a small robot companion in a confined environment such as a room at home or in the office. In this article, a localization algorithm for the popular Sony entertainment robot Aibo inside a room is worked out. This algorithm can provide localization information based on the natural appearance of the walls of the room. The algorithm starts by making a scan of the surroundings, turning the head and the body of the robot on a certain spot. The robot learns the appearance of the surroundings at that spot by storing color transitions at different angles in a panoramic index. The stored panoramic appearance is used to determine the orientation (including a confidence value) relative to the learned spot for other points in the room. When multiple spots are learned, an absolute position estimate can be made. The applicability of this kind of localization is demonstrated in two environments: at home and in an office.

1 Introduction
1.1 Context

Humans orientate easily in their natural environments. To be able to interact with humans, mobile robots also need to know where they are. Robot localization is therefore an important basic skill of a mobile robot companion like the Aibo. Yet the Sony entertainment software contained no localization software until the latest release (Aibo Mind 3 remembers the direction of its station and toys relative to its current orientation). Still, many other applications for a robot companion, like collecting a newspaper from the front door, strongly depend on fast, accurate and robust position estimates.
As long as the localization of a walking robot like the Aibo is based on odometry after sparse observations, no robust and accurate position estimates can be expected.

Most of the localization research with the Aibo has concentrated on the RoboCup. At the RoboCup (see the RoboCup Four-Legged League homepage, http://www.tzi.de/4legged, last accessed in May 2006), artificial landmarks such as colored flags, goals and field lines can be used to achieve localization accuracies below six centimeters [6, 8].

The price that these RoboCup approaches pay is their total dependency on artificial landmarks of known shape, position and color. Most algorithms even require manual calibration of the actual colors and lighting conditions used on a field, and they are still quite susceptible to disturbances around the field, such as those produced by brightly colored clothes in the audience.

The interest of the RoboCup community in more general solutions has been (and still is) growing over the past few years. The almost-SLAM challenge of the 4-Legged League is a good example of the state of the art in this community (details about the Simultaneous Localization and Mapping challenge can be found at http://www.tzi.de/4legged/pub/Website/Downloads/Challenges2005.pdf). For this challenge, additional landmarks with bright colors are placed around the borders of a RoboCup field. The robots get one minute to walk around and explore the field. Then the normal beacons and goals are covered up or removed, and the robot must move to a series of five points on the field, using the information learnt during the first minute. The winner of this challenge [6] reached the five points by using mainly the information of the field lines; the additional landmarks were only used to break the symmetry of the soccer field.

A more ambitious challenge is formulated in the newly founded RoboCup @Home league (see the RoboCup @Home League homepage, http://www.ai.rug.nl/robocupathome/, last accessed in May 2006). In this challenge the robot has to safely navigate toward objects in a living-room environment. The robot gets 5 minutes to learn the environment. After the learning phase, the robot has to visit 4 distinct places/objects in the scenario, at least 4 meters away from each other, within 5 minutes.

1.2 Related Work

Many researchers have worked on the SLAM problem in general, for instance on panoramic images [1, 2, 4, 5]. These approaches are inspiring, but only partially transferable to the 4-Legged League. The Aibo is not equipped with an omnidirectional high-quality camera. The camera in the nose has only a horizontal opening angle of 56.9 degrees and a resolution of 416 x 320 pixels. Further, the horizon in the images is not constant but depends on the movements of the head and legs of the walking robot, so each image is taken from a slightly different perspective, and the path of the camera center is only in first approximation a circle. Furthermore, the images are taken while the head is moving. When moving at full speed, this can give a difference of 5.4 degrees between the top and the bottom of the image, so the image appears tilted as a function of the turning speed of the head. Still, the location of the horizon can be calculated by solving the kinematic equations of the robot. To process the images, a 576 MHz processor is available in the Aibo, which means that only simple image-processing algorithms are applicable. In practice, the image is analyzed by following scan-lines with a direction relative to the calculated horizon.
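The paper only states that the horizon follows from the kinematic chain. As a hedged illustration of the projection step alone, the pinhole-camera sketch below (the constants, names and sign conventions are our own assumptions, not values from the Aibo SDK) shows how a given head pitch and roll place the horizon line in the image; the real computation would first solve the full neck and leg kinematics to obtain those angles:

```python
import numpy as np

# Assumed pinhole model for the Aibo nose camera: 416x320 image with a
# 56.9 degree horizontal opening angle; square pixels, so the focal length
# in pixels also applies vertically.
WIDTH, HEIGHT = 416, 320
F_PIX = (WIDTH / 2) / np.tan(np.radians(56.9 / 2))

def horizon_line(pitch_rad, roll_rad):
    """Return (row at image center, slope) of the projected horizon.

    pitch_rad > 0 means the camera looks up, which pushes the horizon below
    the image center (image rows grow downward); the head roll tilts the
    horizon line by the same angle.
    """
    row = HEIGHT / 2 + F_PIX * np.tan(pitch_rad)
    slope = np.tan(roll_rad)   # pixels of rise per pixel column
    return row, slope

# Head looking 10 degrees down while the scan motion rolls the image 5.4 degrees:
row, slope = horizon_line(np.radians(-10), np.radians(5.4))
```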
In our approach, multiple sectors above the horizon are analyzed, with multiple scan-lines in the vertical direction in each sector. One of the general approaches [3] also divides the image into multiple sectors, but that image is omnidirectional and each sector is analyzed for its average color. Our method analyzes each sector for a different characteristic feature: the frequency of color transitions.

2 Approach

The main idea is quite intuitive: we would like the robot to generate and store a 360-degree circular panorama image of its environment while it is in the learning phase. After that, it should align each new image with the stored panorama, and from that the robot should be able to derive its relative orientation (in the localization phase). This alignment is not trivial, because the new image can be translated, rotated, stretched and perspectively distorted when the robot does not stand at the point where the panorama was originally learned [11].

Of course, the Aibo is not able (at least not in real time) to compute this alignment on full-resolution images. Therefore a reduced feature space is designed so that the computations become tractable on an Aibo (our algorithm consumes approximately 16 milliseconds per image frame, so we can easily process images at the full Aibo frame rate of 30 fps). So a reduced circular 360-degree panorama model of the environment is learned. Figure 1 gives a quick overview of the algorithm's main components.

The Aibo performs a calibration phase before the actual learning can start. In this phase the Aibo first decides on a suitable camera setting (i.e. camera gain and shutter setting) based on the dynamic range of brightness in the autoshutter step. Then it collects color pixels by turning its head for a while, and finally it clusters these into the 10 most important color classes in the color clustering step, using a standard implementation of the Expectation-Maximization algorithm assuming a Gaussian mixture model [9]. The result of the calibration phase is an automatically generated lookup table that maps every YCbCr color onto one of the 10 color classes and can therefore be used to segment incoming images into characteristic color patches (see Figure 2(a)). These initialization steps are worked out in more detail in [10].

Figure 1: Architecture of our algorithm.
Figure 2: Image processing, from the raw image to the sector representation: (a) unsupervised learned color segmentation; (b) sectors and frequent color transitions visualized. This conversion consumes approximately 6 milliseconds/frame on a Sony Aibo ERS-7.

2.1 Sector signature correlation

Every incoming image is now divided into its corresponding sectors (80 sectors correspond to 360 degrees; with an opening angle of the Aibo camera of approximately 50 degrees, this yields between 10 and 12 sectors per image, depending on the head pan/tilt). The sectors are located above the calculated horizon, which is generated by solving the kinematics of the robot. Using the lookup table from the unsupervised learned color clustering, we can compute the sector features by counting, per sector, the transition frequencies between each pair of color classes in the vertical direction. This yields a histogram of 10x10 transition frequencies per sector, which we subsequently discretize into 5 logarithmically scaled bins. In Figure 2(b) we display the most frequent color transitions for each sector; some sectors have multiple color transitions in the most frequent bin, while other sectors have a single or no dominant color transition. This is only visualization; not just the most frequent color transitions but the frequencies of all 100 color transitions are used as the characteristic feature of the sector.
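A compact sketch of this feature extraction, assuming the image has already been segmented into class indices by the lookup table (the function name and the exact bin edges below are our own illustrative assumptions):

```python
import numpy as np

NUM_CLASSES = 10
# Assumed logarithmic bin edges: counts of 0, 1, 2-3, 4-7 and >=8 vertical
# transitions fall into the five bins 0..4.
BIN_EDGES = np.array([1, 2, 4, 8])

def sector_signature(class_image):
    """Discretized vertical color-transition histogram of one sector.

    `class_image` holds a color-class index (0..9) per pixel of the sector
    above the horizon, shaped (rows, cols). Returns a 10x10 array of binned
    transition counts.
    """
    upper, lower = class_image[:-1, :], class_image[1:, :]
    mask = upper != lower   # consider vertical transitions only
    counts = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int32)
    np.add.at(counts, (upper[mask], lower[mask]), 1)
    return np.digitize(counts, BIN_EDGES)

# Example: a segmented 40x8-pixel sector patch with random class labels.
patch = np.random.randint(0, NUM_CLASSES, size=(40, 8))
signature = sector_signature(patch)   # 10x10 values in 0..4
```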
In the learning phase, we estimate all these 80 x (10x10) distributions by turning the head and body of the robot (when 16-bit integers are used, a complete panorama model can be described by (80 sectors) x (10 colors x 10 colors) x (5 bins) x (2 bytes) = 80 KB of memory). We define a single distribution for a currently perceived sector by

$P_{current}(i, j, bin) = \begin{cases} 1 & \text{if } discretize(freq(i, j)) = bin \\ 0 & \text{otherwise} \end{cases}$  (1)

where $i, j$ are indices of the color classes and $bin$ is one of the five frequency bins. Each sector is seen multiple times, and the many frequency-count samples are combined into a distribution learned for that sector by the equation

$P_{learned}(i, j, bin) = \dfrac{count_{sector}(i, j, bin)}{\sum_{bin' \in frequencyBins} count_{sector}(i, j, bin')}$  (2)

After the learning phase we can simply multiply the current and the learned distributions to get the correlation between a currently perceived and a learned sector:

$Corr(P_{current}, P_{learned}) = \prod_{i,j \in colorClasses, \; bin \in frequencyBins} P_{learned}(i, j, bin) \, P_{current}(i, j, bin)$  (3)

Since $P_{current}$ is an indicator, this product effectively multiplies, for every color pair $(i, j)$, the learned probability of the currently observed frequency bin.

2.2 Alignment

After all the correlations between the stored panorama and the new image signatures have been evaluated, we would like to get an alignment between the stored and the seen sectors such that the overall likelihood of the alignment becomes maximal. In other words, we want to find a diagonal path with minimal cost through the correlation matrix. This minimal path is indicated by green dots in Figure 3; the path is extended to a green line for the sectors that are not visible in the latest perceived image.

We consider the fitted path to be the true alignment and extract the rotational estimate $\varphi_{robot}$ from the offset of the path from the center diagonal ($\Delta sectors$):

$\varphi_{robot} = \dfrac{360^\circ}{80} \, \Delta sectors$  (4)

This rotational estimate is the difference between the solid green line and the dashed white line in Figure 3, indicated by the orange halter. Further, we estimate the noise by fitting a second path through the correlation matrix, far away from the best-fitted path:

$SNR = \dfrac{\sum_{(x,y) \in minimumPath} Corr(x, y)}{\sum_{(x,y) \in noisePath} Corr(x, y)}$  (5)

The noise path is indicated in Figure 3 with red dots.

Figure 3: Visualization of the alignment step while the robot is scanning with its head: (a) robot standing on the trained spot (the matching line is just the diagonal); (b) robot turned right by 45 degrees (the matching line is displaced to the left). The green solid line marks the minimum path (assumed true alignment), while the red line marks the second-minimal path (assumed peak noise). The white dashed line represents the diagonal, and the orange halter illustrates the distance between the found alignment and the center diagonal ($\Delta sectors$).

2.3 Position Estimation with Panoramic Localization

The algorithm described in the previous section can be used to get a robust bearing estimate together with a confidence value for a single trained spot. As we finally want to use this algorithm to obtain full localization, we extended the approach to support multiple training spots. The main idea is that the robot determines to what degree its current position resembles the previously learned spots and then uses interpolation to estimate its exact position.
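A minimal sketch of the alignment idea: once every stored sector has been correlated with every currently seen sector, the best diagonal through the matrix corresponds to a circular offset. The simplification below scores whole-panorama shifts rather than fitting a true minimal-cost path as the paper does, and all names are our own assumptions:

```python
import numpy as np

SECTORS = 80
DEG_PER_SECTOR = 360.0 / SECTORS

def estimate_rotation(corr):
    """Estimate the robot's rotation from an 80x80 sector correlation matrix.

    `corr[s, t]` is the correlation between stored sector s and seen sector t
    (zero for sectors outside the current field of view). Every circular
    offset of the diagonal is scored; a crude SNR is reported against the
    runner-up offset, in the spirit of Eq. (5).
    """
    scores = np.empty(SECTORS)
    for shift in range(SECTORS):
        # Sum of correlations along the diagonal displaced by `shift` sectors.
        scores[shift] = sum(corr[s, (s + shift) % SECTORS] for s in range(SECTORS))
    best = int(np.argmax(scores))
    runner_up = np.sort(scores)[-2]
    snr = scores[best] / runner_up if runner_up > 0 else float("inf")
    # Offsets above half a turn are reported as negative rotations, per Eq. (4).
    delta = best if best <= SECTORS // 2 else best - SECTORS
    return delta * DEG_PER_SECTOR, snr

# Example: a synthetic correlation matrix peaked at a 10-sector offset (45 deg).
corr = np.full((SECTORS, SECTORS), 0.01)
for s in range(SECTORS):
    corr[s, (s + 10) % SECTORS] = 1.0
angle, snr = estimate_rotation(corr)   # angle -> 45.0
```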
As we think that this approach could also be useful for the RoboCup @Home league (where robot localization in complex environments like kitchens and living rooms is required), it could become possible that we finally want to store a comprehensive panorama model library containing dozens of previously trained spots (for an overview see [1]).

However, due to the computation time of the feature-space conversion and the panorama matching, only a single training spot and its corresponding panorama model can be selected per frame. Therefore, the robot cycles through the learned training spots one by one. Every panorama model is associated with a gradually changing confidence value representing a sliding average of the confidence values we get from the per-image matching.

After training, the robot memorizes a given spot by storing the confidence values received from the training spots. By comparing a new confidence value with its stored reference, it is easy to deduce whether the robot stands closer to or farther from the imprinted target spot.

We assume that the imprinted target spot is located somewhere between the training spots. Then, to compute the final position estimate, we simply weight each training spot with its normalized corresponding confidence value:

$position_{robot} = \sum_i position_i \, \dfrac{confidence_i}{\sum_j confidence_j}$  (6)

This should yield zero when the robot is assumed to stand at the target spot, or a translation estimate toward the robot's position when the confidence values are not in balance anymore.
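A small sketch of this weighted interpolation (Eq. (6)): the confidence of each training spot simply acts as a weight on its known position. The helper name and the sample values are our own illustration:

```python
import numpy as np

def weighted_position(spot_positions, confidences):
    """Interpolate the robot position from trained spots, per Eq. (6).

    `spot_positions` is an (n, 2) array of known training-spot coordinates;
    `confidences` holds the sliding-average matching confidence per spot.
    """
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()   # normalize the confidence values
    return w @ np.asarray(spot_positions, dtype=float)

# Four spots roughly 1 m from the field center along the axes, as in the
# proof-of-concept experiment below (coordinates in meters, values illustrative).
spots = [(1, 0), (0, 1), (-1, 0), (0, -1)]
conf = [0.8, 0.5, 0.3, 0.5]                # robot is closest to the first spot
print(weighted_position(spots, conf))      # estimate biased toward (1, 0)
```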
To prove the validity of this idea, we trained the robot on four spots on a regular 4-Legged field in our robolab. The spots were located along the axes, approximately 1 m away from the center. As the target spot, we simply chose the center of the field. The training itself was performed fully autonomously by the Aibo and took less than 10 minutes. After training was complete, the Aibo walked back to the center of the field. We recorded the found position, kidnapped the robot to an arbitrary position around the field, and let it walk back again.

Please be aware that our approach to multi-spot localization is at this moment rather primitive and should only be understood as a proof of concept. In the end, the panoramic localization data from vision should of course be processed by a more sophisticated localization algorithm, like a Kalman or particle filter (not least to incorporate movement data from the robot).

3 Results
3.1 Environments

We selected four different environments to test our algorithm under a variety of circumstances. The first two experiments were conducted at home and in an office environment (XX office, DECIS lab, Delft) to measure performance under real-world circumstances. The experiments were performed on a cloudy morning, on a sunny afternoon, and late in the evening. Furthermore, we conducted exhaustive tests in our laboratory. Even more challenging, we took an Aibo outdoors (see [7]).

3.2 Measured results

Figure 4(a) illustrates the results of a rotational test in a normal living room. As the error in the rotation estimates ranges between -4.5 and +4.5 degrees, we may assume an error in alignment of a single sector (a full circle of 360 degrees divided by 80 sectors); moreover, the size of the confidence interval can be translated into at most two sectors, which corresponds to the maximal angular resolution of our approach.

Figure 4: Typical orientation estimation results of experiments conducted at home: (a) rotational test in a natural environment (living room, sunny afternoon); (b) translational test in a natural environment (child's room, late in the evening). In the rotational experiment on the left, the robot is rotated over 90 degrees on the same spot, and every 5 degrees its orientation is estimated. The robot is able to find its true orientation with an error estimate equal to one sector of 4.5 degrees. The translational test on the right is performed in a child's room. The robot is translated over a straight line of 1.5 meters, which covers the major part of the free space in this room. The robot is able to maintain a good estimate of its orientation, although the error estimate increases away from the location where the appearance of the surroundings was learned.

Figure 4(b) shows the effects of a translational dislocation in a child's room. The robot was moved back and forth through the room on a straight line (via the trained spot somewhere in the middle). The robot is able to estimate its orientation quite well on this line. The discrepancy with the true orientation is between +12.1 and -8.6 degrees close to the walls. This is also reflected in the computed confidence interval, which grows steadily as the robot is moved away from the trained spot. The results are quite impressive given the relatively big movements in a small room and the resulting significant perspective changes in that room.

Figure 5(a) also stems from a translational test (cloudy morning), conducted in an office environment. The free space in this office is much larger than at home. The robot was moved along a 14 m long straight line to the left and right, and its orientation was estimated. Note that the error estimate stays low at the right side of this plot. This is an artifact which nicely reflects the repetition of similarly looking working islands in the office.

In both translational tests it can be seen intuitively that the rotation estimates are within acceptable range. This can also be shown quantitatively (see Figure 5(b)): both the orientation error and the confidence interval increase slowly and gracefully when the robot is moved away from the training spot.

Finally, Figure 6 shows the result of the experiment to estimate the absolute position with multiple learned spots. It can be seen that the localization is not as accurate as traditional approaches, but it can still be useful for some applications (bearing in mind that no artificial landmarks are required). We repeatedly recorded a deviation to the upper right, which we think can be explained by the fact that different learning spots do not produce equally strong confidence values; we believe we will be able to correct for this by means of confidence-value normalization in the near future.

4 Conclusion

Although at first sight the algorithm seems to rely on specific texture features of the surrounding surfaces, in practice no such dependency could be found. This can be explained by two reasons: firstly, as the (vertical) position of a color transition is not used anyway, the algorithm is quite robust against (vertical) scaling.
Secondly, as the algorithm aligns on many color transitions in the background (typically more than a hundred in the same sector), the few color transitions produced by objects in the foreground (like beacons and spectators) have a minor impact on the match, because their sizes relative to the background are comparatively small.

The lack of an accurate absolute position estimate seems to be a clear drawback with respect to the other methods, but bearing information alone can already be very useful for certain applications.

Figure 5: Challenging orientation results: (a) translational test in a natural environment (office, cloudy morning); (b) signal degradation as a function of the distance to the learned spot (measured in the laboratory). On the left, a translational test in an office environment over 14 meters along a line 80 centimeters from the (single) learned spot. A translation to the left of the office increases the error estimate, as expected. When translating to the right of the office, the orientation estimate oscillates, but the error estimate stays low. This is due to repeating patterns in the office: after 4 meters there is another group of desks and chairs which resemble the first group.