Design of the Drive-Axle Assembly for a Yuejin Cargo Truck (with CAD Drawings and Design Notes)
Graduation Design (Thesis) Outline

Title: Design of the Drive-Axle Assembly of a 3-Ton Yuejin Cargo Truck
Student name:        Student ID:        Major:        College:
Supervisor:        Title:
February 25, 20XX

I. Abstract
II. Introduction
  1. Overview
  2. Structure and classification of drive axles
  3. Main content of the design
III. Determination of the Overall Scheme
  1. Main technical parameters
  2. Structural scheme of the final drive
  3. Structural scheme of the differential
  4. Type of half-shaft
  5. Type of axle housing
  6. Chapter summary
IV. Final-Drive Design
  1. Material of the final-drive bevel gears
  2. Design of the final-drive bevel gears, including: determination of the calculated gear loads; geometric calculation of the spiral bevel gears
  3. Strength calculation of the final-drive bevel gears
  4. Design calculation of the final-drive bevel-gear bearings
  5. Lubrication of the final drive
  6. Chapter summary
V. Differential Design
  1. Structural form of the differential
  2. Differential gear material
  3. Design of the bevel-gear differential gears
  4. Strength calculation of the bevel-gear differential gears
  5. Chapter summary
VI. Half-Shaft Design
  1. Type of half-shaft
  2. Material and heat treatment of the half-shaft
  3. Design and calculation of the half-shaft
  4. Chapter summary
VII. Axle-Housing Design
  1. Structural form of the axle housing
  2. Load analysis and strength calculation of the axle housing, including: bending-stress design; strength under impact loads on uneven roads; strength at maximum tractive force; strength under emergency braking
  3. Chapter summary
VIII. Conclusion

Graduation Design (Thesis)

Title: Design of the Drive-Axle Assembly of a 3-Ton Yuejin Cargo Truck
Student name:        Supervisor:        College:        Major:        Class:        Student ID:
Submission date: May 6, 20xx        Defense date: May 14, 20xx

Graduation Design (Thesis) Foreign-Language Reference and Translation

Translation title: A New Vehicle Detection Method
Student name:        Student ID:        Major:        College:        Supervisor:        Title:
February 25, 20xx

Note: Students are required to consult at least one foreign-language reference related to their graduation-design (thesis) topic and to translate at least 10,000 printed characters (or about 3,000 Chinese characters). The translation should in principle be printed (if handwritten, it must be written on 400-character grid manuscript paper), bound together with the standard cover provided by the school and the English original, and completed within two weeks of the start of the graduation-design (thesis) work; it counts as part of the grade.

English Original

A New Vehicle Detection Method
Zebbara Khalid, Abdenbi Mazoul, Mohamed El Ansari
LabSIV, Department of Computer Science, Faculty of Science, Ibn Zohr University, Agadir, Morocco

Abstract
This paper presents a new vehicle detection method for images acquired by cameras embedded in a moving vehicle. Given the sequence of images, the proposed algorithm should detect all cars in real time. With respect to the driving direction, cars can be classified into two types: cars driving in the same direction as the intelligent vehicle (IV) and cars driving in the opposite direction. Owing to the distinct features of these two types, we carry out the method in two main steps. The first detects all obstacles in the images using the so-called association approach combined with a corner detector. The second step validates each vehicle using an AdaBoost classifier.
The new method has been applied to different image data, and the experimental results validate its efficacy.

Keywords: intelligent vehicle; vehicle detection; association; optical flow; AdaBoost; Haar filter.

I. INTRODUCTION

Detection of road obstacles [1]-[6] is an important task in intelligent transportation. A number of sensors are embedded in an IV to perform the vehicle detection task. These sensors can be classified into passive and active sensors. Since active sensors are expensive and cause pollution to the environment, we propose to use passive sensors in our vehicle detection approach. The data we process to achieve vehicle detection are images taken from a camera embedded in a moving car.

In vision-based obstacle detection, two approaches exist. The first is the monocular approach, which uses a single camera and interprets the image using prior knowledge about the obstacles; this information can be texture [7] or color [8], [9]. The second is the stereo or multi-camera approach, which is based on the disparity map obtained after matching primitives between the different views of the sensor [10], [11], [12].

Vehicle detection algorithms have two basic steps: Hypothesis Generation (HG) and Hypothesis Verification (HV) [13]. In the HG step, the algorithm hypothesizes the locations of vehicles in an image. In the HV step, the algorithm verifies the presence of a vehicle in an image. The methods used in the HG step can be categorized into three classes: knowledge-based methods, which use object symmetry, color, corners, and edges; stereo-vision-based methods, which use two cameras; and motion-based methods, which track the motion of pixels between consecutive frames [14]. The methods used in the HV step are template-based and appearance-based. Template-based methods use predefined patterns of the vehicle class.
Appearance-based methods include a pattern classification system that discriminates between vehicle and non-vehicle. Many works [15], [16], [17] tackle the real-time on-road vehicle detection problem. All of them use monocular cameras and have real-time constraints. [15] used horizontal and vertical edges (knowledge-based methods) in the HG step; the regions selected in the HG step are then matched against predefined templates in the HV step. [16] also used horizontal and vertical edges in the HG step, but applied the Haar wavelet transform and SVMs (appearance-based methods) in the HV step. [17] detected long-distance stationary obstacles, including vehicles, using an efficient optical flow algorithm [18] in the HG step and the sum of squared differences (SSD) with a threshold value to verify their hypotheses.

This paper presents a new approach for vehicle detection. At each time step, the decision on the presence of vehicles in the road scene is made from the current frame and its preceding one. We use the association approach [20], which consists in finding the relationship between consecutive frames by exploiting the displacement of edges. For each edge point in one frame we look for its associate in the preceding frame, if any. Obstacles can be detected by analyzing the association results. An AdaBoost classifier is then used to verify whether an obstacle is a vehicle.

II. VEHICLE DETECTION METHOD

This section details the main steps of the proposed method. We extract the edge points and corners of the consecutive images and keep only the edge points belonging to curves that contain corners. The association is performed between consecutive images, and the association results are analyzed to detect obstacles (objects). Finally, AdaBoost is used to decide whether a detected object is a vehicle or not.

A. Corner Detection

We use the Shi and Tomasi [19] corner detector, a modified version of the Harris corner detector.
It uses an affine transformation instead of a simple translation. Given an image patch over the area (u, v), the Shi and Tomasi detector finds corners by applying an affine transformation A and shifting by (x, y) (Eq. 1):

S = Σ_(u,v) [ I(u, v) − I(A(u, v) − (x, y)) ]²   (1)

After computing the corner points, a threshold is applied to remove small clusters of nearby corners; corners on a vehicle are much more numerous than on trees or road features.

B. Edge Detection and Filtering

The Canny operator is used to detect the edge points of the consecutive images. Edge curves are formed by grouping edge points using morphological operations. Among the resulting curves, we keep only those crossing at least one of the corners computed in subsection A.

C. Association

The rest of this subsection describes the method we use to find associations between the edges of successive frames. Let Ck−1 be a curve in the image Ik−1 and Ck its corresponding curve in the image Ik. Consider two edge points Pk−1 and Qk−1 belonging to the curve Ck−1 and their correspondents Pk and Qk belonging to the curve Ck (see Fig. 1). We define the associate of the point Pk−1 as the point belonging to the curve Ck that has the same y-coordinate as Pk−1. Note that association is neither correspondence nor motion: two associate points are points belonging to two corresponding curves of two successive images of the same sequence that share the same y-coordinate. From Fig. 1, we remark that the point Qk meets these constraints; consequently, Qk constitutes the associate of the point Pk−1.

In practice, we assume that the movement of objects from one frame to the next is small. So, if x1 and x2 represent the x-coordinates of Pk−1 and Qk, respectively, x2 should belong to the interval [x1 − Dx, x1 + Dx], where Dx is a threshold to be selected. This constraint reduces the number of associate candidates. The gradient magnitude is used to choose the best associate.
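In the standard formulation of the Shi and Tomasi criterion, a pixel is kept as a corner when the smaller eigenvalue of the 2x2 gradient structure matrix of its neighborhood is large, i.e. when the intensity varies strongly in both directions. The following pure-Python sketch illustrates that criterion on a synthetic image (the function name and window size are illustrative assumptions, not the authors' implementation):

```python
import math

def shi_tomasi_response(img, x, y, win=1):
    """Minimum eigenvalue of the 2x2 structure matrix around (x, y).
    img is a list of rows of grayscale values; gradients are taken
    with central differences. A pixel is a corner candidate when this
    response exceeds a chosen threshold."""
    sxx = sxy = syy = 0.0
    for v in range(y - win, y + win + 1):        # v indexes rows (y)
        for u in range(x - win, x + win + 1):    # u indexes columns (x)
            ix = (img[v][u + 1] - img[v][u - 1]) / 2.0
            iy = (img[v + 1][u] - img[v - 1][u]) / 2.0
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    # smaller eigenvalue of [[sxx, sxy], [sxy, syy]]
    return (tr - math.sqrt(max(tr * tr - 4.0 * det, 0.0))) / 2.0

# A white square on a black background: the square's corner responds
# strongly, a point in the middle of a straight edge responds weakly.
img = [[255 if (3 <= r <= 8 and 3 <= c <= 8) else 0 for c in range(12)]
       for r in range(12)]
corner = shi_tomasi_response(img, 3, 3)  # top-left corner of the square
edge = shi_tomasi_response(img, 6, 3)    # middle of the square's top edge
```

This is why thresholding the response, as the paper describes, discards edge and road-texture points while retaining vehicle corners.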
As a similarity criterion, the absolute difference between the gradient magnitudes of the edges is used. As seen in Fig. 1, the point Pk represents the match of the point Pk−1, whereas the point Qk constitutes its associate; Pk and Qk differ because of the movement of the point Pk in the image Ik.

We cannot find associations for all edges, because of the different viewpoints and the resulting object movement. The situation is the same as in matching algorithms, where some parts are visible in one image but occluded in the other.

Figure 1. Ik−1 and Ik represent successive images of the same sequence, e.g. the left sequence. The point Qk in the image Ik constitutes the associate of the point Pk−1 in the image Ik−1. The points Pk and Pk−1 are shown in red; the points Qk and Qk−1 are shown in green.

Let Qk be an edge point belonging to the curve Ck in the image Ik. The associate of Qk can be found as a corresponding point Pk−1 belonging to the curve Ck−1 in the horizontal neighborhood of Qk in the previous image Ik−1 (more details about the association method are given in [20]). Associated points should belong to the same object contour and should have similar or close gradient magnitudes and orientations. In this work, we use the cost function (Eq. 2) described below, which computes the distance between two candidate associate points using gradient magnitudes; the edge pair with the smallest cost is taken as an associated pair of features. Because of the vertical movement of the scene, the association approach does not guarantee that every feature in the image has an associated point, but some good associate points are enough to construct the vehicle objects.
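The association rule of this subsection (same y-coordinate, x-coordinate within plus or minus Dx, smallest gradient-magnitude difference) can be sketched as follows; the edge representation and names are illustrative assumptions, not the authors' code:

```python
DX = 5  # horizontal search radius Dx (a threshold to be tuned)

def associate(edges_prev, edges_curr, dx_max=DX):
    """For each edge (x, y, gradient_magnitude) in the current frame,
    find its associate in the preceding frame: same y-coordinate,
    x within +/- dx_max, and the smallest absolute difference in
    gradient magnitude (the similarity criterion)."""
    assoc = {}
    for (x, y, g) in edges_curr:
        best, best_cost = None, None
        for (xp, yp, gp) in edges_prev:
            if yp == y and abs(xp - x) <= dx_max:
                cost = abs(gp - g)  # gradient-magnitude difference
                if best_cost is None or cost < best_cost:
                    best, best_cost = (xp, yp), cost
        if best is not None:
            assoc[(x, y)] = best
    return assoc

# Toy example: a contour moved 3 pixels to the right between frames.
prev_edges = [(10, 5, 100.0), (11, 6, 90.0)]
curr_edges = [(13, 5, 98.0), (14, 6, 91.0), (40, 5, 97.0)]
pairs = associate(prev_edges, curr_edges)
# (13, 5) and (14, 6) find associates; (40, 5) has no candidate
# within Dx, illustrating that not every edge gets associated.
```

The returned horizontal displacements are exactly the per-pixel distances that the object-detection step later accumulates.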
F(dx) = min Σ_{x=u−w}^{u+w} | I(x, y) − I(x + dx, y) |   (2)

where dx is the distance a contour moves between instants t0 and t1. Given a point (u, v) in image It, the algorithm finds the point (u + dx, v), if it exists, in image It+1 that minimizes the cost function F (Fig. 2); w is the neighbourhood window around (x, y).

Figure 2. (a) Edge detection at instant t0. (b) Edge detection at instant t1. (c) Association vectors.

D. Detection of Objects

Let Ass be the association image, and let M and N be the image width and height, respectively. For each pixel (x, y) in the current image, Ass(x, y) is the distance between the pixel (x, y) and its associate in the preceding image (frame). Obstacles can be detected using the following functions:

F1(i) = Σ_{j=1}^{N} Ass(i, j)   (3)

F2(j) = Σ_{i=1}^{M} Ass(i, j)   (4)

where i = 1, …, M and j = 1, …, N. The values of F1 and F2 are maximal in areas where there are obstacles: F1 determines the horizontal bounds of the obstacles and F2 their vertical bounds, so segmenting the two functions yields both sets of bounds. Fig. 3 illustrates an example of the computation of F1 and F2: Fig. 3(a) depicts the association image, Fig. 3(b) the computed function F1, and Fig. 3(c) the computed function F2.

Figure 3. (a) The association image; (b) the computed function F1 (Eq. 3); (c) the computed function F2 (Eq. 4).

E. Validation Using AdaBoost

For the detection and localization step, we use a robust and fast approach based on image intensities: AdaBoost, which combines simple descriptors (Haar features) into a strong classifier.

The concept of boosting was proposed in 1995 by Freund [22]. The boosting algorithm uses weak hypotheses (error rate below 0.5) together with a priori knowledge to build a strong hypothesis. In 1996, Freund
and Schapire proposed the AdaBoost algorithm, which chooses weak hypotheses automatically with adjusted weights [21]; AdaBoost does not depend on a priori knowledge.

In 2001, Viola and Jones applied the AdaBoost algorithm to face detection for the first time. With simple descriptors (Haar features), a fast way of computing descriptor values (the integral image), and a cascade of classifiers, their method became the reference in face detection for its speed and robustness.

In 2002, Lienhart et al. extended the Haar descriptors and experimented with several variants of the AdaBoost algorithm: Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost, and LogitBoost. The learning and detection code for AdaBoost is published in the OpenCV (Open Source Computer Vision) library [23], [24]. [25] used histogram-of-oriented-gradients descriptors for the detection of humans and bicycles.

In our work we applied the Gentle AdaBoost algorithm through the OpenCV library, using two cascades, haarcascade_car_1 and haarcascade_car_2, to detect and locate most vehicles in the image sequences.

III. RESULTS

We have performed a number of experiments and comparisons to evaluate the proposed association approach in the context of vehicle detection. The system was implemented on an Intel Core CPU at 2.99 GHz and tested on different image sequences. It is able to detect most vehicles in different images in 20 milliseconds, which is faster than the optical-flow-based algorithm [26].

Figure 4. (a) Images at instants t0 and t1. (b) Corner points of the images in (a). (c) Edges that cross the corner points detected in (b). (d) Association vectors of the edges from (c), computed between instants t0 and t1. (e) Vertical (right) and horizontal (left) projections of the associated edge points.

Fig. 4 shows each step of our approach for vehicle detection using association and AdaBoost. The results illustrate several strong points of the proposed method. Fig.
4(a) shows images at instants t0 and t1. In Fig. 4(b), the corner points are computed and thresholded to eliminate other structures (trees, etc.), so that mostly corners of vehicles remain. Fig. 4(c) shows the edges that cross the corner points, keeping only the edges of vehicles. Fig. 4(d) shows the association vectors for each edge in frame t0. Fig. 4(e) shows the results computed in the object-detection step with formulas (3) and (4), which determine the abscissas and ordinates of the obstacles.

The proposed method has also been tested on the real road images depicted in Fig. 6. The HG and HV results are shown in Fig. 6(b) and Fig. 6(c), respectively. The results computed by our approach are clearly very satisfactory.

Figure 5. (a) Bounding box. (b) Validation of objects using AdaBoost.

Figure 6. (a) Original image. (b) Hypothesis Generation (HG). (c) Hypothesis Verification (HV).

IV. CONCLUSION

This paper presents a new vehicle detection method based on the association notion described above. In order to select more reliable features, a corner detector is used. Based on the horizontal and vertical projections of the associated edge points, a focused sub-region is selected as the region of interest. The experimental results have validated the efficacy of our method and show that it is capable of working in real time. In the future, we plan to improve our vehicle detection method and test it on much more complex obstacles (pedestrians, traffic lights, etc.) under different weather conditions.

REFERENCES

[1] R. Manduchi, A. Castano, A. Talukder, and L. Matthies, "Obstacle detection and terrain classification for autonomous off-road navigation," 2005.
[2] R. Labayrade, D. Aubert, and J. P. Tarel, "Real time obstacle detection on non flat road geometry through V-disparity representation," IEEE Intelligent Vehicles Symposium, Versailles, June 2002.
[3] M. Bertozzi and A.
Broggi, "GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Transactions on Image Processing, vol. 7, no. 1, January 1998.
[4] T. A. Williamson, "A high-performance stereo vision system for obstacle detection," PhD thesis, Carnegie Mellon University, September 1998.
[5] G. Toulminet, A. Bensrhair, S. Mousset, A. Broggi, and P. Miché, "Système de stéréovision pour la détection d'obstacles et de véhicules en temps réel," in Proc. 18th Symposium GRETSI'01 on Signal and Image Processing, Toulouse, France, September 2001.
[6] Tuo-Zhong Yao, Zhi-Yu Xiang, and Ji-Lin Liu, "Robust water hazard detection for autonomous off-road navigation," Journal of Zhejiang University Science, 2009.
[7] T. Kalinke, C. Tzomakas, and W. von Seelen, "A texture-based object detection and an adaptive model-based classification," 1998.
[8] R. Aufrère, F. Marmoiton, R. Chapuis, F. Collange, and J. Derutin, "Road detection and vehicles tracking by vision for ACC," 2000.
[9] R. Chapuis, F. Marmoiton, and R. Aufrère, "Road detection and vehicles tracking by vision for the ACC system in the VELAC vehicle," 2000.
[10] U. Franke and A. Joos, "Real-time stereo vision for urban traffic scene understanding," 2000.
[11] D. Koller, T. Luong, and J. Malik, "Binocular stereopsis and lane marker flow for vehicle navigation: lateral and longitudinal control," 1994.
[12] R. Labayrade, D. Aubert, and J. Tarel, "Real time obstacle detection in stereo vision on non flat road geometry through V-disparity representation," 2002.
[13] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: a review," IEEE Trans. Pattern Anal. Mach. Intell., 28(5):694-711, 2006.
[14] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. of the 7th IJCAI, pages 674-679, Vancouver, Canada, 1981.
[15] M. Betke, E. Haritaoglu, and L. Davis, "Real-time multiple vehicle detection and tracking from a moving vehicle," 12(2):69-83, 2000.
[16] Z. Sun, G. Bebis, R. Miller, and D. DiMeo,
"A real-time precrash vehicle detection system," in Proceedings of the 2002 IEEE Workshop on Applications of Computer Vision, Orlando, FL, Dec. 2002.
[17] A. Wedel, U. Franke, J. Klappstein, T. Brox, and D. Cremers, "Realtime depth estimation and obstacle detection from monocular video," in Pattern Recognition (Proc. DAGM), K. F. et al., Eds., volume 4174 of LNCS, pages 475-484, Berlin, Germany, September 2006. Springer.
[18] G. D. Hager and P. N. Belhumeur, "Efficient region tracking with parametric models of geometry and illumination," IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(10):1025-1039, 1998.
[19] J. Shi and C. Tomasi, "Good features to track," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR'94), Seattle, June 1994.
[20] M. El Ansari, S. Mousset, and A. Bensrhair, "Temporal consistent real-time stereo for intelligent vehicles," Pattern Recognition Letters, vol. 31, no. 11, pp. 1226-1238, August 2010.
[21] Y. Freund and R. E. Schapire, "Experiments with a new boosting algorithm," in Proc. 13th Int. Conf. on Machine Learning, pp. 148-156, 1996.
[22] Y. Freund, "Boosting a weak learning algorithm by majority," Information and Computation, 121(2):256-285, 1995.
[23] R. Lienhart and J. Maydt, "An extended set of Haar-like features for rapid object detection," IEEE ICIP 2002, vol. 1, pp. 900-903, Sep. 2002.
[24] R. Lienhart, A. Kuranov, and V. Pisarevsky, "Empirical analysis of detection cascades of boosted classifiers for rapid object detection," MRL Technical Report, May 2002.
[25] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: a review," IEEE Trans. Pattern Anal. Mach. Intell., 28(5):694-711, 2006.
[26] Jaesik Choi, "Realtime on-road vehicle detection with optical flows and Haar-like feature detector."

Chinese Translation: A New Vehicle Detection Method