A Robust Vision-based Moving Target Detection and Tracking System

Abstract. In this paper we present a new algorithm for real-time detection and tracking of moving targets in terrestrial scenes using a mobile camera. The algorithm consists of two modes: detection and tracking. In the detection mode, background motion is estimated and compensated using an affine transformation. The resulting motion-rectified image is used to locate the target with a split and merge algorithm, and additional features are checked for precise localization. When the target is identified, the algorithm switches to the tracking mode. A modified Moravec operator is applied to the target to identify feature points, which are matched with points in the region of interest of the current frame. The corresponding points are further refined using disparity vectors, and the refined points define the new position of the target in the current frame. The tracking system is capable of target shape recovery and can therefore successfully track targets whose distance from the camera varies or while the camera is zooming. Local and regional computations make the algorithm suitable for real-time applications. Experimental results show that the algorithm is reliable and can successfully detect and track targets in most cases.

Keywords: real-time moving target tracking and detection, feature matching, affine transformation, vehicle tracking, mobile camera image.

1 Introduction

Visual detection and tracking is one of the most challenging issues in computer vision. Its applications are numerous and span a wide range, including surveillance systems, vehicle tracking and aerospace applications, to name a few. Detection and tracking of abstract targets (e.g., vehicles in general) is a very complex problem and demands sophisticated solutions using conventional pattern recognition and motion estimation methods.

Motion-based segmentation is one of the most powerful tools for detection and tracking of moving targets. While it is simple to detect moving objects in image sequences obtained by a stationary camera, conventional difference-based methods fail when the camera is also moving: with a mobile camera, every object in the image sequence has an apparent motion related to the camera motion. A number of methods have been proposed for detecting moving targets with a mobile camera, including direct estimation of camera motion parameters, optical flow, and geometric transformations. Direct measurement of the camera motion parameters is the best method for cancelling the apparent background motion, but in some applications it is not possible to measure these parameters directly. Geometric transformation methods have low computational cost and are suitable for real-time purposes. These methods assume a uniform background motion, which can be modelled by an affine motion model. Once the apparent background motion is estimated, it can be exploited to locate moving objects.

In this paper we propose a new method for detection and tracking of moving targets using a mobile monocular camera. Our algorithm has two modes: detection and tracking. This paper is organized as follows. Section 2 discusses the detection procedure, Section 3 describes the tracking method, experimental results are shown in Section 4, and the conclusion appears in Section 5.
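As a concrete illustration of the detection idea outlined above, the following is a minimal NumPy sketch of background-motion compensation followed by differencing. It assumes grayscale frames and affine parameters a1-a6 that have already been estimated (see Section 2.1); the code and the helper names warp_affine and moving_target_mask are illustrative, not the authors' implementation.

    # Illustrative sketch, not the authors' code: warp the previous frame with the
    # estimated affine background motion and difference it against the current frame
    # so that independently moving objects stand out.
    import numpy as np

    def warp_affine(image, a):
        """Warp a grayscale image by the affine map of Eq. (1):
           X = a[0]*x + a[1]*y + a[2],  Y = a[3]*x + a[4]*y + a[5]
        (previous-frame coordinates (x, y) -> current-frame coordinates (X, Y)),
        using inverse nearest-neighbour sampling; out-of-bounds pixels become 0."""
        h, w = image.shape
        A = np.array([[a[0], a[1]], [a[3], a[4]]], dtype=float)
        t = np.array([a[2], a[5]], dtype=float)
        A_inv = np.linalg.inv(A)
        Ys, Xs = np.mgrid[0:h, 0:w]                              # current-frame pixel grid
        dst = np.stack([Xs.ravel(), Ys.ravel()]).astype(float)   # (2, h*w) destination coords
        src = A_inv @ (dst - t[:, None])                         # back-project into previous frame
        x = np.rint(src[0]).astype(int)
        y = np.rint(src[1]).astype(int)
        valid = (x >= 0) & (x < w) & (y >= 0) & (y < h)
        out = np.zeros(h * w, dtype=image.dtype)
        out[valid] = image[y[valid], x[valid]]
        return out.reshape(h, w)

    def moving_target_mask(prev_frame, curr_frame, a, threshold=25):
        """Background-motion compensation followed by differencing: warp the previous
        frame into the current frame's coordinates and threshold the absolute
        difference to expose independently moving regions."""
        rectified = warp_affine(prev_frame, a)
        diff = np.abs(curr_frame.astype(int) - rectified.astype(int))
        return (diff > threshold).astype(np.uint8)

The resulting binary mask is what the split and merge step of Section 2 would operate on; the threshold value here is arbitrary and only for illustration.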
2 Target detection

In the detection mode we use an affine transformation together with the LMedS (least median of squares) method for robust estimation of the apparent background motion. After compensating for the background motion, we apply a split and merge algorithm to the difference between the current frame and the transformed previous frame to obtain an estimate of the target positions. If no target is found, either there is no moving target in the scene or the relative motion of the target is too small to be detected. In the latter case, the target can be detected by adjusting the frame rate of the camera; the algorithm does this automatically by analyzing subsequent frames until a major difference is detected. We designed a voting method to verify the targets based on a priori knowledge of them. For the case of vehicle detection we use vertical and horizontal gradients to locate interesting features, as well as a constraint on the area of the target, as discussed in this section.

2.1 Background motion estimation

An affine transformation is used to model the motion of the camera. This model includes rotation, scaling and translation. The 2D affine transformation is described as follows:

    X_i = a_1 x_i + a_2 y_i + a_3
    Y_i = a_4 x_i + a_5 y_i + a_6                      (1)

where (x_i, y_i) are the locations of points in the previous frame, (X_i, Y_i) are the locations of the corresponding points in the current frame, and a_1-a_6 are the motion parameters. This transformation has six parameters; therefore, three matching pairs of points are sufficient to determine them.
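Since each correspondence contributes two equations, a minimal LMedS sample consists of exactly three matching pairs. The following is a minimal NumPy sketch of this robust estimation step, assuming putative point correspondences between the two frames are already available; the function names solve_affine and lmeds_affine are illustrative, not from the paper.

    # Illustrative sketch of LMedS estimation of the six affine parameters in Eq. (1)
    # from putative point correspondences (not the authors' implementation).
    import numpy as np

    def solve_affine(prev_pts, curr_pts):
        """Least-squares fit of Eq. (1): X = a1*x + a2*y + a3, Y = a4*x + a5*y + a6.
        prev_pts, curr_pts: (N, 2) arrays of (x, y) and (X, Y) with N >= 3."""
        x, y = prev_pts[:, 0], prev_pts[:, 1]
        M = np.stack([x, y, np.ones_like(x)], axis=1)            # (N, 3) design matrix
        ax, _, _, _ = np.linalg.lstsq(M, curr_pts[:, 0], rcond=None)  # a1, a2, a3
        ay, _, _, _ = np.linalg.lstsq(M, curr_pts[:, 1], rcond=None)  # a4, a5, a6
        return np.concatenate([ax, ay])

    def lmeds_affine(prev_pts, curr_pts, trials=200, rng=None):
        """Least median of squares: draw minimal 3-point samples, fit Eq. (1), and keep
        the parameters whose squared residuals have the smallest median."""
        rng = np.random.default_rng() if rng is None else rng
        n = len(prev_pts)
        best_a, best_med = None, np.inf
        for _ in range(trials):
            idx = rng.choice(n, size=3, replace=False)           # minimal sample: 3 pairs
            a = solve_affine(prev_pts[idx], curr_pts[idx])
            pred_x = a[0] * prev_pts[:, 0] + a[1] * prev_pts[:, 1] + a[2]
            pred_y = a[3] * prev_pts[:, 0] + a[4] * prev_pts[:, 1] + a[5]
            residuals = (curr_pts[:, 0] - pred_x) ** 2 + (curr_pts[:, 1] - pred_y) ** 2
            med = np.median(residuals)
            if med < best_med:
                best_a, best_med = a, med
        return best_a, best_med

Because the median of the residuals is minimized, up to roughly half of the correspondences may lie on moving objects or be mismatches without corrupting the background-motion estimate, which is what makes the subsequent differencing step reliable.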
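The paper's exact split and merge variant for localizing targets in the motion-rectified difference image is not detailed in this excerpt; the following is a generic quadtree-style sketch under assumed homogeneity criteria, with all names and thresholds chosen for illustration only.

    # Generic split-and-merge sketch over a binary difference mask (assumed variant,
    # not the authors' implementation).
    import numpy as np

    def split(mask, x0, y0, w, h, min_size=8, blocks=None):
        """Quadtree split: a block is kept when it is homogeneously 'moving', discarded
        when homogeneously background, and split into four quadrants otherwise.
        Real implementations relax the exact 0/1 homogeneity test with thresholds."""
        if blocks is None:
            blocks = []
        density = mask[y0:y0 + h, x0:x0 + w].mean()
        if w <= min_size or h <= min_size:                       # too small to split further
            if density > 0.5:
                blocks.append((x0, y0, w, h))
            return blocks
        if density == 0.0:                                       # homogeneous background
            return blocks
        if density == 1.0:                                       # homogeneous moving region
            blocks.append((x0, y0, w, h))
            return blocks
        hw, hh = w // 2, h // 2                                  # mixed block: recurse
        for dx, dy, bw, bh in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                               (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
            split(mask, x0 + dx, y0 + dy, bw, bh, min_size, blocks)
        return blocks

    def merge(blocks):
        """Merge kept blocks into a single enclosing bounding box; a full implementation
        would instead merge only adjacent blocks, yielding one box per target."""
        if not blocks:
            return None
        x1 = min(b[0] for b in blocks); y1 = min(b[1] for b in blocks)
        x2 = max(b[0] + b[2] for b in blocks); y2 = max(b[1] + b[3] for b in blocks)
        return (x1, y1, x2 - x1, y2 - y1)

Typical use on the thresholded difference mask would be blocks = split(mask, 0, 0, mask.shape[1], mask.shape[0]) followed by merge(blocks) to obtain a candidate target region, which the voting step described above would then verify.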