Accurate Image Registration Using SIFT for Extended-Field-of-View Sonography

Liya Kong, Qinghua Huang
School of Electronics and Information Engineering, South China University of Technology, Guangzhou, China
qhhuang@scut.edu.cn

Shuohe Zheng, Lianwen Jin
School of Electronics and Information Engineering, South China University of Technology, Guangzhou, China

Minhua Lu
Department of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China

Siping Chen
Department of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China

Abstract - Extended-field-of-view (EFOV) sonography has been proven to be capable of illustrating panoramic ultrasound images. In EFOV technology, accurate registration of the original images is the basis. Conventional registration methods based on block matching cannot provide adequate accuracy when the relative rotations among the images are large. To register the images with improved accuracy, this paper proposes a novel methodological framework based on the SIFT (Scale Invariant Feature Transform) and Levenberg-Marquardt (L-M) algorithms to obtain the transformation matrices. The SIFT is used to detect plenty of distinctive invariant features which can be used for reliable matching between images with overlapping regions. After obtaining a series of pairwise matching points, we use the L-M algorithm, a non-linear optimization, to obtain the translation and rotation parameters between two adjacent images. Using these parameters, any pair of adjacent images can be transformed into one coordinate system to generate a panorama. Preliminary experiments have been performed and the effectiveness of our approach is demonstrated.

Keywords - EFOV; sonography; image registration; SIFT; Levenberg-Marquardt

I. INTRODUCTION

Medical ultrasound imaging has been widely used in medical diagnostics for decades. It incorporates real-time capability with low examination costs and very low impact on the human body. However, one obvious disadvantage of ultrasound imaging is its limited image field of view (FOV) due to the limited size of the probe. To overcome this drawback, extended-FOV (EFOV) ultrasound imaging was proposed to provide a panorama for inspecting larger anatomical objects [8]. The panoramic image can be generated in two steps: image registration and image fusion. Image registration is the kernel of the whole problem. The task of registration is to place overlapping images together by estimating the transformations between different images. Subsequently, an appropriate combination of the joint image information, a procedure referred to as image fusion, is performed.

Generally, the methods for image registration in connection with medical imaging fall into two camps [1]: intensity based and feature based. Intensity based methods attempt to iteratively estimate the transform parameters by minimizing an error function based on some similarity measure, e.g. mutual information [3], which is used for images of different modalities, and the L2-measure [4], which is often used for the same image modality. Feature based methods [2] make use of geometrical features in the images such as points, lines and surfaces to determine the transformation matrix, by identifying features such as sets of image points x_A and x_B that correspond to the same physical entity visible in both images. However, intensity based methods may not perform well on ultrasound data due to the low signal-to-noise ratio and the existence of shadows, speckles and other noise in ultrasound images.
Besides, since the ultrasound probe sometimes has to be rotated to obtain US images of curvilinear anatomy, such rotation may have a great effect on the feature matching procedure of traditional feature based methods. In contrast, the SIFT [5] is invariant to image scaling and rotation and partially invariant to changes in illumination. Moreover, it has been successfully applied to panorama creation [7]. Therefore, the advantages of the SIFT motivate us to incorporate it into US image registration. Further, we propose a methodological framework combining the SIFT and an optimization algorithm for EFOV sonography in this study.

The paper is organized as follows. The implementation details of the proposed method are presented in Section II. Section III shows and discusses the experimental results, where comparisons with a classic method designed by Weng and Tirumalai [8] are included. Finally, the conclusions and future work are given in Section IV.

(This work was partially supported by the National Natural Science Foundation of China (No. 60901015 and No. 60772147), the Natural Science Foundation of Guangdong Province (No. 9351064101000003) and the Natural Science Foundation of SCUT (No. x2dxE5090860).)

II. METHODS

During a real-time US scan, we obtain a sequence of US images by slowly moving the probe in the lateral direction, where adjacent images have large overlapping areas because of the high image sampling rate. Image features within the overlapping areas are very similar. To form an EFOV image using a sequence of successively captured 2D images, the transformation relating each pair of adjacent images should be obtained. The transformation between a fixed image and a moving image is often derived using image registration methods, in which the rotation and translations can be estimated when a part of the image content from the moving image is matched onto the fixed image. Using the estimated transformation parameters, the two images can be stitched to generate a panorama. In this study, we propose a methodological framework combining the SIFT and L-M algorithms for EFOV sonography.

A. Detection of pairwise feature points based on SIFT

In order to register two images, we need to detect identical "structures" between the two images. Given a structural region in the moving image, the SIFT algorithm [6] is conducted to accurately find the corresponding region in the fixed image, due to its invariance to image scaling and rotation and its robustness to changes in illumination and addition of noise. The SIFT finally outputs a number of features with correspondences between the two images. Fig. 1 gives a flowchart of the SIFT algorithm.

First of all, the SIFT detects local extrema in the difference of Gaussian (DoG) scale space. In detail, an input image I(x, y) is convolved with a Gaussian filter G(x, y, k) with different scales k, to obtain the scale spaces S(x, y, k), k = 1, ..., N. Subsequently, the DoG scale space can be obtained by

$D(x, y, k_i) = S(x, y, k_{i+1}) - S(x, y, k_i)$   (1)

where $k_i$ and $k_{i+1}$ are the Gaussian scales in neighboring scale spaces. A sample point is selected as a candidate keypoint only if it is smaller or larger than all of its 26 neighbors (8 neighbors at the current scale and 9 neighbors each at the scales below and above). Thereafter, a detailed fit based on a Taylor expansion [6] is performed to obtain a more accurate localization and scale of the keypoints and to reject candidates with low contrast or edge responses.
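As an illustration of the detection step just described, the following is a minimal Python sketch (not the authors' implementation) of DoG scale-space construction and the 26-neighbour extremum test. The function name dog_extrema and the parameter values (initial scale, scale factor, number of scales, contrast threshold) are illustrative assumptions, and the Taylor-expansion refinement and edge-response rejection are omitted.

```python
# Hedged sketch: builds the scale spaces S(x, y, k), the DoG space of Eq. (1),
# and keeps points that are extrema over their 26 neighbours.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(image, sigma0=1.6, k=2 ** 0.5, num_scales=5, contrast_thr=0.03):
    image = image.astype(np.float64) / 255.0
    # Scale space S(x, y, k): the image blurred at a set of increasing scales.
    scales = [sigma0 * k ** i for i in range(num_scales)]
    S = np.stack([gaussian_filter(image, s) for s in scales])
    # DoG space, Eq. (1): D(x, y, k_i) = S(x, y, k_{i+1}) - S(x, y, k_i).
    D = S[1:] - S[:-1]
    keypoints = []
    # Brute-force scan (slow, but keeps the 26-neighbour rule explicit).
    for i in range(1, D.shape[0] - 1):
        for y in range(1, D.shape[1] - 1):
            for x in range(1, D.shape[2] - 1):
                patch = D[i - 1:i + 2, y - 1:y + 2, x - 1:x + 2]
                v = D[i, y, x]
                # Candidate keypoint: larger or smaller than all 26 neighbours,
                # with low-contrast candidates rejected.
                if abs(v) > contrast_thr and (v == patch.max() or v == patch.min()):
                    keypoints.append((x, y, scales[i]))
    return keypoints
```

In practice an off-the-shelf SIFT implementation would be used for this step; the sketch only mirrors the extremum test described in the text.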
To obtain rotation invariance, each keypoint is first assigned a dominant orientation according to its local image gradients. The rotation invariance for a keypoint requires that the relative orientations of its neighboring regions are kept unchanged in both the moving and fixed images. This rotation-invariant local information is encoded in a keypoint descriptor. To generate a keypoint descriptor, the 4x4 (=16) neighboring sub-regions of the keypoint, each of which is a 4x4 image block, are taken into consideration. Each sub-region corresponds to an orientation histogram with 8 discrete orientations (bins) constructed using the gradients of all of its pixels. Each orientation histogram is adjusted relative to the dominant orientation of the keypoint. As a result, a feature vector for the keypoint descriptor with 4x4x8 = 128 dimensions is obtained.

The final step in the SIFT is to find the correspondences between the keypoints from the two images. In the feature space, a correspondence between a keypoint A from the moving image and a keypoint B from the fixed image can be established if B is the closest keypoint to A among all of the keypoints from the fixed image. If there are multiple closest keypoints in the fixed image, the correspondence for A cannot be established, and the next keypoint in the moving image is considered until all of the keypoints have been processed. The keypoints with established correspondences are treated as feature points in this paper. For more technical details, the readers are referred to reference [6].

Figure 1. Flowchart of the SIFT algorithm: scale-space extrema detection, keypoint localization, orientation assignment, and keypoint descriptor generation.

B. Registration

A correspondence means a connection between two feature points from different images. The pairwise feature points can be used to derive the spatial relationship between the two images. Assuming that the transformation between two adjacent images is rigid, we have the transformation equation

$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}$   (2)

where (x, y) and (x', y') are the feature points from the moving image and the fixed image, respectively, $\theta$ is the rotation parameter, and $t_x$, $t_y$ are the two translation parameters along the horizontal and vertical directions, respectively. To extract the three parameters corresponding to the three degrees of freedom from the pairwise feature points, we adopt the L-M algorithm, which has been successfully used for the calibration of a freehand 3D ultrasound system [9], [10]. It can be regarded as a combination of steepest descent and the Gauss-Newton method, and can reach optimal solutions from arbitrary initial values. Using the L-M algorithm, we can obtain accurate estimations for the three parameters.
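To make this registration step concrete, here is a minimal sketch of estimating the three parameters of Eq. (2) from matched feature points. It is not the authors' code: it uses SciPy's least_squares with method='lm' as the Levenberg-Marquardt solver, the function name estimate_rigid is ours, and the matched coordinates are assumed to be given as N-by-2 arrays of (x, y) points.

```python
# Hedged sketch: fit (theta, tx, ty) of Eq. (2) to matched point pairs with L-M.
import numpy as np
from scipy.optimize import least_squares

def estimate_rigid(moving_pts, fixed_pts):
    moving_pts = np.asarray(moving_pts, dtype=float)   # (N, 2) points from the moving image
    fixed_pts = np.asarray(fixed_pts, dtype=float)     # (N, 2) matched points from the fixed image

    def residuals(params):
        theta, tx, ty = params
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        # Eq. (2): map moving-image points into the fixed-image coordinates.
        mapped = moving_pts @ R.T + np.array([tx, ty])
        return (mapped - fixed_pts).ravel()

    # L-M combines steepest descent with Gauss-Newton; start from zero motion.
    result = least_squares(residuals, x0=[0.0, 0.0, 0.0], method="lm")
    theta, tx, ty = result.x
    return theta, tx, ty
```

Calling estimate_rigid on the pairwise feature points returned by the SIFT step yields the rotation and translations used for stitching.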
C. Stitching

Once the translations and rotation between two images are obtained, we can stitch them into a large EFOV image which combines the information of both original images. A blank image with a size that is able to contain a large number of original ultrasound images should be pre-defined. The fixed image is gradually expanded as it is registered with new moving images. Obviously, we focus only on the overlapping region when stitching a new moving image, as the non-overlapping part can simply be preserved. Taking into account the fact that the value of a pixel in the fixed image may already include information from multiple images due to multiple overlaps at the same location, we propose a new weighted average method that slightly increases the weight of the pixels in the fixed image. Specifically, we define the weight assignment for the pixels in the two images as follows:

$w_{fix}(i, j) = 0.5 + 0.07\,n, \qquad w_{mov}(i, j) = 1 - w_{fix}(i, j)$   (5)

where $w_{fix}$ and $w_{mov}$ are the weights assigned to the two pixels from the fixed and moving images, respectively, at the same location (i, j) in the coordinates of the fixed image, and n indicates the number of images that have contributed to computing the pixel (i, j) in the fixed image. The maximum of $w_{fix}$ is set to 0.75 in this study.
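The sketch below illustrates how this weighted average could be applied when blending a new frame into the growing panorama. It is not the authors' code: the function and parameter names are ours, the increment and the 0.75 cap follow Eq. (5) as reconstructed above, and the moving image is assumed to have already been warped into the fixed-image coordinates (the warping itself is not shown).

```python
# Hedged sketch: per-pixel weighted blending of a warped frame into the panorama.
import numpy as np

def blend_overlap(fixed, moving_warped, count, mask):
    # fixed         : float array, the growing EFOV (fixed) image
    # moving_warped : float array, moving image already transformed by Eq. (2)
    # count         : int array, number of images that contributed to each pixel (n)
    # mask          : bool array, True where the warped moving image has valid data
    w_fix = np.minimum(0.5 + 0.07 * count, 0.75)   # Eq. (5), capped at 0.75
    w_mov = 1.0 - w_fix

    overlap = mask & (count > 0)    # pixels covered by both images
    new = mask & (count == 0)       # pixels covered for the first time

    fixed[overlap] = (w_fix[overlap] * fixed[overlap]
                      + w_mov[overlap] * moving_warped[overlap])
    fixed[new] = moving_warped[new]  # non-overlapping part is simply preserved
    count[mask] += 1
    return fixed, count
```

Repeating this for each successive frame, warped with the parameters estimated in Section II-B, grows the fixed image into the final EFOV panorama.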
D. The proposed algorithmic framework for EFOV

The flowchart of the proposed methodological framework is given in Fig. 2. The differences from previously used methods lie in three aspects: the SIFT-based detection of pairwise feature points, the use of the iterative L-M algorithm for determining the transformation parameters, and the stitching procedure using a novel weighted average method. Based on this framework, a sequence of 2D ultrasound images can be accurately registered and stitched together to produce the EFOV sonography.

Figure 2. Flowchart of the proposed framework.

III. RESULTS AND DISCUSSIONS

In our experiments, we collected two sets of ultrasound data. One data set was captured when the probe was moved smoothly along the lateral direction, so the images have little relative rotation. The other data set was captured when the probe was moved on a subject's forearm along the latitudinal direction, so that there are relatively large rotations among the frames. Data set 1 contains 400 frames in total with a size of 248x336. Fig. 3 shows the two EFOV images computed by Weng's method [8] and our method. Data set 2 has 200 frames, and the image size is reduced to 168x152 to remove the artifacts. Fig. 4 demonstrates the EFOV images generated by Weng's method and ours. The rotation angle over the whole registration process reaches as high as 29 degrees. Running on a system equipped with a 2.8 GHz Pentium CPU and 1 GB RAM, it takes about 1.2 s to register and stitch two adjacent images in data set 2, which contains relatively large rotations.

Figure 3. EFOV images generated from data set 1 with little rotation among the frames. The result (a) is computed using our method, and (b) is computed using Weng's method.

Figure 4. Extended-FOV images generated from data set 2 with relatively large rotations. The result (a) is computed using our method, and (b) is computed using Weng's method.

As shown in Fig. 3, it can be observed that our method produces an EFOV image similar to that of Weng's method. Because there is little rotation in data set 1, both Weng's method and our method can detect the translations among the frames well, hence producing images of similar quality. However, as illustrated in Fig. 4, our method can correctly show the rotations in data set 2, whereas Weng's method is unable to form an EFOV image with significant rotations. This can be explained by the block registrations in Weng's method becoming inaccurate when the rotations among the image frames are relatively large. Hence, the advantage of our method in accurately registering the images is validated by the experiments.

IV. CONCLUSIONS

In this paper, we have proposed a methodological framework for ultrasound image registration and EFOV sonography. The framework combines the SIFT and L-M algorithms to significantly improve the registration accuracy. Experimental results have demonstrated that our method can produce EFOV images with high accuracy in comparison with a traditional method, especially when the rotations among the captured images are relatively large. Hence, the effectiveness and usefulness of our method are confirmed. However, due to the high complexity of the SIFT algorithm, our method is computationally expensive in practice. In our future work, we will optimize the proposed framework to speed up the computation using parallel computation techniques.

ACKNOWLEDGMENT

This work is supported by the research funding scheme titled as the sixth "Hundred Talent" program in South China University of Technology.

REFERENCES

[1] D. L. G. Hill, P. G. Batchelor, M. Holden, D. J. Hawkes, "Medical image registration," Physics in Medicine and Biology, vol. 46, no. 3, pp. 1-45, 2001.
[2] J. B. Maintz, P. A. van den Elsen, M. A. Viergever, "Comparison of feature-based matching of CT and MR brain images," in Proceedings of the First International Conference CVRMed, pp. 219-228, 1995.
[3] C. Studholme, D. L. G. Hill, D. J. Hawkes, "Automated 3D registration of MR and CT images of the head," Medical Image Analysis, vol. 1, no. 2, pp. 163-175, 1996.
[4] B. Fischer, J. Modersitzki, "Fast inversion of matrices arising in image processing," vol. 22, no. 1, pp. 1-11, 1999.
[5] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 1150-1157, 1999.
[6] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[7] M. Brown, D. G. Lowe, "Recognising panoramas," in Proceedings of the Ninth IEEE International Conference on Computer Vision, vol. 2, pp. 1218-1225, 2003.
[8] L. Weng, A. P. Tirumalai, "US extended-field-of-view imaging technology," Radiology, vol. 203, pp. 877-880, 1997.
[9] Q. H. Huang, Y. P. Zheng, M. H. Lu, T. F. Wang, S. P. Chen, "A new adaptive interpolation algorithm for 3D ultrasound imaging with speckle reduction and edge preservation," Computerized Medical Imaging and Graphics, vol. 33, no. 2, pp. 100-110, 2009.
[10] Q. H. Huang, Y. P. Zheng, "Volume reconstruction of freehand three-dimensional ultrasound using median filters," Ultrasonics, vol. 48, no. 3, pp. 182-192, 2008.