The Compressive Sensing Theory and Practice of the OMP Algorithm
2011-01-25

An Overview of Compressive Sensing

For a 1-D signal x ∈ R^(N×1), most of the information is redundant, so the signal can be compressed by an orthogonal transformation.
Coding: construct an orthogonal matrix H, transform y = Hx, and keep only the K most important components of y together with their positions.
Decoding: put the K components back at their positions, set every other position to zero, and apply the inverse transformation x* = H⁻¹y*.
Coding chain: signal x → sampling → transformation → y. Decoding chain: received data y → inverse transformation → reconstructed signal x*.

This classical scheme has several flaws:
1) Because of the Shannon sampling theorem, the sampling interval must be very narrow to obtain good signal resolution. This makes the original signal very long, so the transformation costs a great deal of time.
2) The positions of the K components that must be kept change from signal to signal. The strategy is therefore adaptive, and extra space is needed to store these positions.
3) Poor resistance to interference: if even one of the K components is lost in transmission, the output changes greatly.

In 2004, Donoho and Candès put forward the theory of compressive sensing. It states that when a signal is sparse or compressible, the signal can be reconstructed exactly or approximately from very few projections. The measured values are not the signal itself but its projection from a higher-dimensional space onto a lower-dimensional one.
Coding chain: sparse signal x → measurement (coding) → y. Decoding chain: received data y → decoding and reconstruction → reconstructed signal x*.

The advantages of compressive sensing:
1) It is non-adaptive and breaks through the limitation of the Shannon sampling theorem.
2) It has strong anti-interference ability: every component of the measurement vector is equally important (or equally unimportant), so the signal can still be reconstructed when some components are lost.
Its application prospects are broad: low-cost digital cameras and audio acquisition devices; astronomy (stars are sparse); networking; military applications.

Suppose x(n) is a digital signal. If it is K-sparse (has only K non-zero values) or compressible, it can be estimated from few coefficients by a linear transformation. Compressive sensing acquires a measurement vector y(m) with m ≪ n through y = Φx, where Φ is an m×n measurement matrix. If m is on the order of K·log(n) and Φ satisfies the restricted isometry property (RIP), x(n) can be reconstructed.

Definition of a norm: a real-valued function ‖x‖ of a vector x is a norm if
1) ‖x‖ ≥ 0, and ‖x‖ = 0 only if x = 0;
2) ‖ax‖ = |a|·‖x‖ for any scalar a;
3) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for any vectors x and y.
RIP: there exists δ_K ∈ (0, 1) such that (1 − δ_K)‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ_K)‖x‖₂² for every K-sparse vector x.

Few natural signals are sparse by themselves. According to compressive sensing theory, a signal x can be sparsified by some invertible transformation Ψ, that is x = Ψs, so that y = Φx = ΦΨs. Baraniuk showed that an equivalent condition to the RIP is that the measurement matrix Φ be incoherent with the sparse basis Ψ, and it has been confirmed that a Gaussian random matrix Φ satisfies this condition well.

OMP Algorithm

Under certain conditions the combinatorial ℓ0-norm problem can be replaced by the ℓ1-norm problem
x* = arg min ‖x‖₁  subject to  y = Φx.
In practice the recovery problem is often solved with greedy iterative algorithms, and one of the most commonly used is the orthogonal matching pursuit (OMP) method. The main idea of OMP: at each iteration, choose the column of Φ that is most strongly correlated with the current residual, subtract its contribution from the measurement vector, and repeat until the number of iterations reaches the sparsity K.

Input: sensing matrix Φ, measurement vector y, sparsity level K.
Output: the K-sparse approximation x* of x.
Initialization: residual r₀ = y, index set Λ₀ = ∅, t = 1.
Execute Steps 1 to 5 in a loop:
Step 1: find the column φ_j of Φ whose inner product with the residual r_{t−1} has the largest magnitude; denote its index by λ_t.
Step 2: update the index set Λ_t = Λ_{t−1} ∪ {λ_t} and the matrix of chosen columns Φ_t = [Φ_{t−1}, φ_{λ_t}].
Step 3: solve x*_t = arg min ‖y − Φ_t x*‖₂ by the least-squares method.
Step 4: update the residual r_t = y − Φ_t x*_t and set t = t + 1.
Step 5: if t > K, stop the iteration; otherwise return to Step 1.
A MATLAB sketch of these steps is given below.
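The step list above maps almost line for line onto MATLAB. Below is a minimal sketch written for illustration; the function name omp, the argument order, and the way the recovered coefficients are placed back into a length-n vector are choices made here, not taken from the slides.

```matlab
function [x_hat, support] = omp(A, y, K)
% OMP  Greedy recovery of a K-sparse vector from measurements y = A*x.
%   A : m-by-n sensing matrix (Phi, or Phi*Psi when x = Psi*s is used)
%   y : m-by-1 measurement vector
%   K : sparsity level, i.e. the number of iterations
n = size(A, 2);
r = y;                                   % initialization: residual r0 = y
support = zeros(K, 1);                   % index set, one entry added per pass
for t = 1:K
    [~, lambda] = max(abs(A' * r));      % Step 1: column most correlated with r
    support(t) = lambda;                 % Step 2: enlarge the index set
    coeff = A(:, support(1:t)) \ y;      % Step 3: least-squares fit on chosen columns
    r = y - A(:, support(1:t)) * coeff;  % Step 4: update the residual
end                                      % Step 5: stop after K iterations
x_hat = zeros(n, 1);                     % scatter the coefficients back
x_hat(support) = coeff;
end
```

The backslash operator carries out the least-squares solve of Step 3; because the residual is made orthogonal to all previously chosen columns at every iteration, the same column is essentially never selected twice.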
Simulation Results of OMP

OMP was programmed in MATLAB and first tested on the 1-D signal x(t) = 0.3·sin(2π·50t) + 0.6·sin(2π·100t); a rough driver script for this experiment is sketched at the end of these notes.

For images, which are 2-D signals, a discrete wavelet transform is applied first, turning the image into sparse coefficients on the corresponding basis (here the Haar basis). OMP is then run on each column of the coefficient matrix, the inverse wavelet transform is applied to the OMP results, and the reconstructed image is obtained.

Two images are used in the simulation: a remote sensing image and Lena, both of size 512×512 pixels. For the remote sensing image, reconstructions are shown at several sampling rates, together with the PSNR of each reconstruction at each sampling rate; the same experiment is repeated for the Lena image.

To decrease the running time, the images are also divided into small blocks, and the running time is plotted against the number of blocks, again for both the remote sensing image and the Lena image.

Thank you.
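As referenced above, here is a rough MATLAB driver for the 1-D experiment. Only the two-tone test signal comes from the slides; the sampling frequency (800 Hz), the signal length (256 samples), the number of measurements (64), the Fourier sparsifying basis, and the matrix normalizations are assumptions made so that the sketch is self-contained.

```matlab
% Driver for the 1-D OMP experiment (sketch; parameter values are assumptions).
rng(0);                          % fix the random seed for reproducibility
N  = 256;                        % signal length (assumed)
fs = 800;                        % sampling rate in Hz (assumed; 50 Hz and 100 Hz
                                 % then fall exactly on DFT bins, so s is 4-sparse)
M  = 64;                         % number of compressive measurements (assumed)
K  = 4;                          % sparsity: the two tones occupy 4 DFT bins

t = (0:N-1) / fs;
x = 0.3*sin(2*pi*50*t) + 0.6*sin(2*pi*100*t);   % test signal from the slides
x = x(:);

F   = fft(eye(N)) / sqrt(N);     % unitary DFT matrix; s = F*x is the sparse vector
Psi = F';                        % sparsifying basis, x = Psi*s
Phi = randn(M, N) / sqrt(M);     % Gaussian measurement matrix
y   = Phi * x;                   % compressive measurements

A     = Phi * Psi;               % y = Phi*Psi*s, so OMP works on the coefficients s
s_hat = omp(A, y, K);            % omp() is the sketch given after the algorithm steps
x_hat = real(Psi * s_hat);       % back to the time domain

fprintf('relative reconstruction error: %.2e\n', norm(x - x_hat) / norm(x));
plot(t, x, 'b-', t, x_hat, 'r--');
legend('original signal', 'OMP reconstruction');
xlabel('t (s)');
```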