2011-01-25
The Compressive Sensing Theory and Practice of the OMP Algorithm

An Overview of Compressive Sensing

For a 1-D signal x ∈ R^N, most of the information is redundant, so the signal can be compressed by an orthogonal transformation.

Coding: construct an orthogonal matrix Ψ, apply the transformation y = Ψx, and keep only the K most important components of y together with their positions.
Decoding: put the K components back into their positions, set every other position to zero to obtain y*, and apply the inverse transformation x* = Ψ^T y*.

Coding: signal x → sampling → transformation → y
Decoding: received data y* → inverse transformation → reconstructed signal x*

But this method has some flaws:
1) By the Shannon sampling theorem, the sampling interval must be very narrow to obtain good signal resolution. This makes the original signal very long, so the transformation costs a great deal of time.
2) The positions of the K components to be kept change as the signal changes, so the strategy is adaptive, and extra space must be allocated to store these positions.
3) Poor anti-interference: if even one of the K components is lost in transmission, the output changes greatly.

In 2004, Donoho and Candès put forward the theory of compressive sensing. This theory shows that when a signal is sparse or compressible, it can be reconstructed accurately (or approximately) from very few projections of the signal. The measured values are not the signal itself but its projection from a higher-dimensional space onto a lower-dimensional one.

Coding: sparse signal x → measurement (coding) → y
Decoding: received signal y → decoding (reconstruction) → constructed signal x*

The advantages of compressive sensing:
1) Non-adaptive; it breaks through the limitation of the Shannon sampling theorem.
2) Strong anti-interference ability: every component of the measurement is equally important (or equally unimportant), so losing a few of them is not catastrophic.
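The orthogonal-transform coding scheme described above (keep the K largest transform coefficients and their positions, zero the rest, invert) can be sketched numerically. This is a minimal illustration assuming an orthonormal DCT plays the role of Ψ; the helper names dct_matrix, compress and decompress are hypothetical, not from the slides.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix Psi, so that Psi @ Psi.T == I.
    k = np.arange(n)
    Psi = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    Psi[0, :] /= np.sqrt(2)
    return Psi * np.sqrt(2.0 / n)

def compress(x, K):
    # Coding: y = Psi x, keep the K largest components and their positions.
    Psi = dct_matrix(len(x))
    y = Psi @ x
    idx = np.argsort(np.abs(y))[-K:]
    return y[idx], idx

def decompress(vals, idx, n):
    # Decoding: restore the K components, zero elsewhere, invert: x* = Psi^T y*.
    y_star = np.zeros(n)
    y_star[idx] = vals
    return dct_matrix(n).T @ y_star

# A signal that is exactly K-sparse in the DCT basis is recovered exactly.
rng = np.random.default_rng(0)
n, K = 64, 8
Psi = dct_matrix(n)
s = np.zeros(n)
s[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x = Psi.T @ s
vals, idx = compress(x, K)
x_star = decompress(vals, idx, n)
print(np.max(np.abs(x - x_star)))  # numerically zero
```

For signals that are merely compressible rather than exactly sparse, the same code returns an approximation whose error is the energy of the discarded coefficients.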
Reconstruction is still possible even when some components of the measurement are lost.

The application prospects of compressive sensing are broad: low-cost digital cameras and audio acquisition devices; astronomy (stars are sparse); networking; military uses.

Suppose x ∈ R^n is a digital signal. If it is K-sparse (i.e. it has K non-zero values) or compressible, then it can be estimated from a few coefficients by a linear transformation. Compressive sensing takes the measurement y ∈ R^m (m ≪ n) as y = Φx, where Φ is an m × n measurement matrix. If m ≥ K·log(n) and Φ satisfies the restricted isometry property (RIP), then x can be rebuilt.

The definition of a norm: for a vector x, a real-valued function ‖x‖ is called a norm of x if it satisfies:
1) ‖x‖ ≥ 0, with ‖x‖ = 0 only if x = 0;
2) for any scalar a, ‖ax‖ = |a|·‖x‖;
3) for any vectors x and y, ‖x + y‖ ≤ ‖x‖ + ‖y‖.

RIP: for some constant δ_K ∈ (0, 1),
(1 − δ_K)‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ_K)‖x‖₂²
for every K-sparse vector x.

But few natural signals are sparse in themselves. According to compressive sensing theory, a signal x can be made sparse by some invertible transformation Ψ, that is, x = Ψs, so that y = Φx = ΦΨs. Baraniuk showed that an equivalent condition to the RIP is that the measurement matrix Φ is incoherent with the sparse basis Ψ. It has been confirmed that when Φ is a Gaussian random matrix, this condition is well satisfied.

OMP Algorithm

In some circumstances we can replace the ℓ₀ norm with the ℓ₁ norm, that is,
x* = argmin ‖x‖₁  s.t.  y = Φx.
This problem can be solved by greedy iterative algorithms, one of the most commonly used being the orthogonal matching pursuit (OMP) method.
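The RIP inequality above can be illustrated numerically. Note this is only a spot check over random sparse vectors, not the uniform statement the RIP actually makes, and the dimensions below are my own illustrative assumptions: for a Gaussian matrix Φ scaled by 1/√m, the ratio ‖Φx‖₂²/‖x‖₂² stays close to 1 for K-sparse x.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, K = 256, 64, 8
# Gaussian measurement matrix, scaled so that E[||Phi x||^2] = ||x||^2.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

ratios = []
for _ in range(200):
    x = np.zeros(n)
    x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

# Both extremes stay near 1, i.e. the empirical delta_K is well below 1.
print(min(ratios), max(ratios))
```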
The main idea of the OMP algorithm: by a greedy iterative method, choose the column of Φ that is most strongly correlated with the current residual vector, subtract its contribution from the measurement vector, and repeat until the number of iterations reaches K.

Input: sensing matrix Φ, sampling vector y, sparsity level K.
Output: the K-sparse approximation x* of x.
Initialization: residual r₀ = y, index set Λ₀ = ∅, t = 1.

Execute steps 1 to 5 in a loop:
Step 1: find the column φ_j of the sensing matrix whose inner product with the residual r_{t−1} has the largest absolute value; denote its index by λ.
Step 2: update the index set Λ_t = Λ_{t−1} ∪ {λ} and the matrix of chosen columns Φ_t = [Φ_{t−1}, φ_λ].
Step 3: solve x*_t = argmin ‖y − Φ_t x‖₂ by the least-squares method.
Step 4: update the residual r_t = y − Φ_t x*_t and set t = t + 1.
Step 5: if t > K, stop the iteration; otherwise go to step 1.

Simulation Results of OMP

We implemented OMP in MATLAB and tested it on the 1-D signal x = 0.3 sin(2π·50t) + 0.6 sin(2π·100t).

For images, which are 2-D signals, we first apply a discrete wavelet transform, converting the image into sparse coefficients over the corresponding basis (here the Haar basis); we then run OMP on each column of the coefficient matrix, and finally apply the inverse wavelet transform to reconstruct the image.
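The slides implement OMP in MATLAB; the listing below is a hedged NumPy sketch of steps 1-5 above, tested on a synthetic K-sparse vector rather than the sinusoid from the slides, so that exact recovery is well defined. The function name omp and all dimensions are my assumptions; np.linalg.lstsq performs the least-squares step.

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal matching pursuit: K-sparse approximation of x from y = Phi x."""
    m, n = Phi.shape
    r = y.copy()                 # Initialization: r0 = y
    support = []                 # index set Lambda, initially empty
    x_t = np.zeros(0)
    for _ in range(K):
        # Step 1: column of Phi most correlated with the current residual.
        lam = int(np.argmax(np.abs(Phi.T @ r)))
        # Step 2: enlarge the index set and the matrix of chosen columns.
        support.append(lam)
        Phi_t = Phi[:, support]
        # Step 3: least-squares solution over the chosen columns.
        x_t, *_ = np.linalg.lstsq(Phi_t, y, rcond=None)
        # Step 4: update the residual.
        r = y - Phi_t @ x_t
    # Step 5 is the loop bound: stop after K iterations.
    x_star = np.zeros(n)
    x_star[support] = x_t
    return x_star

# Demo: recover an exactly K-sparse vector from m << n Gaussian measurements.
rng = np.random.default_rng(2)
n, m, K = 256, 128, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
pos = rng.choice(n, K, replace=False)
x[pos] = rng.choice([-1.0, 1.0], K) * (1.0 + rng.random(K))  # magnitudes in [1, 2]
y = Phi @ x
x_star = omp(Phi, y, K)
print(np.linalg.norm(x - x_star))  # near zero when recovery succeeds
```

Because the residual is kept orthogonal to all previously chosen columns (via the least-squares refit), no column is selected twice, which is what distinguishes OMP from plain matching pursuit.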