Reconstruction Algorithms for Compressive Sensing II



Graduate Institute of Electronics Engineering, NTU
Presenter: 黃乃珊
Advisor: Prof. 吳安宇
Date: 2014/04/08
ACCESS IC LAB
Schedule

- 19:30 @ EEII-225

Date           Content                                         Speaker   Lab & HW
3/11           Introduction to Compressive Sensing System      Nhuang
3/25           Reconstruction Algorithm                        Nhuang
4/8            Reconstruction Algorithm                        Nhuang    Lab1
4/15           Break; decide the direction of the final project topics
4/22           Sampling Algorithm                              Yumin     Lab2
4/29           Midterm Presentation (Tutorial, Survey)
5/6            Application: Single Pixel Camera                Yumin
5/13 ~ 6/10    Discussion of final reports
6/24           Final Presentation
Outline

- Reconstruction Algorithms for Compressive Sensing
  - Bayesian Compressive Sensing
  - Iterative Thresholding
  - Approximate Message Passing
- Implementation of Reconstruction Algorithms
- Lab1: OMP Simulation
- References
Recovery Algorithms for Compressive Sensing

- Linear Programming
  - Basis Pursuit (BP)
- Greedy Algorithm
  - Matching Pursuit
  - Orthogonal Matching Pursuit (OMP)
  - Stagewise Orthogonal Matching Pursuit (StOMP)
  - Compressive Sampling Matching Pursuit (CoSaMP)
  - Subspace Pursuit (SP)
- Iterative Thresholding
  - Iterative Hard Thresholding (IHT)
  - Iterative Soft Thresholding (IST)
- Bayesian Compressive Sensing (BCS)
- Approximate Message Passing (AMP)
Compressive Sensing in Mathematics

- Sampling matrices should satisfy the restricted isometry property (RIP)
  - e.g., random Gaussian matrices
- Reconstruction solves an underdetermined problem
  - Linear programming (basis pursuit): $\min_{x} \|x\|_1 \ \text{s.t.}\ \Phi x = y$, where $\|x\|_1 := \sum_i |x_i|$
  - Orthogonal Matching Pursuit (OMP): greedily approximates $\min_{x} \|x\|_0 \ \text{subject to}\ y = \Phi x$

[Figure: sampling chain: $x_N$ → Sampling ($y_M = \Phi_{M \times N}\, x_N$) → Channel ($y_M$ + noise) → Reconstruction → $\hat{x}_N$]
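To make the sampling model concrete, here is a minimal MATLAB sketch (not from the slides; the dimensions N, M, K are arbitrary illustrative values) that builds a K-sparse signal, a random Gaussian sampling matrix, and the measurement y = Φx:

    % Minimal sketch of the CS sampling model y = Phi * x (illustrative sizes).
    N = 128;                          % signal length
    M = 32;                           % number of measurements (M << N)
    K = 5;                            % sparsity (number of nonzero entries)

    x = zeros(N, 1);                  % K-sparse signal
    x(randperm(N, K)) = randn(K, 1);  % random support and values

    Phi = randn(M, N) / sqrt(M);      % random Gaussian sampling matrix
    y   = Phi * x;                    % compressed measurement (noiseless)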
Compressive Sensing in Linear Algebra

- Reconstruction is composed of two parts (a MATLAB sketch of both steps follows the figure):
  - Localize the nonzero terms: correlate the measurement with the columns of the basis
  - Approximate the nonzero values: solve a least-squares problem, i.e., a projection via the pseudo-inverse

[Figure: measurement vector = basis matrix × coefficient vector (the sparse input)]
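A minimal MATLAB sketch of the two steps (Phi and y as in the sampling sketch above; a single support index is picked here just to illustrate the pattern):

    % Step 1: correlation -- locate a likely nonzero position.
    [~, j] = max(abs(Phi' * y));        % column most correlated with y

    % Step 2: least squares -- estimate the values on the chosen support.
    supp        = j;                    % in practice the support grows iteratively
    x_hat       = zeros(size(Phi, 2), 1);
    x_hat(supp) = Phi(:, supp) \ y;     % projection (pseudo-inverse) onto the support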
Orthogonal Matching Pursuit (OMP) [3]

- A greedy algorithm that iteratively recovers the sparse signal from $y_M = \Phi_{M \times N}\, x_N$
- Procedure (a MATLAB sketch follows this list):
  1. Initialize the residual and an empty support set
  2. Find the column of Φ most correlated with the residual
  3. Add that column to the support set (one column per iteration)
  4. Solve the least-squares problem on the current support
  5. Update the estimate and the residual
  6. Go back to step 2, or stop and output

[Figure: OMP processing flow, from [14]]
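A compact MATLAB sketch of this procedure (a plain textbook-style OMP stopping after K iterations, not the optimized implementations of [11]-[14]):

    function x_hat = omp_sketch(Phi, y, K)
    % Recover a K-sparse x from y = Phi*x by orthogonal matching pursuit (sketch).
    [~, N] = size(Phi);
    r      = y;                              % 1. initialize residual
    supp   = [];                             %    and an empty support set
    for it = 1:K
        [~, j] = max(abs(Phi' * r));         % 2. most correlated column
        supp   = union(supp, j);             % 3. add it to the support
        xs     = Phi(:, supp) \ y;           % 4. least squares on the support
        r      = y - Phi(:, supp) * xs;      % 5. update the residual
    end                                      % 6. stop after K iterations
    x_hat       = zeros(N, 1);
    x_hat(supp) = xs;
    end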
Iterative Thresholding [4]

- Iterative hard thresholding (IHT):
  $x^{t} = \mathbb{H}_S\left(x^{t-1} + \Phi^T (y - \Phi x^{t-1})\right)$
  where $\mathbb{H}_S(\cdot)$ keeps the $S$ largest-magnitude elements and sets the rest to zero
- Iterative soft thresholding (IST) [2]:
  $x^{t+1} = \eta_t(A^* z^t + x^t)$, $\quad z^t = y - A x^t$
  with the soft-threshold function
  $\eta(x;\tau) = \begin{cases} x+\tau, & x < -\tau \\ 0, & -\tau \le x \le \tau \\ x-\tau, & x > \tau \end{cases}$
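A minimal MATLAB sketch of one IST iteration with the soft-threshold function above (A, y, the current iterate x_t, and the threshold tau are assumed to be given):

    % Soft threshold eta(x; tau), applied elementwise.
    eta = @(x, tau) sign(x) .* max(abs(x) - tau, 0);

    % One IST iteration.
    z_t = y - A * x_t;                % residual
    x_t = eta(A' * z_t + x_t, tau);   % threshold the back-projected update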
Compressive Sensing: From Mathematics to Engineering

- The Fourier transform was invented in 1812 and published in 1822, but it did not begin to change the world until the FFT was developed in 1965.
- Hardware design is limited by the algorithm
- An engineering perspective can make compressive sensing more powerful in practical applications
Message Passing

- Messages pass from sender to receiver
  - Reliable transfer, delivered in order
- Belief propagation (BP)
  - Sum-product message passing
  - Calculates distributions for unobserved nodes on a graph
  - e.g., low-density parity-check (LDPC) codes, turbo codes
- Approximate message passing (AMP) [8][9][10]
Approximate Message Passing (AMP)

- Iterative soft thresholding (IST):
  $x^{t+1} = \eta_t(A^* z^t + x^t)$, $\quad z^t = y - A x^t$
  with $\eta(x;\tau) = \begin{cases} x+\tau, & x < -\tau \\ 0, & -\tau \le x \le \tau \\ x-\tau, & x > \tau \end{cases}$
- Approximate message passing (AMP) [8][9][10]:
  $x^{t+1} = \eta_t(A^* z^t + x^t)$, $\quad z^t = y - A x^t + \frac{1}{\delta}\, z^{t-1} \left\langle \eta'_{t-1}(A^* z^{t-1} + x^{t-1}) \right\rangle$
  where $\delta = M/N$ and $\langle\cdot\rangle$ denotes the average over the vector's components
  - The Onsager reaction term cancels the self-feedback effects
  - Approximates the sum-product messages for basis pursuit
  - Fast with good performance, but not suited to every random input
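A minimal MATLAB sketch of AMP with soft thresholding, following the iteration in [8] (delta = M/N; for soft thresholding the average of eta' is the fraction of entries above threshold; the fixed threshold tau and iteration count T are simplifying assumptions):

    % AMP with a fixed soft threshold (sketch; A is M-by-N, y given).
    [M, N] = size(A);
    delta  = M / N;
    eta    = @(u, tau) sign(u) .* max(abs(u) - tau, 0);

    x = zeros(N, 1);
    z = y;
    for t = 1:T
        u = A' * z + x;                                   % pseudo-data
        x = eta(u, tau);                                  % thresholded estimate
        z = y - A * x + (mean(abs(u) > tau) / delta) * z; % residual + Onsager term
    end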
Relevance Vector Machine (RVM)

- Uses Bayesian inference for regression and probabilistic classification
- Support Vector Machine (SVM)
  - Classification and regression analysis
- RVM is faster than SVM, but at risk of converging to local minima
Bayesian Compressive Sensing [5][6][7]

- Considers CS from a Bayesian perspective
  - Provides a full posterior density function
- Adopts the relevance vector machine (RVM)
  - Solves the maximum a posteriori (MAP) problem efficiently (the standard formulation is sketched below)
- Adaptive compressive sensing
  - Adaptively selects projections with the goal of reducing uncertainty
- Bayesian compressive sensing via belief propagation
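As a sketch of the MAP formulation behind these bullets (assuming Gaussian measurement noise and a Laplace sparsity prior; this is the generic setup, not a derivation copied from [5]):

    y = \Phi x + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I)
    \hat{x}_{\mathrm{MAP}} = \arg\max_{x} p(x \mid y) = \arg\max_{x} p(y \mid x)\, p(x)
    p(x) \propto e^{-\lambda \|x\|_1}
        \;\Rightarrow\;
    \hat{x}_{\mathrm{MAP}} = \arg\min_{x} \tfrac{1}{2\sigma^2}\|y - \Phi x\|_2^2 + \lambda \|x\|_1

With a Laplace prior, MAP estimation reduces to l1-regularized least squares; [5] instead uses a hierarchical RVM-style prior so that a full posterior (with error bars), and not just its mode, is obtained.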
Compressive Sensing in Engineering

A. Message Passing
  - Sum-product message passing
  - e.g., low-density parity-check (LDPC) codes
B. Bayesian Model
  - Bayesian learning, a kind of machine learning
C. Adaptive Filtering Framework
  - Self-adjusts to optimize the desired signal
Outline

- Reconstruction Algorithms for Compressive Sensing
  - Bayesian Compressive Sensing
  - Iterative Thresholding
  - Approximate Message Passing
- Implementation of Reconstruction Algorithms
- Lab1: OMP Simulation
- References
Implementation of Reconstruction Algorithms

- Choose greedy pursuit rather than linear programming
  - Optimization (linear programming) is better in terms of accuracy, but its implementation is very complex and time-consuming
- Design issues
  - Matrix multiplication
  - Matrix inverse (the least-squares step; see the formulation sketched below)
- Related works
  - OMP: ASIC & FPGA
  - CoSaMP: FPGA
  - IHT: GPU
  - AMP: ASIC & FPGA

[Figure: processing flow in greedy pursuits: matrix multiplication → matrix inverse]
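The matrix-inverse block corresponds to the least-squares step on the selected support S; in its normal-equations form (standard linear algebra, not quoted from any of the cited papers):

    \hat{x}_S = \arg\min_{x_S} \|y - \Phi_S x_S\|_2^2 = (\Phi_S^{T}\Phi_S)^{-1}\Phi_S^{T} y

The next two slides replace this explicit inverse with a Cholesky or QR factorization, which avoids computing an inverse in hardware.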
OMP with Cholesky Decomposition

- [11] is the earliest hardware implementation
- The square-root-free form of Cholesky decomposition (LDLᵀ) avoids square-root calculations
- Bottleneck
  - Kernel 1: 655/1645 cycles
  - Kernel 2 (matrix inversion): 769/1645 cycles

                           (N, M, K)      SQNR    Max Freq.   Latency
    OMP [11] (ISCAS, 2010) (128, 32, 5)   X       39 MHz      24 us
    OMP [13] (ISSPA, 2012) (128, 32, 5)   47 dB   107 MHz     16 us
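A minimal MATLAB sketch of the square-root-free least-squares step (illustrative only; it mirrors the idea behind [11] at a high level, using MATLAB's ldl rather than the paper's fixed-point kernels):

    % Least-squares on support 'supp' via an LDL' (square-root-free Cholesky)
    % factorization of the small Gram matrix (Phi, y, supp assumed given).
    A      = Phi(:, supp);
    G      = A' * A;                 % K-by-K Gram matrix
    b      = A' * y;
    [L, D] = ldl(G);                 % G = L*D*L', no square roots required
    xs     = L' \ (D \ (L \ b));     % forward / diagonal / back substitution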
OMP with QR Decomposition

- Cholesky-based solvers increase latency as the problem dimension grows
- QRD-RLS and a fast inverse square-root algorithm are used in [14]
- Columns with low coherence are removed by an empirical threshold to reduce computation time
- Tradeoff between MSE and reconstruction cycles

[Figures: reconstruction time and normalized MSE]
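A minimal MATLAB sketch of the QR-based least-squares step (the basic batch form only; [14] uses a hardware-oriented QRD-RLS update rather than this economy-size QR):

    % Least-squares on support 'supp' via economy-size QR (Phi, y, supp given).
    A      = Phi(:, supp);
    [Q, R] = qr(A, 0);               % A = Q*R with R upper triangular
    xs     = R \ (Q' * y);           % back substitution; no Gram matrix, no inverse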
Outline

- Reconstruction Algorithms for Compressive Sensing
  - Bayesian Compressive Sensing
  - Iterative Thresholding
  - Approximate Message Passing
- Implementation of Reconstruction Algorithms
- Lab1: OMP Simulation
- References
OMP Simulation

- Please design SolveOMP.m
- Test the recovery performance of OMP with different measurement sizes and different sparsity levels (a possible test harness is sketched below)
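A possible MATLAB test harness for this lab (a sketch only; the interface x_hat = SolveOMP(Phi, y, K) is an assumption and should be adapted to however the function is actually defined):

    % Sweep the number of measurements M and the sparsity K, and record how
    % often OMP recovers the signal exactly (SolveOMP interface assumed).
    N      = 128;
    trials = 100;
    for K = [4 8 16]
        for M = 16:16:96
            success = 0;
            for t = 1:trials
                x                 = zeros(N, 1);
                x(randperm(N, K)) = randn(K, 1);        % random K-sparse signal
                Phi               = randn(M, N) / sqrt(M);
                y                 = Phi * x;
                x_hat             = SolveOMP(Phi, y, K);
                success           = success + (norm(x - x_hat) < 1e-4 * norm(x));
            end
            fprintf('K = %2d, M = %2d: recovery rate = %.2f\n', K, M, success / trials);
        end
    end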
References
[1] E. J. Candes, and M. B. Wakin, "An Introduction To Compressive Sampling," Signal Processing
Magazine, IEEE , vol.25, no.2, pp.21-30, March 2008
[2] G. Pope, “Compressive Sensing – A Summary of Reconstruction Algorithm”, Swiss Federal Institute
of Technology Zurich
[3] J. A. Tropp, A. C. Gilbert, “Signal Recovery from Random Measurements via Orthogonal Matching
Pursuit,” IEEE Transactions on Information Theory, vol.53, no.12, pp. 4655-4666, Dec. 2007
[4] T. Blumensath, and M. E. Davies, "Iterative hard thresholding for compressed sensing." Applied and
Computational Harmonic Analysis 27.3 (2009): 265-274.
[5] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process., vol. 56, no. 6,
pp. 2346–2356, Jun. 2008.
[6] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine." The Journal of Machine
Learning Research 1 (2001): 211-244.
[7] D. Baron, S. Sarvotham, and R. G. Baraniuk, "Bayesian compressive sensing via belief
propagation." Signal Processing, IEEE Transactions on 58.1 (2010): 269-280.
[8] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed
sensing." Proceedings of the National Academy of Sciences 106.45 (2009)
[9] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing: I.
motivation and construction." Information Theory Workshop (ITW), 2010 IEEE, Jan. 2010
[10] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing: II.
analysis and validation," Information Theory Workshop (ITW), 2010 IEEE , Jan. 2010
[11] A. Septimus, and R. Steinberg, "Compressive sampling hardware reconstruction," Circuits and
Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on , vol., no., pp.3316,3319, May
30 2010-June 2 2010
[12] Lin Bai, P. Maechler, M. Muehlberghuber, and H. Kaeslin, "High-speed compressed sensing
reconstruction on FPGA using OMP and AMP," Electronics, Circuits and Systems (ICECS), 2012 19th
IEEE International Conference on , vol., no., pp.53,56, 9-12 Dec. 2012
[13] P. Blache, H. Rabah, and A. Amira, "High level prototyping and FPGA implementation of the
orthogonal matching pursuit algorithm," Information Science, Signal Processing and their
Applications (ISSPA), 2012 11th International Conference on , vol., no., pp.1336,1340, 2-5 July 2012
[14] J.L.V.M. Stanislaus, and T. Mohsenin, "Low-complexity FPGA implementation of compressive
sensing reconstruction," Computing, Networking and Communications (ICNC), 2013 International
Conference on , vol., no., pp.671,675, 28-31 Jan. 2013