
Compressed Sensing : Theory and Applications.

By:
Contributor(s):
Publisher: Cambridge : Cambridge University Press, 2012
Copyright date: ©2012
Description: 1 online resource (558 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9781139339995
Subject(s):
Genre/Form:
Additional physical formats: Print version: Compressed Sensing : Theory and Applications
DDC classification:
  • 621.3822
LOC classification:
  • QA601 .C638 2012
Online resources:
Contents:
Cover -- Compressed Sensing -- Title -- Copyright -- Contents -- Contributors -- Preface
1: Introduction to compressed sensing -- 1.1 Introduction -- 1.2 Review of vector spaces -- 1.2.1 Normed vector spaces -- 1.2.2 Bases and frames -- 1.3 Low-dimensional signal models -- 1.3.1 Sparse models -- 1.3.2 Finite unions of subspaces -- 1.3.3 Unions of subspaces for analog signal models -- 1.3.4 Low-rank matrix models -- 1.3.5 Manifold and parametric models -- 1.4 Sensing matrices -- 1.4.1 Null space conditions -- 1.4.2 The restricted isometry property -- 1.4.3 Coherence -- 1.4.4 Sensing matrix constructions -- 1.5 Signal recovery via L1 minimization -- 1.5.1 Noise-free signal recovery -- 1.5.2 Signal recovery in noise -- 1.5.3 Instance-optimal guarantees revisited -- 1.5.4 The cross-polytope and phase transitions -- 1.6 Signal recovery algorithms -- 1.7 Multiple measurement vectors -- 1.8 Summary -- Acknowledgements -- A Appendix: proofs for Chapter 1 -- A.1 Proof of Theorem 1.4 -- A.2 Proof of Lemma 1.3 -- A.3 Proof of Lemma 1.6 -- A.4 Proof of Theorem 1.13 -- References
2: Second-generation sparse modeling: structured and collaborative signal analysis -- 2.1 Introduction -- 2.2 Image restoration and inverse problems -- 2.2.1 Traditional sparse modeling -- 2.2.2 Structured sparse modeling -- 2.2.3 Experimental results -- 2.3 Source identification and separation via structured and collaborative models -- 2.3.1 The Group Lasso -- 2.3.2 The Hierarchical Lasso -- 2.3.3 Collaborative Hierarchical Lasso -- 2.3.4 Experimental results -- 2.4 Concluding remarks -- Acknowledgements -- References
3: Xampling: compressed sensing of analog signals -- 3.1 Introduction -- 3.2 From subspaces to unions -- 3.3 Xampling -- 3.3.1 Union of subspaces -- 3.3.2 Architecture -- 3.4 Sparse shift-invariant framework -- 3.4.1 Sampling in shift-invariant subspaces -- 3.4.2 Sparse union of SI subspaces -- 3.4.3 Infinite measurement model and continuous to finite -- 3.5 From theory to hardware of multiband sampling -- 3.5.1 Signal model and sparse-SI formulation -- 3.5.2 Analog compressed sensing via nonuniform sampling -- 3.5.3 Modeling practical ADC devices -- 3.5.4 Modulated wideband converter -- 3.5.5 Hardware design -- 3.5.6 Sub-Nyquist signal processing -- 3.6 Finite rate of innovation signals -- 3.6.1 Analog signal model -- 3.6.2 Compressive signal acquisition -- 3.6.3 Recovery algorithms -- 3.7 Sequences of innovation signals -- 3.7.1 Analog signal model -- 3.7.2 Compressive signal acquisition -- 3.7.3 Recovery algorithms -- 3.7.4 Applications -- 3.8 Union modeling vs. finite discretization -- 3.8.1 Random demodulator -- 3.8.2 Finite-model sensitivity -- 3.8.3 Hardware complexity -- 3.8.4 Computational loads -- 3.8.5 Analog vs. discrete CS radar -- 3.9 Discussion -- 3.9.1 Extending CS to analog signals -- 3.9.2 Is CS a universal sampling scheme? -- 3.9.3 Concluding remarks -- Acknowledgements -- References
4: Sampling at the rate of innovation: theory and applications -- 4.1 Introduction -- 4.1.1 The sampling scheme -- 4.1.2 History of FRI -- 4.1.3 Chapter outline -- 4.1.4 Notation and conventions -- 4.2 Signals with finite rate of innovation -- 4.2.1 Definition of signals with FRI -- 4.2.2 Examples of FRI signals -- 4.3 Sampling and recovery of FRI signals in the noise-free setting -- 4.3.1 Sampling using the sinc kernel -- 4.3.2 Sampling using the sum of sincs kernel -- 4.3.3 Sampling using exponential reproducing kernels -- 4.3.4 Multichannel sampling -- 4.4 The effect of noise on FRI recovery -- 4.4.1 Performance bounds under continuous-time noise -- 4.4.2 Performance bounds under sampling noise -- 4.4.3 FRI techniques improving robustness to sampling noise -- 4.5 Simulations -- 4.5.1 Sampling and reconstruction in the noiseless setting -- 4.5.2 Sampling and reconstruction in the presence of noise -- 4.5.3 Periodic vs. semi-periodic FRI signals -- 4.6 Extensions and applications -- 4.6.1 Sampling piecewise sinusoidal signals -- 4.6.2 Signal compression -- 4.6.3 Super-resolution imaging -- 4.6.4 Ultrasound imaging -- 4.6.5 Multipath medium identification -- 4.6.6 Super-resolution radar -- 4.7 Acknowledgement -- Appendix to Chapter 4: Cramér-Rao bound derivations -- Cramér-Rao bounds for the sinc kernel -- Cramér-Rao bounds for the SoS kernel -- Cramér-Rao bounds for B-splines -- Cramér-Rao bounds for E-splines -- References
5: Introduction to the non-asymptotic analysis of random matrices -- 5.1 Introduction -- 5.2 Preliminaries -- 5.2.1 Matrices and their singular values -- 5.2.2 Nets -- 5.2.3 Sub-gaussian random variables -- 5.2.4 Sub-exponential random variables -- 5.2.5 Isotropic random vectors -- 5.2.6 Sums of independent random matrices -- 5.3 Random matrices with independent entries -- 5.3.1 Limit laws and Gaussian matrices -- 5.3.2 General random matrices with independent entries -- 5.4 Random matrices with independent rows -- 5.4.1 Sub-gaussian rows -- 5.4.2 Heavy-tailed rows -- 5.4.3 Applications to estimating covariance matrices -- 5.4.4 Applications to random sub-matrices and sub-frames -- 5.5 Random matrices with independent columns -- 5.5.1 Sub-gaussian columns -- 5.5.2 Heavy-tailed columns -- 5.6 Restricted isometries -- 5.6.1 Sub-gaussian restricted isometries -- 5.6.2 Heavy-tailed restricted isometries -- 5.7 Notes -- References
6: Adaptive sensing for sparse recovery -- 6.1 Introduction -- 6.1.1 Denoising -- 6.1.2 Inverse problems -- 6.1.3 A Bayesian perspective -- 6.1.4 Structured sparsity -- 6.2 Bayesian adaptive sensing -- 6.2.1 Bayesian inference using a simple generative model -- 6.2.1.1 Single component generative model -- 6.2.1.2 Measurement adaptation -- 6.2.2 Bayesian inference using multi-component models -- 6.2.2.1 Multi-component generative model -- 6.2.2.2 Measurement adaptation -- 6.2.3 Quantifying performance -- 6.3 Quasi-Bayesian adaptive sensing -- 6.3.1 Denoising using non-adaptive measurements -- 6.3.2 Distilled sensing -- 6.3.2.1 Analysis of distilled sensing -- 6.3.3 Distillation in compressed sensing -- 6.4 Related work and suggestions for further reading -- References
7: Fundamental thresholds in compressed sensing: a high-dimensional geometry approach -- 7.1 Introduction -- 7.1.1 Threshold bounds for L1 minimization robustness -- 7.1.2 Weighted and iterative reweighted L1 minimization thresholds -- 7.1.3 Comparisons with other threshold bounds -- 7.1.4 Some concepts in high-dimensional geometry -- 7.1.5 Organization -- 7.2 The null space characterization -- 7.3 The Grassmann angle framework for the null space characterization -- 7.4 Evaluating the threshold bound ζ -- 7.5 Computing the internal angle exponent -- 7.6 Computing the external angle exponent -- 7.7 Existence and scaling of ρN(δ,C) -- 7.8 "Weak," "sectional," and "strong" robustness -- 7.9 Numerical computations on the bounds of ζ -- 7.10 Recovery thresholds for weighted L1 minimization -- 7.11 Approximate support recovery and iterative reweighted L1 -- 7.12 Conclusion -- 7.13 Appendix -- 7.13.1 Derivation of the internal angles -- 7.13.2 Derivation of the external angles -- 7.13.3 Proof of Lemma 7.7 -- 7.13.4 Proof of Lemma 7.8 -- Acknowledgement -- References
8: Greedy algorithms for compressed sensing -- 8.1 Greed, a flexible alternative to convexification -- 8.2 Greedy pursuits -- 8.2.1 General framework -- 8.2.2 Variations in coefficient updates -- 8.2.3 Variations in element selection -- 8.2.4 Computational considerations -- 8.2.5 Performance guarantees -- 8.2.6 Empirical comparisons -- 8.3 Thresholding type algorithms -- 8.3.1 Iterative Hard Thresholding -- 8.3.2 Compressive Sampling Matching Pursuit and Subspace Pursuit -- 8.3.3 Empirical comparison -- 8.3.4 Recovery proof -- 8.4 Generalizations of greedy algorithms to structured models -- 8.4.1 The union of subspaces model -- 8.4.2 Sampling and reconstructing union of subspaces signals -- 8.4.3 Performance guarantees -- 8.4.4 When do the recovery conditions hold? -- 8.4.5 Empirical comparison -- 8.4.6 Rank structure in the MMV problem -- 8.5 Conclusions -- Acknowledgement -- References
9: Graphical models concepts in compressed sensing -- 9.1 Introduction -- 9.1.1 Some useful notation -- 9.2 The basic model and its graph structure -- 9.3 Revisiting the scalar case -- 9.4 Inference via message passing -- 9.4.1 The min-sum algorithm -- 9.4.2 Simplifying min-sum by quadratic approximation -- 9.5 Approximate message passing -- 9.5.1 The AMP algorithm, some of its properties, … -- 9.5.2 … and its derivation -- 9.6 High-dimensional analysis -- 9.6.1 Some numerical experiments with AMP -- 9.6.2 State evolution -- 9.6.3 The risk of the LASSO -- 9.6.4 A decoupling principle -- 9.6.5 A heuristic derivation of state evolution -- 9.6.6 The noise sensitivity phase transition -- 9.6.7 On universality -- 9.6.8 Comparison with other analysis approaches -- 9.7 Generalizations -- 9.7.1 Structured priors… -- 9.7.2 Sparse sensing matrices -- 9.7.3 Matrix completion -- 9.7.4 General regressions -- Acknowledgements -- References
10: Finding needles in compressed haystacks -- 10.1 Introduction -- 10.2 Background and notation -- 10.2.1 Notation -- 10.2.2 Concentration inequalities -- 10.2.3 Group theory -- 10.3 Support Vector Machines -- 10.4 Near-isometric projections -- 10.5 Proof of Theorem 10.3 -- 10.6 Distance-Preserving via Johnson-Lindenstrauss Property.
Summary: A detailed presentation of compressed sensing by leading researchers, covering the most significant theoretical and application-oriented advances.
Holdings
Item type   Current library      Call number   Status      Date due   Barcode         Item holds
Ebrary      Ebrary Afghanistan                 Available              EBKAF00061350
Ebrary      Ebrary Algeria                     Available
Ebrary      Ebrary Cyprus                      Available
Ebrary      Ebrary Egypt                       Available
Ebrary      Ebrary Libya                       Available
Ebrary      Ebrary Morocco                     Available
Ebrary      Ebrary Nepal                       Available              EBKNP00061350
Ebrary      Ebrary Sudan                       Available
Ebrary      Ebrary Tunisia                     Available
Total holds: 0


Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2019. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
