Results 1–10 of 41
Phase Retrieval via Wirtinger Flow: Theory and Algorithms
, 2014
Abstract

Cited by 24 (4 self)
We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal x ∈ ℂⁿ about which we have phaseless samples of the form y_r = |⟨a_r, x⟩|², r = 1, …, m (knowledge of the phase of these samples would yield a linear system). This paper develops a non-convex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate, so that the proposed scheme is efficient both in terms of computational and data resources. In theory, a variation on this scheme leads to a near-linear-time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of non-convex optimization schemes that may have implications for computational problems beyond phase retrieval.
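The two-stage scheme described in the abstract (spectral initialization, then gradient-style refinement) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code; the constant step size `mu` simplifies the paper's step-size schedule, and the function name is ours.

```python
import numpy as np

def wirtinger_flow(A, y, n_iter=2000, mu=0.1):
    """Illustrative Wirtinger Flow sketch.

    A : (m, n) complex matrix whose r-th row plays the role of a_r^*
    y : (m,) phaseless measurements y_r = |<a_r, x>|^2
    """
    m, n = A.shape
    # Spectral initialization: leading eigenvector of (1/m) sum_r y_r a_r a_r^*
    Y = (A.conj().T * y) @ A / m
    _, V = np.linalg.eigh(Y)
    z = V[:, -1] * np.sqrt(np.mean(y))   # scale to the measured energy
    # Gradient-style refinement of the non-convex least-squares objective
    for _ in range(n_iter):
        Az = A @ z
        grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m
        z = z - (mu / np.linalg.norm(z) ** 2) * grad
    return z
```

As in the paper, recovery is only possible up to a global phase factor, so success is measured by the correlation |⟨z, x⟩| / (‖z‖‖x‖) rather than by z − x directly.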
Phase retrieval using alternating minimization
 In NIPS
, 2013
Abstract

Cited by 24 (1 self)
Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. Over the last two decades, a popular generic empirical approach to the many variants of this problem has been one of alternating minimization; i.e., alternating between estimating the missing phase information and the candidate solution. In this paper, we show that a simple alternating minimization algorithm geometrically converges to the solution of one such problem – finding a vector x from y, A, where y = |Aᵀx| and |z| denotes a vector of element-wise magnitudes of z – under the assumption that A is Gaussian. Empirically, our algorithm performs similarly to recently proposed convex techniques for this variant (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, our algorithm is much more efficient and can scale to large problems. Analytically, we show geometric convergence to the solution, and a sample complexity that is off by log factors from obvious lower bounds. We also establish close-to-optimal scaling for the case when the unknown vector is sparse. Our work represents the only known theoretical guarantee for alternating minimization for any variant of phase retrieval problems in the non-convex setting.
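The alternating scheme is simple enough to sketch in a few lines. This is an illustrative sketch, not the authors' implementation: it is written with measurements |Ax| rather than the abstract's |Aᵀx|, and uses a spectral-style initialization similar in spirit to the paper's.

```python
import numpy as np

def altmin_phase(A, b, n_iter=50):
    """Illustrative alternating minimization for phase retrieval.

    A : (m, n) complex measurement matrix
    b : (m,) magnitudes |A x|
    Alternates a phase-estimation step with a least-squares step.
    """
    m, n = A.shape
    # Spectral-style initialization
    Y = (A.conj().T * b ** 2) @ A / m
    _, V = np.linalg.eigh(Y)
    x = V[:, -1]
    for _ in range(n_iter):
        phase = np.exp(1j * np.angle(A @ x))              # estimate missing phases
        x = np.linalg.lstsq(A, b * phase, rcond=None)[0]  # solve for the candidate x
    return x
```

Each iteration is just one angle computation and one least-squares solve, which is why the method scales to large problems; as with any phase retrieval method, the solution is defined only up to a global phase.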
Square deal: Lower bounds and improved relaxations for tensor recovery
 CoRR
Abstract

Cited by 22 (0 self)
Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rn^(K−1)) observations. In contrast, a certain (intractable) non-convex formulation needs only O(r^K + nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(r^⌊K/2⌋ n^⌈K/2⌉) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. simultaneously sparse and low-rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach of minimizing the sum of individual sparsity-inducing norms (e.g. the ℓ1 norm and the nuclear norm). Our new formulation for low-rank tensor recovery, however, opens the possibility of reducing the sample complexity by exploiting several structures jointly.
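The sum-of-nuclear-norms relaxation analyzed above is easy to state in code; a minimal sketch (function names are ours; `unfold` is the standard mode-k matricization):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: the mode-k fibers of T become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sum_of_nuclear_norms(T):
    """The relaxation the paper shows to be suboptimal: the sum over all
    K modes of the nuclear norm of the mode-k unfolding."""
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))
```

For a rank-one tensor built from unit vectors, every unfolding has a single singular value equal to 1, so the penalty evaluates to K; this makes the norm easy to sanity-check.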
Phase Retrieval from Coded Diffraction Patterns
, 2013
Abstract

Cited by 21 (5 self)
This paper considers the question of recovering the phase of an object from intensity-only measurements, a problem which naturally appears in X-ray crystallography and related disciplines. We study a physically realistic setup where one can modulate the signal of interest and then collect the intensity of its diffraction pattern, each modulation thereby producing a sort of coded diffraction pattern. We show that PhaseLift, a recent convex programming technique, recovers the phase information exactly from a number of random modulations which is polylogarithmic in the number of unknowns. Numerical experiments with noiseless and noisy data complement our theoretical analysis and illustrate our approach.
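The coded diffraction measurement model itself is simple to simulate: modulate the signal with a random pattern, take a DFT, and discard the phases. An illustrative sketch (uniform quaternary phases stand in for the paper's modulation patterns; the function name is ours):

```python
import numpy as np

def coded_diffraction_measurements(x, n_patterns, rng):
    """For each random modulation pattern d, record the squared magnitudes
    of the DFT of the modulated signal d * x (the phases are lost)."""
    patterns = rng.choice(np.array([1, -1, 1j, -1j]), size=(n_patterns, x.size))
    return patterns, [np.abs(np.fft.fft(d * x)) ** 2 for d in patterns]
```

Since each modulation has unit modulus, Parseval's identity forces every measurement vector to sum to n‖x‖², a useful check that no energy is lost in the model.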
The squarederror of generalized LASSO: A precise analysis
 In 51st Annual Allerton Conference on Communication, Control, and Computing, Allerton Park & Retreat
Abstract

Cited by 14 (6 self)
We consider the problem of estimating an unknown signal x0 from noisy linear observations y = Ax0 + z ∈ ℝᵐ. In many practical instances of this problem, x0 has a certain structure that can be captured by a structure-inducing function f(·). For example, the ℓ1 norm can be used to encourage a sparse solution. To estimate x0 with the aid of a convex f(·), we consider three variations of the widely used LASSO estimator and provide sharp characterizations of their performance. Our study falls under a generic framework, where the entries of the measurement matrix A and the noise vector z have zero-mean normal distributions with variances 1 and σ², respectively. For the LASSO estimator x*, we ask: “What is the precise estimation error as a function of the noise level σ, the number of observations m, and the structure of the signal?” In particular, we attempt to calculate the Normalized Square Error (NSE), defined as ‖x* − x0‖²₂ / σ². We show that the structure of the signal x0 and the choice of the function f(·) enter the error formulae through the summary parameters D_f(x0, ℝ₊) and D_f(x0, λ), which are defined as the “Gaussian squared distances” to the subdifferential cone and to the λ-scaled subdifferential of f at x0, respectively. The first estimator assumes a priori knowledge of f(x0) and is given by arg min_x {‖y − Ax‖₂ subject to f(x) ≤ f(x0)}. We prove that its worst-case NSE is achieved as σ → 0 and concentrates around D_f(x0, ℝ₊) / (m − D_f(x0, ℝ₊)). Secondly, we consider arg min_x {‖y − Ax‖₂ + λ f(x)}, for
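For f = the ℓ1 norm, the summary parameter D_f(x0, λ) has an explicit form (on the support, the λ-scaled subdifferential is the point λ·sign(x0_i); off the support it is the interval [−λ, λ]) and is straightforward to estimate by Monte Carlo. An illustrative sketch (the function name is ours):

```python
import numpy as np

def l1_gaussian_distance(x0, lam, n_mc=40000, seed=0):
    """Monte-Carlo estimate of D_f(x0, lam) for f = ||.||_1:
    the expected squared distance of a standard Gaussian vector g
    to the lam-scaled subdifferential of the l1 norm at x0."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_mc, x0.size))
    on = x0 != 0
    # On the support: distance to the single point lam * sign(x0_i)
    d_on = (g[:, on] - lam * np.sign(x0[on])) ** 2
    # Off the support: distance to the interval [-lam, lam]
    d_off = np.maximum(np.abs(g[:, ~on]) - lam, 0.0) ** 2
    return (d_on.sum(axis=1) + d_off.sum(axis=1)).mean()
```

Setting λ = 0 recovers E‖g‖² = n, a quick way to validate the estimator before plugging it into the NSE formula above.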
Sparsity Averaging for Compressive Imaging
Abstract

Cited by 8 (3 self)
We propose a novel regularization method for sparse image reconstruction from compressive measurements. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted ℓ1 scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We test our prior and the associated algorithm through extensive numerical simulations for spread-spectrum and Gaussian acquisition schemes suggested by the recent theory of compressed sensing with coherent and redundant dictionaries. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. We also illustrate the performance of SARA in the context of Fourier imaging, for particular applications in astronomy and medicine.
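The reweighted ℓ1 idea at the heart of such schemes can be illustrated in the simplest possible setting (identity dictionary, plain denoising). This is a toy sketch of the reweighting loop only, not SARA itself:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reweighted_l1_denoise(y, lam=0.5, n_reweight=5, tau=1e-3):
    """Repeatedly solve the weighted l1 denoising problem
        min_x 0.5*||y - x||^2 + lam * sum_i w_i |x_i|,
    updating w_i = 1/(|x_i| + tau) so that large coefficients are
    penalized less on the next pass (approximating an l0-like prior)."""
    w = np.ones_like(y)
    x = y.copy()
    for _ in range(n_reweight):
        x = soft_threshold(y, lam * w)   # closed-form weighted solution
        w = 1.0 / (np.abs(x) + tau)
    return x
```

After a few reweighting passes, large coefficients are barely shrunk while small ones are driven exactly to zero, which is precisely the bias reduction that motivates reweighting over a single ℓ1 pass.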
Near optimal compressed sensing of sparse rank-one matrices via sparse power factorization. arXiv preprint arXiv:1312.0525
, 2013
Intersecting singularities for multistructured estimation
Abstract

Cited by 6 (1 self)
We address the problem of designing a convex non-smooth regularizer encouraging multiple structural effects simultaneously. Focusing on the inference of sparse and low-rank matrices, we suggest a new complexity index and a convex penalty approximating it. The new penalty term can be written as the trace norm of a linear function of the matrix. By analyzing theoretical properties of this family of regularizers, we derive oracle inequalities and compressed sensing results ensuring the quality of our regularized estimator. We also provide algorithms and supporting numerical experiments.
Simple bounds for noisy linear inverse problems with exact side information. Available at arXiv.org/abs/1312.0641
, 2013
Model consistency of partly smooth regularizers
, 2014
Abstract

Cited by 5 (4 self)
This paper studies least-squares regression penalized with partly smooth convex regularizers. This class of functions is very large and versatile, allowing one to promote solutions conforming to some notion of low complexity. Indeed, they force solutions of variational problems to belong to a low-dimensional manifold (the so-called model), which is stable under small perturbations of the function. This property is crucial to make the underlying low-complexity model robust to small noise. We show that a generalized “irrepresentable condition” implies stable model selection under small noise perturbations in the observations and the design matrix, when the regularization parameter is tuned proportionally to the noise level. This condition is shown to be almost a necessary condition. We then show that this condition implies model consistency of the regularized estimator: with probability tending to one as the number of measurements increases, the regularized estimator belongs to the correct low-dimensional model manifold. This work unifies and generalizes several previous ones, where model consistency is known to hold for sparse, group-sparse, total variation, and low-rank regularizations. Lastly, we also show that this generalized “irrepresentable condition” implies that the forward-backward proximal splitting algorithm identifies the model after a finite number of steps.
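For the special case f = the ℓ1 norm, the forward-backward splitting mentioned in the last sentence is the classic iterative soft-thresholding scheme, and the "model" it identifies is the support. A sketch under that assumption (illustrative only; the paper covers general partly smooth regularizers):

```python
import numpy as np

def forward_backward_l1(A, y, lam, n_iter=3000):
    """Forward-backward splitting for min_x 0.5*||y - Ax||^2 + lam*||x||_1.
    The forward step is a gradient step on the smooth least-squares term;
    the backward step is the proximal map of lam*||.||_1, i.e. soft
    thresholding.  For partly smooth regularizers like l1, the iterates
    identify the model manifold (here, the support) in finitely many steps."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))                        # forward step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # backward step
    return x
```

With a small regularization parameter and noiseless overdetermined data, the iterates settle onto the true support and the remaining error is just the small ℓ1 shrinkage bias.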