Randomized low-rank approximation of matrices and tensors
Numerical Analysis and High-Performance Computing Group, EPFL, 2017
Low-rank matrix and tensor approximation is a central topic in computational linear algebra, with countless applications in scientific computing and data analysis. In this thesis, we explore the possibility of computing low-rank approximations of matrices and tensors using randomized techniques. In particular, we provide an overview of the existing literature on the topic and derive several extensions to tackle two variants of the compression problem: the computation of low-rank approximations with a fixed rank and with a fixed precision.
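For concreteness, the two variants admit the following standard formulations (the norm is left generic; the Frobenius or spectral norm is typical):

```latex
\min_{\operatorname{rank}(B) \le r} \|A - B\| \quad \text{(fixed rank)}
\qquad\qquad
\min_{\|A - B\| \le \varepsilon} \operatorname{rank}(B) \quad \text{(fixed precision)}
```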
This thesis shows that randomized approaches provide a very appealing alternative to deterministic techniques. In particular, in the case of matrix compression, we describe methods that have approximately the same complexity as the most efficient deterministic algorithms but are preferable because of their robustness.
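As an illustration of the fixed-rank setting, the following is a minimal sketch of the classical randomized range finder with oversampling, in the spirit of the Halko–Martinsson–Tropp randomized SVD; the function name and the oversampling parameter p are illustrative, not taken from the thesis.

```python
import numpy as np

def randomized_svd(A, r, p=10):
    """Rank-r approximation of A from a Gaussian sketch with oversampling p."""
    m, n = A.shape
    Omega = np.random.randn(n, r + p)          # Gaussian test matrix
    Y = A @ Omega                              # sample the range of A
    Q, _ = np.linalg.qr(Y)                     # orthonormal basis, m x (r+p)
    B = Q.T @ A                                # small (r+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub[:, :r]                          # lift back to the large space
    return U, s[:r], Vt[:r, :]                 # A is approximately U diag(s) Vt
```

The fixed-precision variant can be handled similarly by growing the sketch adaptively until an error estimate falls below the tolerance, rather than fixing r + p in advance.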
Regarding tensors, this thesis focuses on the Tensor Train (TT) format, which generalizes the low-rank decomposition of matrices to higher dimensions. In particular, we derive a randomized approach that outperforms the state-of-the-art deterministic algorithm for compressing tensors in the TT format by exploiting the efficiency with which certain operations can be carried out in this format.
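To convey the idea, the sketch below combines the sequential TT-SVD sweep with a Gaussian sketch of each unfolding; it is one illustrative randomized variant under assumed names and a uniform rank cap r, not the thesis's exact algorithm.

```python
import numpy as np

def tt_random_compress(X, r):
    """Compress a d-way array X into TT cores with TT-ranks capped at r,
    sketching each unfolding with a Gaussian matrix instead of a full SVD."""
    dims = X.shape
    d = len(dims)
    cores = []
    rank_prev = 1
    C = X.reshape(rank_prev * dims[0], -1)     # first unfolding
    for k in range(d - 1):
        rk = min(r, min(C.shape))              # cap the TT-rank
        Omega = np.random.randn(C.shape[1], rk)
        Q, _ = np.linalg.qr(C @ Omega)         # approximate range basis
        cores.append(Q.reshape(rank_prev, dims[k], rk))
        C = (Q.T @ C).reshape(rk * dims[k + 1], -1)
        rank_prev = rk
    cores.append(C.reshape(rank_prev, dims[-1], 1))
    return cores                               # list of 3-way TT cores
```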
Moreover, this thesis extends these procedures to the compression of Hadamard products of matrices and of tensors, which are fundamental operations in several linear algebra algorithms. In both cases, we devise randomized techniques that outperform their deterministic counterparts. We also show that randomization leads to a reduced memory footprint.
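To illustrate why randomization helps here, the sketch below applies the Hadamard product of two low-rank matrices A = U1 V1^T and B = U2 V2^T to vectors without ever forming the full product, and then compresses it with a randomized range finder. This is one natural way to exploit the factored structure, assumed for illustration rather than taken from the thesis; all names and the parameter p are hypothetical.

```python
import numpy as np

def hadamard_matvec(U1, V1, U2, V2, x):
    """Apply (U1 @ V1.T) * (U2 @ V2.T) to x without forming the full product."""
    y = np.zeros(U1.shape[0])
    for k in range(U1.shape[1]):            # loop over the rank of the first factor
        z = V2.T @ (V1[:, k] * x)           # small r2-dimensional intermediate
        y += U1[:, k] * (U2 @ z)            # second factor acts on the weighted vector
    return y

def compress_hadamard(U1, V1, U2, V2, r, p=10):
    """Randomized rank-(r+p) factorization of the Hadamard product; a small
    SVD of M can truncate it further to rank r if desired."""
    n = V1.shape[0]
    Omega = np.random.randn(n, r + p)       # Gaussian test matrix
    Y = np.column_stack([hadamard_matvec(U1, V1, U2, V2, Omega[:, j])
                         for j in range(r + p)])
    Q, _ = np.linalg.qr(Y)                  # orthonormal range basis
    # Q.T @ (A * B): apply the transposed product (V1 U1.T) * (V2 U2.T) to Q
    M = np.column_stack([hadamard_matvec(V1, U1, V2, U2, Q[:, j])
                         for j in range(r + p)]).T
    return Q, M                             # A * B is approximately Q @ M
```

The memory point is visible in the sketch: only the low-rank factors and the (r+p)-column sketches are ever stored, never the full product.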
To corroborate all these claims, we provide extensive numerical experiments.