Description
Tensor decompositions have become a central tool in data science, with applications in data analysis, signal processing, and machine learning. A key property of many tensor decompositions, such as the canonical polyadic decomposition, is identifiability: the factors are unique, up to trivial scaling and permutation ambiguities, which allows one to recover the ground-truth sources that generated the data. The Tucker decomposition (TD) is a central and widely used tensor decomposition model; however, it is in general not identifiable.

In this talk, we first introduce and motivate matrix and tensor decomposition models, with a focus on nonnegative matrix factorization (NMF) and the nonnegative Tucker decomposition (NTD). Then, we study the identifiability of NTD. For order-2 tensors, that is, matrices, NTD is equivalent to a nonnegative tri-factorization model. By adapting and extending identifiability results for NMF, we provide uniqueness results for order-2 NTD: the conditions require the nonnegative matrix factors to have some degree of sparsity (namely, to satisfy the sufficiently scattered condition), while the core matrix only needs to be full rank. We extend this result to order-3 tensors, where the nonnegative matrix factors must satisfy the same sufficiently scattered condition, while the core tensor only needs to have certain slices (or linear combinations of them) or unfoldings with full column rank. We also discuss how this result can be extended to higher-order tensors.

Finally, we propose an efficient algorithm to compute these unique NTDs, which we illustrate on synthetic and real data.
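For reference, the model at the heart of the talk can be written down explicitly. In standard notation (the symbols below are generic placeholders, not taken from the talk), the order-3 Tucker decomposition of a tensor with core G and factor matrices A, B, C reads

    \mathcal{T} \approx \mathcal{G} \times_1 A \times_2 B \times_3 C,
    \qquad \mathcal{T}_{ijk} \approx \sum_{p,q,r} \mathcal{G}_{pqr}\, A_{ip} B_{jq} C_{kr},

and its order-2 analogue is the tri-factorization X \approx A G B^{\top}. In NTD, the factor matrices (and typically the core) are additionally constrained to be entrywise nonnegative.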
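As a concrete illustration of the model (not of the algorithm proposed in the talk), here is a minimal sketch that builds a synthetic nonnegative tensor from sparse nonnegative ground-truth factors and fits an NTD with the off-the-shelf TensorLy routine non_negative_tucker; the sizes, ranks, and sparsity level are arbitrary choices for the example.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_tucker

    rng = np.random.default_rng(0)

    # Ground-truth NTD: sparse nonnegative factors (sparsity is what makes the
    # sufficiently scattered condition plausible) and a generic dense core.
    A = rng.random((30, 4)) * (rng.random((30, 4)) > 0.5)
    B = rng.random((30, 4)) * (rng.random((30, 4)) > 0.5)
    C = rng.random((30, 4)) * (rng.random((30, 4)) > 0.5)
    G = rng.random((4, 4, 4))
    T = tl.tucker_to_tensor((G, [A, B, C]))  # 30 x 30 x 30 nonnegative tensor

    # Fit an NTD of multilinear rank (4, 4, 4).
    core, factors = non_negative_tucker(tl.tensor(T), rank=[4, 4, 4],
                                        n_iter_max=500, tol=1e-8)
    rel_err = tl.norm(T - tl.tucker_to_tensor((core, factors))) / tl.norm(T)
    print(f"relative reconstruction error: {rel_err:.3e}")

Note that comparing the fitted factors to A, B, C is only meaningful up to the scaling and permutation ambiguities mentioned above, and only when identifiability conditions such as those discussed in the talk hold.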